entry_id | published | title | authors | primary_category | categories | text
---|---|---|---|---|---|---
http://arxiv.org/abs/2307.05396v1 | 20230711155715 | Handwritten Text Recognition Using Convolutional Neural Network | [
"Atman Mishra",
"A. Sharath Ram",
"Kavyashree C"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
Handwritten Text Recognition Using Convolutional Neural Network
Atman Mishra
Dept. of AIML
New Horizon College of Engineering
Bangalore, India
[email protected]
A. Sharath Ram
Dept. of AIML
New Horizon College of Engineering
Bangalore, India
[email protected]
Kavyashree C.
Dept. of AIML
New Horizon College of Engineering
Bangalore, India
[email protected]
Received / Accepted
==============================================
OCR (Optical Character Recognition) is a technology that offers comprehensive alphanumeric recognition of handwritten and printed characters at electronic speed by merely scanning the document. Recently, the understanding of visual data has been termed Intelligent Character Recognition (ICR). ICR is the OCR module that can convert scans of handwritten or printed characters into ASCII text. ASCII is the standard format for data encoding in electronic communication; it assigns standard numeric values to letters, numerals, symbols, white-spaces and other characters. In more technical terms, OCR is the process of using an electronic device to transform 2-dimensional textual information into machine-encoded text. Anything that contains text, whether machine-printed or handwritten, can be scanned with a scanner, or a simple picture of the text is enough for the recognition system to distinguish the characters.
The goal of this paper is to present the results of a Convolutional Neural Network model trained on the National Institute of Standards and Technology (NIST) dataset, which contains over 100,000 images. The network learns from the features extracted from the images and uses them to generate the probability of each class to which a picture may belong. We have achieved an accuracy of 90.54% with a loss of 2.53%.
Neural Networks, OCR, Convolution, Pooling, Regularisation, Pre-processing, Output Layers
§ INTRODUCTION
In the past three decades, much work has been devoted to handwritten text recognition, which is used to convert human-readable handwritten language into machine-readable codes. Handwritten text recognition has attracted a great deal of interest because it provides a method for automatically processing enormous quantities of handwritten data in a variety of scientific and business applications. The underlying problem with handwritten text has been that various individuals' representations of the same character are not identical. An additional difficulty experienced while attempting to decipher English handwritten characters is the variance in personal writing styles and situational differences in a person's writing style. In addition, the writer's disposition and writing environment may influence writing styles.
The complexity of optical pattern recognition becomes apparent only when one attempts to create a computer system that can understand handwriting. The strategy using artificial neural networks is considered the most effective for building handwriting recognition systems. Neural networks help to model, in an efficient manner, how the human brain operates when identifying handwritten language, and they enable machines to interpret handwriting on par with or better than human ability. Humans use a variety of writing styles, many of which are difficult to read, and reading handwriting can be time-consuming and difficult, particularly when one must examine several documents handwritten by different people. Neural networks are the best choice for the proposed system since they can extract meaning from complicated data and spot trends that are hard to detect manually or with other methods. The primary goal of this project is to create a model based on the Convolutional Neural Network that can recognize handwritten digits and characters from a picture. We have built a simple Convolutional Neural Network (CNN) system trained on the NIST dataset.
§ RELATED WORK
Many researchers have tried to develop handwritten text recognition models in the past, but none of them is perfectly accurate, and the field still requires much more research.
M. Brisinello et al [1] proposed a pre-processing method which improves Tesseract OCR 4.0’s performance by approximately 20%. They implemented a two-step process which involves clustering of input images and a classifier which identifies whether the images contain text or not.
In [2], neural networks are used to sample the pixels in the image into a matrix and match them to a known pixel matrix. It achieved an astounding accuracy of 95.44%.
In [3], the Tesseract OCR Engine, an open-source OCR engine originally developed by HP, is used and achieves an accuracy of 95.305%.
A different approach is implemented in [4], which uses an RNN and an embedded system for character recognition of Devanagari, a script shared by around 120 Indian languages.
Methods based on confusion matrices were employed in the study conducted by Yuan-Xiang Li et al [5]. The approach starts with the original candidates, uses them to conjecture which characters are most likely to be correct, and then combines the postulated set with the original candidates to obtain a new candidate set.
In [6] by T. Sari et al., character segmentation is incorporated in a different manner. In many OCR systems, character segmentation is a prerequisite step for character recognition; it is crucial because poorly segmented characters are unlikely to be correctly recognised.
§ CONVOLUTIONAL NEURAL NETWORK
In the field of Deep Learning, Convolutional Neural Networks (CNNs) are a type of Artificial Neural Network (ANN) frequently used to analyse visual data, such as photographs and videos, in order to discover patterns or trends in that data. A multilayer perceptron is typically a fully connected network, and a CNN can be seen as a simplified, more structured version of this type of architecture. A CNN is made up of a variety of components and operations, such as convolution, pooling and fully connected neural network layers.
§.§ Neural Networks
Neural Networks, or Artificial Neural Networks, are computerized systems developed to replicate the way an animal brain works. ANNs are usually built to perform a certain limited task, but they can be trained to perform difficult tasks at a level that matches or exceeds human performance.
Neural networks are composed of the three types of layers shown in Fig 1:
* Input Layer
* Hidden Layer
* Output Layer
Every artificial neuron receives an input and produces a single output that can be spread to a number of other artificial neurons. The feature values of a sample of external data, such as photographs, can serve as the inputs; alternatively, the outputs of other neurons can serve in this capacity. The outputs of the final output neurons of the neural network accomplish the objective, such as recognising an object in an image. Neural networks are also frequently described as weighted graphs. Each connection between neurons has a weight 'W', and a bias 'b' is added in order to form a weighted input. The weighted input is then used in an activation function present in the neuron in order to introduce non-linearity in the output of the neuron.
An activation function is a mathematical function employed in the hidden layers that uses the weighted input and bias to determine whether or not the particular neuron will be activated. The activation function also applies a non-linear transformation to the input, which enables the system to learn more complex relationships in the data. The output of the activation function is then treated as an input to the neurons of the following layer. The above-mentioned process for a single neuron is shown in Fig 2:
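To make the weighted-input computation concrete, the following minimal Python/NumPy sketch evaluates a single neuron. The function name, the example values and the choice of ReLU as the activation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def neuron_output(x, w, b):
    """Single neuron: weighted input z = w.x + b passed through a ReLU activation."""
    z = np.dot(w, x) + b      # weighted input (weights W, bias b)
    return max(0.0, z)        # ReLU non-linearity

# Example with three inputs (values chosen arbitrarily for illustration)
print(neuron_output(np.array([0.5, -1.0, 2.0]), np.array([0.1, 0.4, 0.3]), b=0.05))
```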
§.§ Input
A basic Convolutional Neural Network uses image matrices as input. Images are stored in a system in the form of numerical matrices. For example, a colored image of dimensions 1920 × 1080 is stored as a 3D matrix of size (1920, 1080, 3), where 1920 is the width, 1080 is the height and 3 is the number of color channels (RGB). Each cell of the image matrix contains the RGB value of the corresponding pixel of the original image, as shown in Fig 3. The image matrix is given as input to the neural network for further processing, which involves steps such as feature extraction using convolution and pooling for faster processing.
§.§ Convolution
Convolution is a mathematical operation performed on two functions in order to generate a third function that expresses how the shape of one function is modified by the other. In other terms, convolution refers to the point-wise multiplication and accumulation of two functions to create a third function. In this case, one of the functions is the image pixel matrix and the other is the filter. A filter is a small matrix used to extract features from the input image matrix; the matrix produced by the convolution operation is called a feature map.
In the mathematical form,
y[i,j]=∑_m=-∞^∞∑_n=-∞^∞ h[m,n] · x[i-m,j-n]
where h is the filter, x is the input image, and m and n index the rows and columns respectively.
The convolution process for matrices is shown in Fig 4.
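As a concrete illustration of the point-wise multiply-and-sum described above, the following Python/NumPy sketch convolves a single-channel image with a small filter using 'valid' padding. The function name and the random inputs are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Discrete 2-D convolution of a single-channel image with a small filter."""
    kh, kw = kernel.shape
    flipped = np.flip(kernel)                  # true convolution flips the kernel
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * flipped)
    return feature_map

print(conv2d_valid(np.random.rand(32, 32), np.random.rand(3, 3)).shape)  # (30, 30)
```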
§.§ Pooling
The process of pooling entails sliding a small two-dimensional window over the feature map in an effort to reduce the dimensions of the map without losing knowledge of the features located in each region. Pooling reduces the quantity of parameters that must be learned, which in turn makes the computation more efficient. Pooling can be broken down into two main categories: max pooling and average pooling. Max pooling outputs the highest value in each region, whereas average pooling outputs the average of all the values in the region, as illustrated in Figure 5.
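A minimal NumPy sketch of non-overlapping 2 x 2 pooling is given below; the helper name and the small example feature map are assumptions made for illustration.

```python
import numpy as np

def pool2d(feature_map, size=2, mode="max"):
    """Non-overlapping pooling over a single-channel feature map."""
    h, w = feature_map.shape[0] // size, feature_map.shape[1] // size
    blocks = feature_map[:h * size, :w * size].reshape(h, size, w, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

fmap = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(fmap, mode="max"))   # maximum of each 2x2 region
print(pool2d(fmap, mode="avg"))   # average of each 2x2 region
```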
§.§ Regularization
Regularization is a set of techniques used to decrease the complexity of the neural network during training in order to avoid overfitting the model. Three types of regularization are commonly used: L1, L2 and Dropout. We use the Dropout regularization method, which drops (turns off) random nodes in the network according to some probability P, thereby reducing the complexity of the neural network and avoiding overfitting. The Dropout process is shown in Fig 6.
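The sketch below shows one common way ('inverted' dropout) to drop random nodes with probability P, assuming NumPy; the rescaling of the surviving activations is an implementation convention rather than something specified in the paper.

```python
import numpy as np

def dropout(activations, p=0.5, training=True, seed=0):
    """Inverted dropout: zero each unit with probability p and rescale the rest."""
    if not training or p == 0.0:
        return activations
    rng = np.random.default_rng(seed)
    keep_mask = rng.random(activations.shape) >= p
    return activations * keep_mask / (1.0 - p)

print(dropout(np.ones(8), p=0.5))
```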
§.§ Hidden and Output layers
Hidden layers, also called fully connected layers here, use a non-linear activation function to apply a non-linear transformation to the flattened (1-D) input. Common activation functions are ReLU (Rectified Linear Unit), tanh and sigmoid. The output layer uses a probabilistic function such as sigmoid (logistic), softmax or linear to produce a probability for each class the network is classifying.
§ SYSTEM DESIGN
§.§ Dataset
The dataset used in this project has been contributed by NIST (National Institute of Standards and Technology) and contains a total of 101,784 images across 47 classes. The 47 classes include 0-9, A-Z and the lowercase letters a, b, d, e, f, g, h, n, q, r, t. Only a few of the lowercase letters are included because of the similarity between lowercase and uppercase forms, which reduces the complexity of model training. A sample from the dataset is shown in the figure below.
§.§ Preprocessing
The images in the dataset have dimensions 128 x 128, which would greatly increase the number of parameters during training and result in longer training times. To avoid this issue, the images are rescaled to 32 x 32, and the images and labels are converted into two NumPy arrays. A resized image uses less CPU and GPU resources because there are fewer pixels to process. The images are already grey-scale, so no extra filters need to be applied. The resulting dimensions of a single image are 32 x 32 x 1, where 1 represents the single grey-scale channel.
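A possible preprocessing sketch in Python is shown below, using OpenCV for resizing. The file-loading interface and the scaling of pixel values to [0, 1] are assumptions; the paper only specifies the 128 x 128 to 32 x 32 resize and the conversion to NumPy arrays.

```python
import cv2            # pip install opencv-python
import numpy as np

def load_and_resize(image_paths, labels, size=(32, 32)):
    """Read grey-scale images, resize them to 32 x 32 and stack into NumPy arrays."""
    images = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # NIST images are already grey-scale
        img = cv2.resize(img, size)                    # fewer pixels -> faster training
        images.append(img.astype("float32") / 255.0)   # value scaling is an assumption
    X = np.expand_dims(np.array(images), axis=-1)      # shape (N, 32, 32, 1)
    y = np.array(labels)
    return X, y
```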
§.§ Splitting into Training and Testing Dataset
Training and testing datasets are created from the full dataset using a 70:30 split. The training dataset contains 71,249 images and the testing dataset contains 30,535 images. Both datasets are shuffled using a random permutation to reduce variance.
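A sketch of the 70:30 split with shuffling follows, assuming scikit-learn is available and that X and y are the arrays produced by preprocessing; the stratification and the fixed random seed are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
perm = rng.permutation(len(X))          # shuffle with a random permutation
X, y = X[perm], y[perm]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)   # 70:30 split
print(len(X_train), len(X_test))        # roughly 71,249 and 30,535 images
```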
§.§ CNN Model
The CNN model used for this project differs from available pre-trained CNN architectures such as ResNet and GoogLeNet. It is implemented as a custom-built architecture, which is displayed in figure <ref>.
Three convolution and pooling layers are used. The first convolutional layer uses a total of 1024 filters of size 5 x 5, the second uses 512 filters of size 3 x 3, and the third uses 256 filters of size 3 x 3. The activation function used in these layers is ReLU (Rectified Linear Unit), which has a range from 0 to ∞. A dropout regularisation layer and a flatten layer are added. In the fully connected part, two layers containing 256 and 128 neurons respectively are connected to the 47 output neurons.
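A Keras sketch of the described architecture follows. The pooling sizes, the dropout rate and the placement of the dropout layer are assumptions; the filter counts, kernel sizes, dense-layer widths and the 47-way softmax output follow the description above and in the next subsection.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(1024, (5, 5), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(512, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(256, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.5),                       # dropout rate assumed
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(47, activation="softmax"),    # one probability per class
])
model.summary()
```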
§.§ Output Layer
In order to provide a prediction of the image's category, the output layer makes use of the softmax probabilistic function. Categorical cross-entropy is used as the loss function together with the Adam optimizer, an adaptive learning algorithm that employs individualised learning rates for each of the parameters.
§.§ Train the Model
The CNN model is trained for 20 epochs, with 357 steps per epoch. Figures 9 and 10 show plots of training accuracy against validation accuracy and training loss against validation loss.
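Compiling and training the model could look like the sketch below, which assumes the Keras model defined earlier, one-hot encoded labels and a batch size of 200 (71,249 / 200 gives roughly 357 steps per epoch); the batch size is inferred, not stated in the paper.

```python
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

y_train_oh = keras.utils.to_categorical(y_train, num_classes=47)
y_test_oh = keras.utils.to_categorical(y_test, num_classes=47)

history = model.fit(X_train, y_train_oh,
                    validation_data=(X_test, y_test_oh),
                    epochs=20, batch_size=200)   # about 357 steps per epoch
```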
§ RESULTS
This paper discusses the use of a convolutional neural network model to recognize handwritten English alphabetic and numeric characters. Though many more advanced techniques have since been developed for this problem, CNN, being the predecessor of these techniques, gives satisfying results. The model developed for this paper gives nearly accurate predictions, with an accuracy of 90.54% and a loss of 2.53%. A learning rate of 0.001 is used for better performance.
The model has been evaluated on a few random photos of handwritten characters. The sample images used for testing are shown in figure 10, and the results are shown in the table above. The model has not been perfected yet, so a noticeable error can be seen in the table for the class 't', which is wrongly predicted as 'T'.
The receiver operating characteristic (ROC) curves for a variety of classes are shown in Figures 11 through 14. The ROC curve is a graph depicting the performance of the classification model at different thresholds; it is based on the True Positive Rate, or recall, and the False Positive Rate.
TPR = TP / (TP + FN)
FPR = FP / (FP + TN),
where TP = True Positive, FP = False Positive, TN = True Negative, FN = False Negative.
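These rates can be computed directly from confusion-matrix counts, as in the small Python sketch below; the example counts are hypothetical.

```python
def tpr_fpr(tp, fp, tn, fn):
    """True-positive rate (recall) and false-positive rate from confusion-matrix counts."""
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return tpr, fpr

print(tpr_fpr(tp=90, fp=5, tn=95, fn=10))   # hypothetical counts for one class
```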
Precision refers to the percentage of predicted positive cases that actually turn out to be positive. Precision is helpful when the cost of a false positive is more significant than the cost of a false negative. Recall refers to the percentage of real positive cases that the model correctly predicts; it is a helpful statistic in situations where the cost of a false negative is more significant than that of a false positive. For a given average of precision and recall, their harmonic mean (the F1 score) reaches its maximum value when the two are equal.
The AUC (Area Under the Curve) measures the two-dimensional area under the ROC curve and offers an overall performance measurement across all classification thresholds. A higher AUC indicates better classification performance for the corresponding class. It is clear from the ROC curve figures that the classes "1," "I," "J," "S," "T," and "f" have low AUCs for their curves. The visual similarity between these characters is a likely cause of this error.
§ CONCLUSION
Based on all of the evaluation metrics presented above, it is possible to conclude that the Convolutional Neural Network is an effective method for solving the problem of handwritten character recognition: it is simple to implement and produces high levels of accuracy in its predictions. Although it might not be the most effective recognition algorithm available, it gets the job done.
b1 M. Brisinello, R. Grbic, M. Pul and T. Andelic, "Improving optical character recognition performance for low quality images," 2017 International Symposium ELMAR, pp. 167-171, September 2017.
b2 N. Rao, A.S.C.S. Sastry, Chakravarthy and P. Kalyanchakravarthi, "Optical character recognition technique algorithms," vol. 83, pp. 275-282, 2016.
b3 R. Smith, D. Antonova and D.-S. Lee, "Adapting the Tesseract Open Source OCR Engine for Multilingual OCR," Proceedings of the International Workshop on Multilingual OCR, 2009.
b4 S. Srivastava, A. Verma and S. Sharma, "Optical Character Recognition Techniques: A Review," 2022 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS), 2022, pp. 1-6, doi: 10.1109/SCEECS54111.2022.9740911.
b5 Yuan-Xiang Li and Chew Lim Tan, "An empirical study of statistical language models for contextual post-processing of Chinese script recognition," Ninth International Workshop on Frontiers in Handwriting Recognition, 2004, pp. 257-262, doi: 10.1109/IWFHR.2004.15.
b6 N. Farah, L. Souici and M. Sellami, "Arabic Word Recognition by Classifiers and Context," Journal of Computer Science and Technology 20, pp. 402-410, 2005. https://doi.org/10.1007/s11390-005-0402-9.
b7 A. A. Chandio, M. Leghari, D. Hakro, S. Awan and A. H. Jalbani, "A Novel Approach for Online Sindhi Handwritten Word Recognition using Neural Network," Sindh University Research Journal SURJ (Science Series), 48(1), 2016.
b8 Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard and L. D. Jackel, "Backpropagation applied to handwritten ZIP code recognition," Neural Computation, vol. 1, issue 4, pp. 541-551, 1989.
b9 W. Xue, X. Dai and L. Liu, "Remote sensing scene classification based on multi-structure deep features fusion," IEEE Access, vol. 8, pp. 28746-28755, 2020.
b10 Yuanping Zhu, Jun Sun and Satoshi Naoi, "Recognizing natural scene characters by convolutional neural network and bimodal image enhancement," in Camera-Based Document Analysis and Recognition, pp. 69-82, Springer, 2012.
b11 Chirag I. Patel, Ripal Patel and Palak Patel, "Handwritten character recognition using neural network," International Journal of Scientific and Engineering Research, vol. 2, May 2011.
b12 M. Husnain, M. M. S. Missen, S. Mumtaz, M. Z. Jhanidr, M. Coustaty, M. M. Luqman, et al., "Recognition of Urdu Handwritten Characters Using Convolutional Neural Network," Applied Sciences, 9(13):1-21, 2019.
b13 K. Jemimah, "Recognition of Handwritten Characters based on Deep Learning with TensorFlow," School of Computer Science and Engineering, Bharathidasan University, Trichy, India, International Research Journal of Engineering and Technology (IRJET), 2019, pp. 1164-1165.
b14 Datong Chen, H. Bourlard and J.-P. Thiran, "Text identification in complex background using SVM," Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), 2001, pp. II-II, doi: 10.1109/CVPR.2001.991021.
b15 Jiangying Zhou, Daniel P. Lopresti and Tolga Tasdizen, "Finding text in color images," Proc. SPIE 3305, Document Recognition V, 1 April 1998.
b16 R. Vaidya, D. Trivedi, S. Satra and P. M. Pimpale, "Handwritten Character Recognition Using Deep-Learning," 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT), 2018, pp. 772-775, doi: 10.1109/ICICCT.2018.8473291.
|
http://arxiv.org/abs/2307.04692v1 | 20230710164454 | Spoofing-Resilient LiDAR-GPS Factor Graph Localization with Chimera Authentication | [
"Adam Dai",
"Tara Minda",
"Ashwin Kanhere",
"Grace Gao"
] | eess.SP | [
"eess.SP",
"cs.RO",
"cs.SY",
"eess.SY"
] |
Spoofing-Resilient LiDAR-GPS Factor Graph Localization with Chimera Authentication
The views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of
Defense or of the United States Air Force. Statement from DoD: The appearance of external hyperlinks does not constitute endorsement by the United States
Department of Defense (DoD) of the linked websites, or the information, products, or services contained therein. The DoD does not exercise any editorial,
security, or other control over the information you may find at these locations.
Adam Dai
Electrical Engineering
Stanford University
Stanford, USA
[email protected]
Tara Mina
Electrical Engineering
Stanford University
Stanford, USA
[email protected]
Ashwin Kanhere
Aeronautics and Astronautics
Stanford University
Stanford, USA
[email protected]
Grace Gao
Aeronautics and Astronautics
Stanford University
Stanford, USA
[email protected]
August 12, 2023
==============================================
Many vehicle platforms typically use sensors such as LiDAR or camera for locally-referenced navigation with GPS for globally-referenced navigation.
However, due to the unencrypted nature of GPS signals, all civilian users are vulnerable to spoofing attacks, where a malicious spoofer broadcasts fabricated signals and causes the user to track a false position fix.
To protect against such GPS spoofing attacks, Chips-Message Robust Authentication (Chimera) has been developed and will be tested on the Navigation Technology Satellite 3 (NTS-3) satellite being launched later this year.
However, Chimera authentication is not continuously available
and may not provide sufficient protection for vehicles which rely on more frequent GPS measurements.
In this paper, we propose a factor graph-based state estimation framework which integrates LiDAR and GPS while simultaneously detecting and mitigating spoofing attacks experienced between consecutive Chimera authentications.
Our proposed framework combines GPS pseudorange measurements with LiDAR odometry to provide a robust navigation solution.
A chi-squared detector, based on pseudorange residuals, is used to detect and mitigate any potential GPS spoofing attacks.
We evaluate our method using real-world LiDAR data from the KITTI dataset and simulated GPS measurements, both nominal and with spoofing.
Across multiple trajectories and Monte Carlo runs, our method consistently achieves position errors under 5 m during nominal conditions, and successfully bounds positioning error to within odometry drift levels during spoofed conditions.
GPS, spoofing, LiDAR, sensor fusion, Chimera, factor graphs
§ INTRODUCTION
Localization is a fundamental task for vehicle-related applications, such as autonomous driving or precision farming. Currently, state-of-the-art vehicle localization relies on sensor fusion, as various sensors possess different tradeoffs and advantages. Centimeter-level localization of self-driving cars has been demonstrated with fusion of LiDAR (Light Detection and Ranging), vision, and GPS, with evaluation on a real-world fleet of cars <cit.>. In particular, LiDAR and GPS have complementary advantages. LiDAR localization and odometry works well in structured environments, but struggles in empty areas lacking spatial features. Conversely, GPS struggles in structured environments due to signal blockage and multipath, but excels in open-sky conditions.
However, GPS is vulnerable to spoofing, in which an attacker transmits fabricated GPS signals at higher power than the real signals, causing the victim to lock on to the fake signals. The attacker can then induce arbitrary errors to the victim’s GPS position estimate. For a vehicle running sensor fusion with GPS, these errors will propagate to the localization solution, compromising the safety of humans onboard or near the vehicle. Indeed, this vulnerability has been demonstrated in recent work, in which a well-designed GPS spoofing attack is able to cause an autonomous vehicle to crash with 97% success rate <cit.>.
As a countermeasure to GPS spoofing, the Air Force Research Lab (AFRL) has proposed the Chips-Message Robust Authentication (Chimera) signal enhancement for the GPS L1C signal <cit.>. The Chimera signal enhancement punctures the L1C spreading code in the pilot channel with encrypted markers, which cannot be predicted beforehand, but can be verified via a digital signature provided to the user with a short latency. For standalone receivers, authentication is available every 3 minutes through the slow channel, while users with access to secure Internet connection or an augmentation system can receive authentication every 1.5 or 6 seconds through the fast channel. The time duration between consecutive Chimera authentications is referred to as the Chimera epoch.
Nevertheless, for either the slow or fast channel, the Chimera authentication service is not continuously available. For applications such as self-driving, 1.5 seconds can easily make the difference between staying safe and crashing. Furthermore, for users relying on the slow channel, an attacker would have a large window of time to introduce spoofing errors. Prior works have addressed this issue through spoofing mitigation between Chimera authentications <cit.>. These works use IMUs (inertial measurement units) and wheel encoders as trusted (i.e. unaffected by spoofing) sensors for fusion with GPS. However, the problem of spoofing detection and mitigation has yet to be explored for LiDAR-GPS sensor fusion.
§.§ Contributions
In this work, we develop a novel spoofing detection and mitigation framework for LiDAR-GPS sensor fusion. This problem has received little attention in prior literature, and to the best of our knowledge, our solution is the first to examine the problem in the context of Chimera signal enhancement.
When combined with Chimera authentications, our approach mitigates the localization error induced by a spoofer over the Chimera epoch, which we experimentally validate using real LiDAR and simulated GPS measurements.
The key contributions of this work are:
* We perform tightly-coupled factor graph optimization with LiDAR odometry and GPS pseudoranges for accurate vehicle localization within the Chimera epoch.
* During the Chimera epoch, we use a chi-squared detector to determine the authenticity of GPS measurements. When measurements are deemed unauthentic, we mitigate the effects of the spoofing attack by relying on LiDAR odometry and excluding GPS.
* We validate our approach experimentally for the 3-minute Chimera slow channel, using real-world LiDAR measurements from the KITTI dataset and simulated GPS measurements. During nominal conditions, our approach maintains accuracy comparable to baseline methods. During spoofed conditions, our approach demonstrates consistent detection and mitigation of the attack across various trajectories and spoofing attacks.
To the best of our knowledge, we believe this is the first spoofing detection and mitigation approach for tightly-coupled GPS factor graph optimization.
§.§ Paper Organization
The remainder of this paper is organized as follows. Section <ref> surveys relevant literature to this work. Section <ref> introduces the problem statement and notation, and provides background on pose representation and factor graph optimization. Section <ref> details our factor graph optimization and spoofing detection and mitigation framework. Section <ref> describes the setup and parameters for experimental validation, Section <ref> presents the experimental results, and Section <ref> concludes this paper.
§ RELATED WORK
Our work bridges the areas of LiDAR-GPS sensor fusion and spoofing detection and mitigation in the context of Chimera GPS. In this section, we discuss existing approaches for LiDAR-GPS sensor fusion, followed by prior works addressing spoofing detection and resilient estimation.
§.§ LiDAR-GPS Sensor Fusion
LiDAR-GPS sensor fusion approaches can be broadly separated in two main categories: filtering-based and graph-based.
Filtering-based approaches rely on recursive Bayesian estimation as the underlying state estimation and fusion framework.
The most notable examples of Bayesian filters are the Kalman filter (KF), Extended Kalman Filter (EKF), and the Particle Filter (PF).
Several current state-of-the art LiDAR-GPS sensor fusion approaches rely on filtering <cit.>.
Graph-based approaches encode vehicle states and sensor observations into a graph data structure, and employ graph optimization to solve for the optimal trajectory.
Over the past decade, there has been continually growing interest in factor graph optimization (FGO) for sensor fusion localization.
Recently, Wen et al. <cit.> compared EKF and FGO localization approaches in a GPS challenged environment, and found that the FGO outperformed the EKF for the tightly-coupled case, in which GPS pseudoranges are incorporated directly into the graph.
The authors also showed that tightly-coupled FGO outperforms the loosely-coupled alternative, in which GPS position measurements are used as factors rather than pseudoranges.
Successful graph-based integrations of LiDAR and GPS have also been explored.
Chen et al. <cit.> present a Bayesian graph for fusion of LiDAR, GPS, and 3D building maps in order to localize a UAV in an urban environment.
The authors demonstrate significant improvement over a GPS-only Kalman filter approach, but the method relies on map matching with existing 3D building models to achieve accurate localization.
He et al. <cit.> also leverage graph optimization to fuse LiDAR, IMU and GPS.
The authors evaluate their method on the KITTI dataset <cit.>, outperforming state-of-the-art LiDAR odometry approaches with meter-level accuracy, while also demonstrating their algorithm can be run in real-time at low latency.
§.§ Spoofing Detection and Resilient State Estimation
However, none of the above LiDAR-GPS sensor fusion works address the vulnerability of GPS to spoofing attacks.
In 2020, Shen et al. <cit.> demonstrated a spoofing method which is able to exploit the sensor fusion algorithm of <cit.>, and induce large lateral deviations to the vehicle's state estimate, and consequently to the actual trajectory, during periods of low confidence.
With just 2 minutes of attack time, the spoofing algorithm is able to induce dangerous vehicle behavior with a 97% success rate.
Outside of sensor fusion, GPS spoofing attack methods and detection strategies have received much attention <cit.>.
However, many detection strategies make assumptions about receiver capabilities or require additional functionality, such as multiple antennas.
Chimera is the first proposed authentication service for GPS signals and is set to be tested onboard the NTS-3 (Navigation Technology Satellite 3) platform scheduled for launch in 2023 <cit.>.
Very recently, some works have begun to address the problem of spoofing-resilient GPS sensor fusion with Chimera.
Mina et al. <cit.> present a spoofing-resilient filter for continuous state estimation between Chimera authentications, which leverages IMU and wheel encoders as self-contained sensors to determine the trustworthiness of received GPS signals.
Kanhere et al. <cit.> use FGO to combine GPS, IMU and wheel odometry to perform spoofing mitigation with Chimera authentication.
The authors model the authentication state as switchable constraints <cit.> in the graph, and test their method on simulated trajectories using the fast channel authentication period of 6 seconds.
Our approach extends upon these prior works, integrating elements of the chi-squared detection scheme from prior works <cit.> into a graph formulation.
Furthermore, while the factor graph approach of <cit.> only performs implicit mitigation, and is only evaluated for fast-channel application on a short (12 s) trajectory, our approach performs explicit detection and mitigation and is evaluated on trajectories spanning the slow-channel Chimera epoch of 3 minutes.
Additionally, our approach uses a tightly-coupled factor graph with GPS pseudorange factors, which has been found to outperform the loosely-coupled version in localization accuracy in prior work <cit.>.
Finally, we incorporate LiDAR as a new sensor in the realm of sensor fusion under Chimera.
§ PRELIMINARIES
In this section, we present our problem statement and objective, discuss notation and models used in the paper, and cover relevant background on Lie groups and factor graph optimization.
§.§ Problem Statement
We consider a vehicle equipped with a GPS receiver and LiDAR moving through an environment with continuous GPS availability.
During operation, the vehicle may be subject to GPS spoofing attacks which induce arbitrary bias error to the GPS measurements.
However, the GPS receiver has access to slow channel Chimera authentication every 3 minutes.
Within the 3-minute Chimera epoch, our objective is to perform spoofing-resilient localization.
In particular, we wish to determine when to leverage the available, but not-yet-authenticated GPS measurements and when to fall back on the LiDAR measurements only.
In this way, we seek to improve localization performance when GPS is likely authentic, while remaining resilient to an experienced spoofing attack.
§.§ Notation
We model time as discrete, with Δ t denoting the discretization interval in seconds. The variable k is used to denote the current time index, while the variable i is used to denote an arbitrary time index.
At time k, if a LiDAR measurement is available, we obtain a point cloud P_k ∈ℝ^N_points× 3, where N_points is the number of points in the point cloud.
Likewise, if GPS is available at time k, we obtain a set of pseudorange measurements ρ_k = (ρ_k^(1), …, ρ_k^(m)) ∈ℝ^m, where ρ_k^(j) is the measured pseudorange to satellite j, and m is the number of visible satellites.
Recall that the time duration between successive authentications is referred to as the Chimera epoch.
The length of the Chimera epoch for the slow channel, in discretized timesteps, is then N_epoch = 180/Δ t.
I refers to the identity matrix, and 0 to the matrix of zeros.
Scalars are denoted with lowercase italics, vectors with lowercase boldface, and matrices with uppercase boldface.
§.§ GPS Pseudorange Error Model
We model the distribution of authentic, i.e. unspoofed, GPS pseudorange error as a zero-mean Gaussian 𝒩(0, σ^2), where σ is the standard deviation of the pseudorange error.
Thus we can write
ρ_k = ρ̅_k + ϵ_k, ϵ_k ∼𝒩(0, σ^2),
where ρ_k is the measured range and ρ̅_k is the true range.
We assume that clock bias effects have been removed from the measurements.
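Under this model, nominal pseudoranges can be simulated as the true receiver-to-satellite range plus zero-mean Gaussian noise, as in the Python sketch below (σ = 7.0 m as used later in the experiments; the satellite positions are hypothetical).

```python
import numpy as np

def simulate_pseudoranges(receiver_pos, sat_positions, sigma=7.0, rng=None):
    """Nominal pseudoranges: true range plus zero-mean Gaussian noise (clock bias ignored)."""
    rng = rng or np.random.default_rng()
    true_ranges = np.linalg.norm(sat_positions - receiver_pos, axis=1)
    return true_ranges + rng.normal(0.0, sigma, size=true_ranges.shape)

sats = np.array([[15600e3, 7540e3, 20140e3],      # hypothetical satellite positions (m)
                 [18760e3, 2750e3, 18610e3]])
print(simulate_pseudoranges(np.zeros(3), sats))
```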
§.§ SO(3) and SE(3) Lie groups
We present a brief background on the rotation and rigid body transformation Lie groups SO(3) and SE(3), as we use the corresponding 3D representations for the vehicle's 3D pose.
More extensive coverage of these topics can be found in <cit.>.
The vehicle's 3D pose at time k is denoted as _k ∈ SE(3), where SE(3) is the Special Euclidean group of dimension 3.
A pose in SE(3) consists of a rotation R∈ SO(3), where SO(3) is the Special Orthogonal group of dimension 3, and a translation t∈ℝ^3.
SO(3) is defined as
SO(3) = {R∈ℝ^3× 3 | R^⊤R = I, det(R) = 1},
i.e. the set of all rotation matrices (orthogonal matrices with determinant 1).
SE(3) can then be represented as the Cartesian product of SO(3) with ℝ^3, i.e., SE(3) ∼ SO(3) ×ℝ^3.
We can represent _k with a transformation matrix
T_k = [ R_k t_k; 0 1 ]∈ℝ^4× 4,
where R_k ∈ℝ^3× 3 is a rotation matrix representing orientation in the global frame, and t_k ∈ℝ^3 is a translation vector representing position in the global frame.
The Lie groups SO(3) and SE(3) have associated Lie algebras denoted 𝔰𝔬(3) and 𝔰𝔢(3), with dimensionality 3 and 6 respectively.
The Lie algebra can be thought of as the tangent space (linearization) of the manifold at the identity element, and is a linear space upon which optimization may be done conveniently.
More precisely, there is an isomorphism from 𝔰𝔬(3) to ℝ^3, and from 𝔰𝔢(3) to ℝ^6, so any R∈ SO(3) can be represented with a vector ω∈ℝ^3, and similarly any T∈ SE(3) can be represented with a vector ν∈ℝ^6.
The exponential map exp: 𝔰𝔢(3) ↦ SE(3) maps from the tangent space 𝔰𝔢(3) to SE(3) (from ν to T), while the logarithmic map log: SE(3) ↦𝔰𝔢(3) maps from SE(3) to its tangent space 𝔰𝔢(3) (from T to ν).
Details on how the exponential and logarithmic maps are computed can be found in <cit.>.
For x, y∈ SE(3), the “ominus" operator is defined as y⊖x = log(x^-1y) ∈𝔰𝔢(3) <cit.>.
This operator allows us to compute the “difference" of poses in SE(3) in linearized tangent space coordinates, and will be used later in defining the LiDAR odometry residual for our factor graph.
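The ominus operator can be computed numerically with a matrix logarithm, as in the Python sketch below; the ordering of the translational and rotational components in the returned 6-vector is a convention chosen here for illustration.

```python
import numpy as np
from scipy.linalg import logm

def ominus(x, y):
    """y ominus x = log(x^{-1} y), returned as a 6-vector of se(3) tangent coordinates."""
    xi = np.real(logm(np.linalg.inv(x) @ y))         # 4x4 element of the Lie algebra
    rho = xi[:3, 3]                                  # translational part
    phi = np.array([xi[2, 1], xi[0, 2], xi[1, 0]])   # rotational part (from skew-symmetric block)
    return np.concatenate([rho, phi])

T = np.eye(4)
T[:3, 3] = [1.0, 0.0, 0.0]              # pure 1 m translation along x
print(ominus(np.eye(4), T))             # approximately [1, 0, 0, 0, 0, 0]
```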
§.§ Factor Graph Optimization
Our state estimation framework relies on factor graph optimization (FGO), in which a graph encoding vehicle poses and sensor measurements is optimized to determine the estimated trajectory.
In this section, we provide a brief background on general factor graph formulation.
More details can be found in <cit.> and <cit.>.
A factor graph consists of a set of nodes = {_1,…,_N} which represent poses or states, and a set of edges or factors which represent sensor measurements which constrain the graph.
A sensor observation linking nodes _i and _j is denoted _i,j with associated information matrix Ω_i,j, which is defined as the inverse of the measurement covariance matrix: Ω_i,j≜Σ_i,j^-1.
Each sensor has an associated measurement model _i,j(_i, _j), which is used to define a residual _i,j(_i, _j) = _i,j - _i,j(_i, _j) for each factor.
Optimizing the factor graph consists of minimizing the following objective
F() = ∑_(i,j)∈_ijΩ_ij_ij
which is the sum of information-normalized squared error of the residuals.
This objective represents the negative log-likelihood of the vehicle poses given the sensor measurements.
Thus, solving the optimization problem
^* = arg min_ F().
yields the optimal set of poses ^* given our measurements.
The optimization is done by linearizing 𝐅 and iteratively solving for updates to the state .
For each edge (i,j)∈, the gradient b_ij and Hessian H_ij are computed as
b_ij = _ij^⊤Ω_ijJ_ij,
H_ij = J_ij^⊤Ω_ijJ_ij,
where J_ij is the Jacobian of _ij().
The individual gradients and Hessians are then accumulated to form the gradient and Hessian for the entire graph, b = ∑b_ij and H = ∑H_ij, and the linear system
HΔ^* = -b
is solved with sparse Cholesky factorization to find the optimal update Δ^*, which is applied to the state + Δ^*.
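A minimal Python sketch of one such linearized update is given below. It accumulates the gradient and Hessian from per-factor residuals, Jacobians and information matrices; scipy's general sparse solver stands in for the sparse Cholesky factorization, which would require an additional library such as scikit-sparse.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def gauss_newton_step(residuals, jacobians, infos):
    """One linearized FGO update: accumulate b and H over all factors, then solve H dx = -b."""
    n_state = jacobians[0].shape[1]
    H = np.zeros((n_state, n_state))
    b = np.zeros(n_state)
    for e, J, Omega in zip(residuals, jacobians, infos):
        b += J.T @ Omega @ e          # gradient contribution of this factor
        H += J.T @ Omega @ J          # Hessian (Gauss-Newton) contribution
    delta = spsolve(sp.csc_matrix(H), -b)   # sparse solve in place of Cholesky
    return delta
```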
§ APPROACH
We now describe the details of our spoofing-resilient LiDAR-GPS factor graph approach.
Fig. <ref> shows a high-level block diagram of the framework.
§.§ LiDAR-GPS Factor Graph
Our approach revolves around maintaining a tightly-coupled factor graph which integrates LiDAR and GPS for both localization and spoofing detection and mitigation, the structure of which is shown in Fig. <ref>.
The nodes of our factor graph are vehicle poses _i ∈(3) as described in Section <ref>.
The measurement models (Equations <ref> and <ref>) and residuals (Equations <ref> and <ref>) for GPS pseudoranges and LiDAR odometry factors are detailed in the following subsections.
§.§.§ GPS Pseudorange Factors
Given pose _i with position component t_i and satellite j at position _i^(j) at time i, the expected GPS pseudorange measurement from satellite j to node i is
ρ̂_i^(j)(_i) = ‖t_i - _i^(j)‖_2.
Now, given received pseudorange measurement ρ_i^(j), the GPS residual function can be defined as
_i,j(_i) = ρ_i^(j) - ρ̂_i^(j)(_i).
As the residual is a scalar in this case, the information matrix is also a scalar, (σ)^-2, where σ is the standard deviation of authentic GPS pseudoranges from Section <ref>.
In this work, we do not consider the effect of clock bias states in our pseudorange factors and simulated pseudorange measurements, but we will incorporate this into our measurement model in our future works.
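A sketch of this residual in Python is shown below, with the σ = 7.0 m standard deviation from the experiments used to form the scalar information weight; the satellite position and measurement values are hypothetical.

```python
import numpy as np

def pseudorange_residual(position, sat_position, measured_range):
    """GPS factor residual: measured minus expected pseudorange (clock bias ignored)."""
    expected = np.linalg.norm(position - sat_position)
    return measured_range - expected

sigma = 7.0                 # pseudorange standard deviation (m)
info_weight = sigma ** -2   # scalar information for this residual
r = pseudorange_residual(np.zeros(3), np.array([15600e3, 7540e3, 20140e3]), 2.66e7)
print(r, info_weight)
```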
§.§.§ LiDAR Odometry Factors
For LiDAR, we first use point-to-plane ICP (Iterative Closest Point) <cit.> to register successive point clouds and produce an odometry measurement.
Given point clouds P_i and P_i+1 at times i and i+1, ICP produces an odometry measurement T_i^i+1:
T_i^i+1≜ icp(P_i, P_i+1).
T_i^i+1 is a rigid body transformation in SE(3), and can be written as
T_i^i+1 = [ R_i^i+1 t_i^i+1; 0 1 ]∈ℝ^4× 4,
where R_i^i+1∈ℝ^3× 3 is the relative rotation between poses at times i and i+1 and t_i^i+1∈ℝ^3 is the relative translation between poses at times i and i+1.
The LiDAR odometry measurement model is the expected transformation between poses i and i+1:
T̂_i,i+1(_i, _i+1) = _i+1 (_i)^-1.
The LiDAR odometry residual function is then defined as
_i,i+1(_i,_i+1) = T_i^i+1⊖T̂_i,i+1,
where ⊖ is the “ominus" operator defined in Section <ref>.
Intuitively, this residual is the difference between the expected and measured odometry transformation in 𝔰𝔢(3) tangent space coordinates.
Thus, _i,i+1(_i,_i+1) ∈ℝ^6, and the information matrix for LiDAR odometry measurements is denoted by Ω∈ℝ^6× 6.
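A sketch of obtaining the odometry measurement with Open3D's point-to-plane ICP is shown below. The conversion from raw point arrays, the normal-estimation parameters and the 1.0 m correspondence threshold (matching the value used later in the experiments) are assumptions of this sketch rather than details fixed by the paper.

```python
import numpy as np
import open3d as o3d

def lidar_odometry(points_prev, points_curr, threshold=1.0):
    """Point-to-plane ICP between consecutive scans; returns the 4x4 relative transform."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_prev))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_curr))
    for cloud in (src, dst):
        # point-to-plane ICP needs normals; estimation parameters are assumptions
        cloud.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```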
§.§.§ Optimization
We optimize the graph in a sliding window fashion.
The window size is denoted by N, and as new nodes and measurements are added to the graph, previous nodes and edges are removed in order to maintain the maximum number of nodes in the graph as N.
The objective for our factor graph over a single window can thus be written as
F() =
∑_i=1^N∑_j=1^m (_i,j) (σ)^-2_i,j
+ ∑_i=1^N-1 (_i,i+1)Ω_i,i+1
The optimization is carried out as described in Section <ref>.
§.§ Spoofing Detection and Mitigation
To perform detection between Chimera authentication times, we design a chi-squared spoofing detector within our factor graph framework.
Our detector computes a test statistic q_k at time k based on the information-normalized residuals of the GPS factors over the current window:
q_k = ∑_i=1^N ∑_j=1^m (_i,j) (σ)^-2_i,j
Note that we do not include a normalization term based on the state estimate uncertainty (as typically done in the chi-squared detector with Kalman filter) as the factor graph does not maintain an estimate of state uncertainty.
We then compare q_k with a threshold τ, which is pre-computed based on user-specified false alarm requirements.
If q_k > τ, then it is determined that spoofing is present in the measurements, otherwise the measurements are deemed authentic.
We now derive the computation of the threshold τ.
When the received GPS measurements are authentic and follow the nominal distribution with zero-mean Gaussian noise as shown in Equation (<ref>), the GPS residuals _i,j are distributed according to 𝒩(0, σ^2) (assuming the estimated positions from FGO are close to ground-truth).
Then, since the test statistic q_k is computed from squaring the residuals (of which there are Nm) and normalizing by (σ)^-2, q_k follows a central chi-squared distribution with n = Nm degrees of freedom.
Thus, given a desired false alarm rate α to remain under, i.e., P(detection | not spoofed) ≤α, we desire P(q_k ≤τ) = 1 - α for nominal conditions.
Therefore, we compute τ as
τ = Φ^-1(1-α; n=Nm)
where Φ^-1 is the inverse cumulative distribution function (CDF) of the chi-squared distribution with n=Nm degrees of freedom.
If spoofing is detected at time k, i.e., q_k > τ, then any future GPS measurements are deemed unauthentic and the FGO henceforth proceeds with LiDAR only.
Additionally, the current window is also re-processed with LiDAR measurements only, and GPS measurements discarded.
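A Python sketch of this detector follows, assuming the residuals of all GPS factors in the current window are collected into a single array; scipy's chi-squared inverse CDF provides the threshold.

```python
import numpy as np
from scipy.stats import chi2

def spoofing_test(gps_residuals, sigma=7.0, alpha=0.001):
    """Chi-squared test over a window of GPS pseudorange residuals.

    Returns (spoofing_detected, test_statistic, threshold)."""
    residuals = np.asarray(gps_residuals)
    q = np.sum((residuals / sigma) ** 2)                # information-normalized squared error
    tau = chi2.ppf(1.0 - alpha, df=residuals.size)      # threshold from inverse CDF
    return q > tau, q, tau

nominal = np.random.default_rng(0).normal(0.0, 7.0, size=100 * 8)   # N=100 poses, m=8 satellites
print(spoofing_test(nominal))
```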
§.§ Chimera Authentication
After N_epoch timesteps have passed, we receive Chimera authentication, which indicates whether the GPS measurements in the past Chimera epoch are authentic or unauthentic.
If the Chimera authentication determines the GPS measurements to be authentic, we leverage this information and rely on the received measurements within our factor graph for N measurements, which corresponds to the window size of our factor graph.
However, if authentication fails, then we perform the same mitigation steps outlined in Section <ref>, where we discard GPS measurements and rely only on the LiDAR sensor.
At this point, the spoofed victim could discontinue nominal operations and proceed according to a fail-safe protocol, such as safely pulling over to the side of the road, the specifics of which are outside the scope of this work.
§ EXPERIMENTS
We now describe details of our experimental validation, including the dataset used, spoofing attacks considered, baselines which we compare to, and parameters choices for our implementation.
§.§ KITTI Dataset
We evaluate our approach using LiDAR data from the KITTI dataset <cit.> and simulated GPS pseudorange measurements based on the ground-truth positions and satellite ephemeris.
In order to test our algorithm's ability to detect and mitigate spoofing attacks over a 3-minute slow-channel Chimera epoch, we select all sequences from the raw data recordings of duration 3 minutes or longer.
Table <ref> lists the four sequences used in our experiments, their total duration in seconds, and the abbreviations used to refer to each one throughout the remainder of the paper.
For all sequences, we consider a 200 second segment of the trajectory, which contains a full 180 second Chimera slow channel epoch.
We simulate the Chimera authentication as occurring successfully at the first time step of the trajectory.
As a result, the second Chimera authentication time occurs 180 seconds into the trajectory, and we simulate this as a failed authentication for each of the simulated GPS spoofing test scenarios described in Section <ref>.
For all tested trajectories, we use the synced and rectified data in order to handle LiDAR motion distortion effects.
To simulate GPS pseudorange measurements, we compute GPS satellite positions over time _i^(j) using ephemeris data pulled from the location and timestamps of each sequence and follow the measurement model outlined in Section <ref>.
For ground-truth reference trajectories, we use the OXTS ground-truth positions and orientations provided by KITTI.
§.§ Spoofing Attacks
We simulate spoofing attacks on the vehicle by generating a spoofed reference trajectory with added biases, then computing spoofed GPS pseudoranges based on the spoofed reference trajectory.
Specifically, we consider a ramping attack that begins between the Chimera authentications, in which the spoofer introduces a bias which starts small and steadily ramps up to a large error.
This type of attack is typically the most difficult to detect, as the spoofer can gradually induce error without any sudden jumps to alert a standard RAIM solution.
And although the bias may start small, a spoofing victim under this attack can still experience significant positioning error over a sufficient time window, such as the 3 minute slow channel Chimera epoch.
For our experiments, we use a ramping bias which starts at 0 m, and begins linearly increasing at rate r m/s from time T onward.
We add the spoofing bias to the ENU positive x (East) direction, and choose T = 100 s for a total spoofing duration of 100 seconds.
We run experiments for ramp rates of r = 0.5 m/s, 1.0 m/s, and 2.0 m/s, for maximum bias of 50 m, 100 m, and 200 m respectively.
Fig. <ref> shows the reference trajectories for each sequence, and spoofed trajectories for the chosen ramp rates.
§.§ Metrics and Baselines for Comparison
In our experiments we compare our approach with two baseline approaches.
The first baseline is “Odometry only," in which only LiDAR odometry is used to localize the vehicle between Chimera authentications, and GPS measurements are only used at the slow channel 3 minute interval.
The second baseline is “Naive FGO," in which LiDAR-GPS factor graph optimization produces a fused state estimate but no spoofing detection or mitigation is employed.
Finally, “SR FGO" refers to our spoofing-resilient LiDAR-GPS factor graph optimization approach presented in Section <ref>.
For characterizing performance, we consider L^2 norm position error, which is calculated as
e_k = ‖t̂_k - t_k‖_2 for each time index k of the trajectory, where t_k is the reference trajectory position at time k and t̂_k is the estimated trajectory position at time k.
In addition, we consider two metrics: mean L^2 norm position error and maximum L^2 norm position error, which are simply computed as
e = mean_k(e_k)
and
e_max = max_k(e_k)
respectively, and hereafter referred to more concisely as mean error and max error.
§.§ Parameters
We discretize time with Δ t = 0.1 s, and use LiDAR point clouds from the KITTI sequences taken at 10 Hz.
We use Σ = diag(0.01, 0.01, 0.01, 0.05, 0.05, 0.05) as the standard deviation of LiDAR ICP odometry measurements, where the 0.01 values correspond to the rotational components of 𝔰𝔢(3) (equivalent to 0.01 rad std.) and the 0.05 values correspond to the translational components (0.05 m std.).
We simulate GPS measurements at 1 Hz and take σ = 7.0 m according to the typical User-equivalent Range Error (UERE) for a single-frequency receiver <cit.>.
We choose a window size of N = 100, and shift the window by 10 steps per iteration.
α=0.001 is chosen as the false alarm rate for our detector.
For point-to-plane ICP LiDAR registration, we use the Open3D <cit.> function with parameter class and threshold value of 1.0.
We use SymForce <cit.>, a recently developed state-of-the-art symbolic computation library for robotics applications, as the factor graph optimization backend for our method.
Our code is available online at our GitHub repository[<https://github.com/Stanford-NavLab/chimera_fgo>].
§ RESULTS
Now we present the experimental validation results of our spoofing-resilient factor graph algorithm.
We run our algorithm along with the two baselines (Section <ref>) on four KITTI sequences (Table <ref>), for both nominal GPS measurements and various ramping GPS spoofing attacks (shown in Fig. <ref>).
We compare performance in terms of L^2 norm error over time, as well as mean and max L^2 norm error, and also include case studies on window size variation and detection statistics.
§.§ Comprehensive Comparison
We begin by presenting a comprehensive comparison of our approach against the two baselines considered, across the different KITTI sequences and multiple spoofing ramp rates.
These results are illustrated in Fig. <ref>.
We see that for all sequences, the mean and max L^2 norm error of our SR FGO approach remains under that of LiDAR odometry only.
In particular, as the spoofing attack rate increases, the Naive FGO mean and max errors increase, and in some cases eventually exceed the levels of odometry drift, whereas the SR FGO errors are successfully mitigated in all cases and remain bounded under odometry.
§.§ Comparison of Errors over Time under Spoofed Conditions
Next, we focus on the spoofed case of r = 2.0 m/s, and compare the L^2 position error over time of our SR FGO approach against the two baselines Odometry only and Naive FGO.
Fig. <ref> shows a comparison plot for each KITTI trajectory.
In each plot, 20 Monte Carlo runs of Naive and SR FGO are shown, and the start of the spoofing attack at 100 seconds is indicated by the vertical red dashed line.
For each trajectory, we see that LiDAR odometry suffers from significant drift over time, on the order of 100s to 200s of meters of final L^2 norm position error after 200 seconds.
For both Naive FGO and SR FGO, L^2 position errors remain under 5.0 m for the first 100 seconds during authentic conditions.
However, after the start of the attack at 100 seconds, the Naive FGO approach is heavily influenced by the spoofing attack, and its L^2 norm position error diverges, with final error exceeding that of LiDAR odometry in sequences 0018 and 0028.
On the other hand, our SR FGO approach is able to consistently detect and mitigate the spoofing attack, and keep position errors bounded to under odometry drift levels.
§.§ Window Size Comparison
Now, we perform a case study to analyze the effect of varying window size on the performance of our algorithm.
Table <ref> shows a comparison of mean and max L^2 norm position error as well as average iteration time across a range of increasing window sizes for each sequence.
As expected, for all sequences, the average iteration time increases with window size, as the factor graph optimization must be done over a larger window with more measurements.
We also observe a high rate of false detections for the smallest window size of 20.
This is to be expected, as the test statistic will be more sensitive to measurement noise and small errors in the trajectory estimates for a smaller window, and thus this window size behaves similarly to LiDAR odometry only in performance.
For all sequences, we also notice that improvement in mean and max error saturates as we increase the window size, occurring at N = 50 for sequence 0018, N = 100 for sequence 0027, N = 200 for sequence 0028, and N = 100 for sequence 0034.
This is most likely due to the fact that, as window size increases, a larger window of the trajectory is re-processed when spoofing is detected.
If spoofing is detected for a window with majority authentic measurements but some spoofed measurements towards the end, then we may discard more authentic measurements which may adversely affect the overall positioning performance.
The results of this case study validate our general choice of window size N = 100 for our experiments.
§.§ Detection Statistics
Finally, we examine the detection statistics for our algorithm, first running 100 Monte Carlo simulations for the nominal, unspoofed case to test the false alarm rate of our detector. These runs are plotted along with the corresponding detection threshold in Fig. <ref>.
The parameter α = 0.001 corresponds to the false alarm probability of a single trial, so across a 180 second long Chimera epoch with 180 trials (one for every GPS measurement at 1 Hz), the probability of a false alarm occurring during the Chimera epoch is 1 - (1 - α)^180 = 0.165.
Across the 100 Monte Carlo runs, there were a total of 9 false alarms, for an empirical per run false alarm rate of 0.09, and 15 total individual trial false alarms out of 18000 individual trials for an empirical per trial false alarm rate of 0.000833.
Thus, we see that our detector satisfies the desired false alarm rate requirements set by the user.
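The per-epoch false alarm probability quoted above follows directly from the per-trial rate, as the short Python check below illustrates.

```python
alpha = 0.001          # per-trial false alarm probability
trials = 180           # one test per 1 Hz GPS measurement in the 3-minute epoch
p_epoch = 1 - (1 - alpha) ** trials
print(round(p_epoch, 3))   # approximately 0.165
```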
We also perform Monte Carlo simulation for the spoofed case for an attack with r = 1.0 m/s, shown in Fig. <ref>.
In each of the 10 runs, the test statistic crosses the threshold shortly after the start of the attack, successfully detecting it, with an average time to detection of 11.2 seconds.
§ CONCLUSION
In this work, we present a new framework for spoofing-resilient LiDAR-GPS factor graph fusion for Chimera GPS, which provides continuous and secure state estimation between Chimera authentication times.
Our approach fuses LiDAR and GPS measurements with factor graph optimization, and computes a test statistic for spoofing detection based on the GPS factor residuals.
From this test statistic, our approach determines when to leverage the unauthenticated GPS measurements during the Chimera epoch, in order to improve localization performance when GPS is likely authentic.
We evaluate our approach with real-world data from the KITTI self-driving dataset, using sequences which span the Chimera slow channel 3-minute epoch.
Our results demonstrate rapid detection and effective mitigation of spoofing attacks during vulnerable periods between authentications.
This work contributes towards the problem of designing LiDAR-GPS factor graph localization that is robust to GPS spoofing attacks. Our approach is designed around the Chimera signal enhancement, which will be a critical utility for authenticating GPS measurements against spoofing. Between Chimera authentications, we utilize the LiDAR sensor measurements to validate and strategically leverage GPS measurements, improving localization performance during authentic conditions while maintaining resilience against attacks when spoofing occurs. Our work addresses the research gap for LiDAR-GPS fusion platforms, and takes an important step towards ensuring continuous navigation security for users of the future Chimera-enhanced GPS.
§ ACKNOWLEDGMENT
This material is based upon work supported by the Air Force Research Lab (AFRL) under grant number FA9453-20-1-0002.
We would like to thank the AFRL for their support of this research.
We would also like to thank Shubh Gupta for reviewing this paper.
|
http://arxiv.org/abs/2307.03972v1 | 20230708131059 | Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task | [
"Fanyi Qu",
"Yunfang Wu"
] | cs.CL | [
"cs.CL"
] |
Evaluating the Capability of Large-scale Language Models on Chinese Grammatical Error Correction Task
Fanyi Qu, Yunfang Wu
======================================================================================================
Large-scale language models (LLMs) have shown remarkable capability in a variety of Natural Language Processing (NLP) tasks and have attracted much attention recently. However, some studies indicated that large language models fail to achieve promising results beyond state-of-the-art models in English grammatical error correction (GEC) tasks. In this report, we aim to explore how large language models perform on Chinese grammatical error correction tasks and provide guidance for future work. We conduct experiments with 3 LLMs of different model scales on 4 Chinese GEC datasets. Our experimental results indicate that the performance of LLMs on automatic evaluation metrics (e.g., the F_0.5 score) falls short of previous SOTA models because of the problem of over-correction. Furthermore, we also discover notable variations in the performance of LLMs when evaluated on different data distributions. Our findings demonstrate that further investigation is required for the application of LLMs to the Chinese GEC task.
§ INTRODUCTION
Building on InstructGPT <cit.>, ChatGPT has demonstrated its powerful ability to understand complex instructions and generate reasonable responses on a variety of NLP tasks. Following the technical trajectory of ChatGPT, a significant number of high-quality LLMs have emerged recently in both academia and industry, such as LLaMA <cit.>, ChatGLM <cit.> and PaLM <cit.>. Previous studies found that these LLMs have achieved great performance on a wide range of NLP tasks, including machine translation <cit.>, named entity recognition <cit.> and text summarization <cit.>.
Some studies have undertaken comprehensive investigations into the performance of LLMs in the domain of English grammatical error correction, yielding some interesting findings <cit.>. LLMs are not able to outperform SOTA models in terms of automatic evaluation metrics. This is primarily because LLMs tend to make unnecessary modifications to make the input sentences more fluent, which may result in an over-correction problem and, in some cases, even alter the original semantics of the input sentences.
In this report, we aim to explore the performance of LLMs on the Chinese GEC task. We conduct experiments on various LLMs to investigate the influence of model size on the GEC results. Additionally, we evaluate on different test datasets from various data sources to explore the impact of data distribution on the outcomes.
§ EXPERIMENTAL SETUP
§.§ Dataset
We conduct experiments on four Chinese GEC datasets to provide a comprehensive demonstration of LLMs' capability. The detailed statistics of these datasets are shown in Table <ref>.
§.§.§ GEC data from Chinese learners
We apply the test set of NLPCC-2018 <cit.> and the validation set of MuCGEC <cit.> for evaluation. These two datasets collect grammatical errors made by foreign learners in the process of learning Chinese.
§.§.§ GEC data from Chinese native speaker examination
We apply the validation set of FCGEC <cit.> and the validation set of NaCGEC <cit.> for evaluation. These two datasets are collected from Chinese native speakers' language examinations.
§.§ Model
We conduct experiments on 3 LLMs with different model scales:
* ChatGPT[https://platform.openai.com/docs/api-reference]: we evaluate the performance of ChatGPT with OpenAI's official API. We choose gpt-3.5-turbo as the evaluated model, which stands out as the most advanced and specifically optimized for chat functionality.
* ChatGLM-6B <cit.>: ChatGLM is an open bilingual language model based on GLM framework which is optimized for Chinese QA and dialogue and exhibits a robust capacity for Chinese understanding.
* LLaMA-7B <cit.>: LLaMA is a collection of foundation LLMs ranging from 7B to 65B parameters proposed by Meta AI. We apply the 7B model for evaluation.
§.§ Evaluation Metric
We evaluate the models' performance with Precision, Recall and F_0.5 at both the word level and the character level.
We adopt the official implementation of the MaxMatch (M^2) <cit.> scorer to calculate the word-level F_0.5 score and choose PKUNLP as our word segmentation tool. We apply ChERRANT [https://github.com/HillZhang1999/MuCGEC/tree/main/scorers/ChERRANT] for char-level metric calculation.
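As a reference for how the F_0.5 score used above combines Precision and Recall, the following sketch computes it from edit-level counts; the counts in the example are made-up placeholders, not results from our experiments.

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 0.5) -> tuple:
    """Compute Precision, Recall and F_beta from true/false positive and false negative edit counts."""
    precision = tp / (tp + fp) if tp + fp > 0 else 0.0
    recall = tp / (tp + fn) if tp + fn > 0 else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
    return precision, recall, f

# Hypothetical edit counts, for illustration only.
p, r, f05 = f_beta(tp=120, fp=40, fn=80)
print(f"P={p:.3f}, R={r:.3f}, F0.5={f05:.3f}")
```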
§.§ Prompt
Considering the differences in performance among the large language models, we designed different prompts for them. These prompts are roughly the same in semantics, but differ in some details. The prompts are shown in Figure <ref>.
§.§ Setting details
We set the temperature to 0.6 when querying ChatGPT to obtain reliable generation results. For ChatGLM-6B and LLaMA-7B, we conduct experiments on 4 NVIDIA GeForce 3080 Ti GPUs.
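For concreteness, a single ChatGPT query under these settings could be issued roughly as in the sketch below, written against the legacy (pre-1.0) OpenAI Python client that was current at the time; the instruction string is a hypothetical stand-in for the actual prompts shown in Figure <ref>, and the API key is a placeholder.

```python
import openai  # legacy (pre-1.0) client interface

openai.api_key = "YOUR_API_KEY"  # placeholder

def correct_sentence(sentence: str) -> str:
    # Hypothetical instruction; the real prompts are given in Figure <ref>.
    prompt = (
        "Please correct the grammatical errors in the following Chinese sentence "
        f"and output only the corrected sentence: {sentence}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.6,
    )
    return response["choices"][0]["message"]["content"].strip()
```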
§ EXPERIMENT RESULTS
The experiment results are shown in Table <ref>. There are some results worthy of discussion.
First, different data sources result in distinct evaluation results. LLMs exhibit significantly superior performance when evaluated on Chinese learner data (NLPCC and MuCGEC), as opposed to Chinese native speaker examination data (FCGEC and NaCGEC). According to our observations, the grammatical errors made by Chinese learners primarily involve the misuse of similar words or phrases, rather than incorrect sentence structures. In contrast, GEC data from Chinese native speaker examinations maintains a higher level of regularity and consists of more complex structural errors.
It is noteworthy that there exist gaps between GEC data from Chinese examinations and Chinese native speakers' daily language habits.
Second, different model scales also lead to distinct performance. The consistent trend is that ChatGPT performs similarly to the other two smaller models on Precision while achieving a significant improvement in Recall. This implies that the evaluated LLMs have similar error correction capability, while their error detection ability differs considerably.
Third, there still exist large gaps between state-of-the-art models and LLMs on automatic evaluation metrics. Previous work <cit.> identified the problem of over-correction for LLMs, which we also observe in our experiments.
Moreover, it is hard to explain why the char-level evaluation metrics are significantly lower than the word-level evaluation metrics, a phenomenon not noticed in previous work.
§ CONCLUSION
In this report, we explore the performance of various LLMs on the Chinese grammatical error correction task. Experimental results indicate that a gap still remains between LLMs and current SOTA models. Furthermore, the performance of different LLMs is greatly impacted by the distribution of the test data. Future work can focus on addressing the over-correction problem of LLMs and explore the untapped potential of LLMs in the field of grammatical error correction tasks.
|
http://arxiv.org/abs/2307.04356v1 | 20230710054920 | Reducing Information Loss for Spiking Neural Networks | [
"Yufei Guo",
"Yuanpei Chen",
"Liwen Zhang",
"Xiaode Liu",
"Xinyi Tong",
"Yuanyuan Ou",
"Xuhui Huang",
"Zhe Ma"
] | cs.NE | [
"cs.NE",
"cs.CV"
] |
Intelligent Science & Technology Academy of CASIC, Beijing 100854, China
Chongqing University, Chongqing, 400044, China
[email protected], [email protected], [email protected]
Reducing Information Loss for Spiking Neural Networks
Yufei Guo^1,⋆, Yuanpei Chen^1,⋆, Liwen Zhang^1, YingLei Wang^1, Xiaode Liu^1, Xinyi Tong^1, Yuanyuan Ou^2, Xuhui Huang^1, Zhe Ma^1 (^⋆ Equal contribution.)
August 12, 2023
======================================================================================================================================
The Spiking Neural Network (SNN) has attracted more and more attention recently. It adopts binary spike signals to transmit information. Benefiting from the information passing paradigm of SNNs, the multiplications of activations and weights can be replaced by additions, which are more energy-efficient. However, its “Hard Reset” mechanism for the firing activity ignores the differences among membrane potentials when the membrane potential is above the firing threshold, causing information loss. Meanwhile, quantizing the membrane potential to 0/1 spikes at the firing instants will inevitably introduce quantization error, thus bringing about information loss too. To address these problems, we propose to use the “Soft Reset” mechanism for supervised training-based SNNs, which drives the membrane potential to a dynamic reset potential according to its magnitude, and the Membrane Potential Rectifier (MPR) to reduce the quantization error by redistributing the membrane potential to a range close to the spikes. Results show that SNNs with the “Soft Reset” mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets.
§ INTRODUCTION
Deep Neural Networks (DNNs) have greatly improved many applications in computational vision, e.g., object detection and recognition <cit.>, object segmentation <cit.>, object tracking <cit.>, etc. In pursuit of models with better performance, more and more complex networks are proposed. However, the increasing complexity poses a new challenge to model deployment on power-constrained devices, thus becoming an impediment to the applications of these advanced complex models. There have been several approaches to address this problem, such as quantization <cit.>, pruning <cit.>, knowledge distillation <cit.>, spiking neural networks (SNNs) <cit.>, and so on. Among these approaches, the biology-inspired method, SNNs, provides a unique way to reduce energy consumption by mimicking the spiking nature of brain neurons. A spiking neuron integrates the inputs over time and fires a spike output whenever the membrane potential exceeds the firing threshold. Using 0/1 spikes to transmit information makes SNNs enjoy the advantage of multiplication-free inference by converting multiplications into additions. Furthermore, SNNs are energy-efficient on neuromorphic hardware, such as SpiNNaker <cit.>, TrueNorth <cit.>, Darwin <cit.>, Tianjic <cit.>, and Loihi <cit.>.
Despite the attractive benefits, there is still a huge performance gap between existing SNN models and their DNN counterparts. We argue that the reason for the low accuracy is that there exists information loss in SNNs. First, the information processing of neurons in supervised training-based SNNs generally follows the rules of the Integrate-and-Fire (IF) model or the Leaky IF (LIF) model, where once a membrane potential exceeds the firing threshold, a “Hard Reset” operation forces the “residual” potential to be set to 0, i.e., once fired, all the information is taken away. Obviously, this reset mode, which ignores the “residual” membrane potential, fails to preserve the diversity of various membrane potentials. Hence the information encoding capacity of the network is compromised, such that the risk of information loss increases accordingly. Second, although the 0/1 spike information processing paradigm enables SNNs to enjoy the advantage of high efficiency, quantizing the real-valued membrane potential to 0/1 spikes will inevitably introduce quantization error, which also brings about information loss.
To address the information loss problem, we propose a “Soft Reset”-based IF (SRIF) neuron model that retains the “residual” membrane potential from subtracting its spike value at the firing instants. Hence the diversity of the membrane potentials that exceed the firing threshold will be preserved. Though “Soft Reset” is commonly used in methods that convert ANNs to SNNs (ANN2SNN) <cit.>,
it is rarely applied in supervised SNNs <cit.>, and has not been discussed in SNN enhancement from the perspective of reducing information loss. In addition, to alleviate the quantization error, the Membrane Potential Rectifier (MPR) is proposed, which is performed before the firing activity to adjust the membrane potentials towards the spike values (i.e., 0/1). With MPR, the membrane potential is decoupled into an original one and a modulated one. The original one keeps the mechanism of a neuron, and the modulated one enjoys less quantization error than the original one without suffering from any negative effects. The difference between our neuron and the vanilla neuron is illustrated in Fig. <ref>. Our main contributions are as follows:
* We propose using the SRIF model for supervised training-based SNNs. By retaining the “residual” membrane potential, SRIF enables the networks to distinguish the differences among those membrane potentials that exceed the firing threshold via subtracting their spike values thus enhancing the information encoding capacity of supervised training-based SNNs.
* We present MPR to mitigate the quantization error. By utilizing a non-linear function to modulate the membrane potential close to 0/1 before the firing activity is triggered, the gap between the potential and its corresponding 0/1 spike value is narrowed while maintaining the sparse spike activation mechanism of SNNs. To the best of our knowledge, few works have noticed the quantization error in SNNs, and a simple but effective method for addressing this problem is presented.
* Extensive experiments on both static and dynamic datasets were conducted to verify our method. Results show that the SNN trained with the proposed method is highly effective and efficient compared with other state-of-the-art SNN models, e.g., 96.49% top-1 accuracy and 79.41% top-1 accuracy are achieved on CIFAR-10 and CIFAR-100, respectively. Surprisingly, these results even outperform the models' DNN counterparts, and it is very rare that SNNs have a chance to surpass their DNN counterparts.
§ RELATED WORK
§.§ Learning Methods of Spiking Neural Networks
The training methods of SNNs can be divided into two categories. The first one is ANN2SNN <cit.>. ANN2SNN yields the same input-output mapping for the ANN-SNN pair by approximating the continuous activation values of an ANN using ReLU with the average firing rate of an SNN under the rate-coding scheme. Since ANNs have achieved great success in many fields, ANN2SNN can maintain the smallest performance gap with ANNs and can be generalized to large-scale structures. However, being restricted to rate-coding, ANN2SNN usually requires dozens or even hundreds of timesteps to obtain well-performing networks. Although much effort has been devoted to reducing the long inference time, such as weight normalization <cit.>, threshold rescaling <cit.>, soft reset <cit.>, threshold shift <cit.>, and the quantization clip-floor-shift activation function <cit.>, it is still hard to obtain high-performance SNNs with ultra-low latency.
The second one is supervised learning-based SNNs. SNNs quantize the real-valued membrane potentials into 0/1 spikes via the firing activity. Since the gradient of the firing activity function is zero almost everywhere, gradient descent-based optimizers cannot be directly used for the training of SNNs. To alleviate the optimization difficulty, the approximate gradient-based strategy is commonly used, and some related approaches have been proposed to achieve trainable SNNs with high performance. For example, by regarding the SNN as a special RNN, a training method of back-propagation through time with different kinds of surrogate gradients was proposed <cit.>. The spatio-temporal back-propagation (STBP) <cit.> method enables SNNs to be trained on ANN programming platforms, which also significantly promotes the direct training research of SNNs. A differentiable spike function which can match the finite difference gradient of SNNs well was proposed in <cit.>. The temporal efficient training (TET) <cit.> method, with a novel loss and a gradient descent regime that succeeds in obtaining more generalized SNNs, has also attracted much attention. In RecDis-SNN <cit.>, a new perspective to understand the difficulty of training SNNs by analyzing undesired membrane potential shifts is presented, and the MPD-Loss to penalize the undesired shifts is proposed. Numerous works verify that supervised learning can greatly reduce the number of timesteps and handle dynamic datasets. It has increasingly aroused researchers' interest in recent years. In this work, we focus on improving the performance of supervised learning-based SNNs by suppressing information loss, which is rarely mentioned in other works.
§.§ Threshold-dependent Batch Normalization
Batch Normalization (BN) is one of the most widely used normalization technologies, which was initially designed for very deep Convolutional Neural Networks (CNNs). As it only focuses on normalizing spatial feature maps, directly applying BN to SNNs would damage the temporal characteristic of SNNs, which operate on spatio-temporal feature maps, leading to low accuracy. To address this issue, some specially-designed normalization methods for SNNs were proposed recently. Typically, to simultaneously balance neural selectivity and normalize the neuron activity, NeuNorm <cit.> was proposed. Then, a more effective normalization technique that takes the firing threshold into account, named threshold-dependent Batch Normalization (tdBN), was further proposed in <cit.>. It can normalize the feature maps of SNNs in both spatial and temporal domains <cit.>. Specifically, let X_t ∈ℝ^B× C× H× W represent the input maps at each timestep, where t=1,…,T (B: batch size; C: channel; (H, W): spatial domain). Then for each channel c, the spatio-temporal sequence X^(c) = {X_1^(c), ⋯ ,X_T^(c)} is normalized by tdBN as follows,
X̃^(c) = λ·α V_th(X^(c)-x̅^(c))/√( mean((X^(c)-x̅^(c))^2)+ϵ) + β,
where V_th is the firing threshold, α is a network-structure-dependent hyper-parameter, ϵ is a tiny constant, λ and β are two learnable parameters, x̅^(c)= mean(X^(c)) is the mean value of X^(c), X̃^(c) is the normalized maps. In this paper, tdBN is also adopted considering its spatio-temporal normalization mechanism.
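As a reading aid for this equation, the following PyTorch-style sketch normalizes a spatio-temporal tensor of shape (T, B, C, H, W) channel-wise in the tdBN manner; it is a simplified illustration (no running statistics are kept, and α, V_th, and the tensor sizes are placeholder values), not the authors' implementation.

```python
import torch

def td_bn(x, weight, bias, v_th=0.5, alpha=1.0, eps=1e-5):
    """Threshold-dependent BN over a spatio-temporal tensor x of shape (T, B, C, H, W).

    Statistics are computed per channel over the time, batch, and spatial dimensions,
    matching the channel-wise normalization described in the text.
    """
    dims = (0, 1, 3, 4)                          # all dims except the channel dim
    mean = x.mean(dim=dims, keepdim=True)
    var = (x - mean).pow(2).mean(dim=dims, keepdim=True)
    x_hat = alpha * v_th * (x - mean) / torch.sqrt(var + eps)
    # weight/bias play the roles of the learnable lambda and beta, one per channel
    return weight.view(1, 1, -1, 1, 1) * x_hat + bias.view(1, 1, -1, 1, 1)

# Example usage with made-up sizes.
T, B, C, H, W = 4, 8, 16, 32, 32
x = torch.randn(T, B, C, H, W)
out = td_bn(x, weight=torch.ones(C), bias=torch.zeros(C))
```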
§ PRELIMINARY AND METHODOLOGY
To avoid the information loss in supervised training-based SNNs, we propose the “Soft Reset” IF (SRIF) model and the Membrane Potential Rectifier (MPR).
§.§ “Soft Reset" IF Model
An SNN adopts a biology-inspired spiking neuron that accumulates inputs along the time dimension as its membrane potential and fires a spike when the potential exceeds the firing threshold. This mechanism makes it much different from its DNN counterpart. To better introduce the proposed SRIF neuron, a unified form defined by a recent work <cit.> is given to describe the dynamics of all kinds of spiking neurons as follows,
H[t] = f(U[t-1],X[t]),
O[t] = Θ(H[t]-V_th),
U[t] = H[t](1-O[t])+V_resetO[t],
where X[t], H[t], U[t], and O[t] are the input, membrane potentials before and after the trigger of a spike, and output spike at the timestep t, respectively. V_th is the firing threshold, and is usually set to 0.5. Θ(·) is the step function defined by Θ(x) = 1 for x ≥ 0 and Θ(x) = 0 for x < 0. V_reset denotes the reset potential, which is set as 0. The function f(·) describes the neuronal dynamics of spiking neuron models, for the commonly used IF neuron and LIF neuron, f(·) can be respectively defined as follows,
H[t] = U[t-1]+X[t],
H[t] = τ U[t-1]+ X[t],
where τ denotes the membrane time constant.
Both LIF and IF neurons have some unique advantages. With the decay characteristic introduced by the membrane time constant, the LIF neuron behaves more biologically than the IF neuron, while the IF neuron is more efficient due to its addition-only processing manner. In terms of accuracy performance, neither of them shows an overwhelming advantage; more detailed experimental results for these two neurons are provided in Section 4. Considering the subtle gap in performance, we prefer to use the LIF model for its neurodynamic characteristic, from the perspective of brain science research. Conversely, from the perspective of computer science research, we recommend using the IF model, since it is more hardware-friendly.
However, both the IF model and the LIF model might undertake a greater or lesser risk of information loss due to the “Hard Reset” mechanism, i.e., when the input membrane potentials exceed the firing threshold, the neurons force the membrane potentials to a fixed value. Such a mechanism ignores the “residual” parts of those fired membrane potentials. These “residual” parts contain the diversity of the input potentials, and we argue that a neuron model which can preserve the diversity or differences of the membrane potentials that cause the firing is more suitable.
To this end, along with the consideration of efficiency, we propose using a “Soft Reset” mechanism-based IF neuron, SRIF, which can keep the diversity of the membrane potentials by subtracting their firing spike values from themselves at the instants where the threshold is exceeded. Though a similar “Soft Reset” mechanism has been widely used in ANN2SNN <cit.>, few works use it in supervised learning-based SNNs <cit.>. We find
its value in this field from a new perspective: reducing information loss.
In SRIF neuron, Eq. (<ref>) is updated as
U[t] = H[t](1-O[t])+(H[t]-O[t])O[t].
It can be further simplified as
U[t] = H[t]-O[t].
It can be seen that, similar to the IF neuron, SRIF is also an addition-only model, thus enjoying computational efficiency when implemented on hardware. Fig. <ref> compares the difference between the IF neuron and the SRIF neuron in an intuitive way. Suppose that both models receive a weighted input sequence of 1.5V_th, 1.2V_th, 1.5V_th, 0.9V_th, and 1.4V_th across 5 consecutive timesteps. Our SRIF neuron will produce three spikes by retaining the residual potentials at the firing instants, as depicted in Fig. <ref>, whereas the IF neuron will produce four spikes.
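The 3-versus-4 spike count in this example can be checked with the small simulation below, which assumes V_th = 0.5 and a unit spike value subtracted at firing, following the step-function firing rule and the soft-reset update U[t] = H[t] - O[t] given above.

```python
V_TH = 0.5
inputs = [1.5 * V_TH, 1.2 * V_TH, 1.5 * V_TH, 0.9 * V_TH, 1.4 * V_TH]

def run(soft_reset: bool) -> int:
    u, spikes = 0.0, 0
    for x in inputs:
        h = u + x                      # integrate
        o = 1 if h >= V_TH else 0      # fire
        spikes += o
        if soft_reset:
            u = h - o                  # SRIF: subtract the spike value, keep the residual
        else:
            u = 0.0 if o else h        # hard-reset IF: discard the residual on firing
    return spikes

print("hard-reset IF spikes:", run(soft_reset=False))  # 4
print("SRIF spikes:         ", run(soft_reset=True))   # 3
```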
§.§ Membrane Potential Rectifier
To further mitigate the information loss, we present a non-linear function, called MPR, to reduce the quantization error. MPR aims to redistribute the membrane potential before it is operated on by the step function. It only modulates the membrane potential that is presented to the step function but does not modify the value of the membrane potential, which receives and accumulates spikes from other neurons. Specifically, we further distinguish the membrane potentials as the original one, H as in Eq. (<ref>), and the modulated one, Ĥ, which is the membrane potential that will be presented to the step function. In all previous works, H and Ĥ are treated as the same, while in this paper we would like to provide a new perspective: using a decoupling function to separate H and Ĥ can be helpful. Specifically, H manages the original tasks as in other work, while Ĥ derives from H with a non-linear function, φ(·), and it will be fed into the step function in a modulated form that can shrink the quantization error. With this decoupling mechanism, a neuron model can not only keep the membrane potential updating rule but also enjoy less quantization error.
Before giving the full details of the MPR, we first formulate the quantization error. It is clear that the quantization errors corresponding to different membrane potentials should be different: a value closer to its quantization spike, o, enjoys less quantization error. Specifically, the firing threshold divides the membrane potentials into two parts; the part with smaller values is assigned to the “0” spike, and the part with larger values is assigned to the “1” spike. The quantization error then depends on the margin between the membrane potential and its corresponding spike. Therefore, the quantization error can be defined as the square of the difference between the membrane potential and its corresponding quantization spike value as follows:
ℒ_q = (u-o)^2,
where u is the membrane potential and o ∈{0,1}. When u is below the firing threshold, o is 0; otherwise, o is 1.
Hence, the design of MPR should obey the following two principles:
* Spike-approaching: the modulated membrane potential, Ĥ should be closer to the 0/1 spikes than the original membrane potential, H. This principle ensures quantization error reduction.
* Firing-invariance: for the H less than V_th, the MPR should not produce the Ĥ greater than V_th and vice versa. This principle ensures the neuron output be consistent with or without using MPR.
Based on the above two principles, we define the MPR as the following symmetrical function:
φ(u) =
  -(1-u)^{1/3} + 1,                               u < 0,
  (1/(2 tanh(3/2))) · tanh(3(u - 1/2)) + 1/2,     0 ≤ u ≤ 1,
  u^{1/3},                                        u > 1.
Fig. <ref> shows the response curve of the designed MPR function following the principles of spike-approaching and firing-invariance.
According to <cit.>, the membrane potential follows a Gaussian distribution, 𝒩(μ ; σ). Hence, to visualize the effect of the MPR, we sample 100,000 values from a Gaussian distribution 𝒩(1/2 ; 1) and present them to the MPR. The distribution of these 100,000 MPR outputs is drawn in Fig. <ref>. It can be seen that the unimodal distribution 𝒩(1/2 ; 1) is adjusted to a bimodal distribution with less quantization error, since it naturally gathers the membrane potentials near “0” and “1”.
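A small numerical sketch of the MPR and of this sampling experiment is given below; it draws 100,000 values from 𝒩(1/2, 1) as in the text and additionally estimates the average quantization error before and after MPR in the spirit of the later analysis (the random seed and the use of NumPy are incidental choices).

```python
import numpy as np

def mpr(u):
    """Membrane Potential Rectifier phi(u) as defined above (piecewise, continuous at 0 and 1)."""
    u = np.asarray(u, dtype=float)
    out = np.empty_like(u)
    lo, hi = u < 0, u > 1
    mid = ~(lo | hi)
    out[lo] = -np.cbrt(1.0 - u[lo]) + 1.0
    out[mid] = np.tanh(3.0 * (u[mid] - 0.5)) / (2.0 * np.tanh(1.5)) + 0.5
    out[hi] = np.cbrt(u[hi])
    return out

rng = np.random.default_rng(0)
h = rng.normal(loc=0.5, scale=1.0, size=100_000)   # membrane potentials ~ N(1/2, 1)
h_mod = mpr(h)

# Firing decisions are unchanged by MPR (firing-invariance), so spikes can be read from h.
spikes = (h >= 0.5).astype(float)
print("mean quantization error before MPR:", np.mean((h - spikes) ** 2))
print("mean quantization error after  MPR:", np.mean((h_mod - spikes) ** 2))
```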
Moreover, it is worth noting that, the redistributed membrane potential, Ĥ by MPR is only used for narrowing the gap between the true membrane potential, H and its quantization spike. It will not replace the original H in our SRIF neuron model. Then the complete new dynamics of the SRIF model can be described as follows,
H[t] = U[t-1]+X[t],
Ĥ[t] = φ(H[t]),
O[t] = Θ(Ĥ[t]-V_th),
U[t] = H[t]-O[t].
The detailed feed-forward procedure for the SRIF neuron with MPR is given in Algorithm 1.
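Putting the pieces together, one possible sketch of this feed-forward procedure is shown below; the tensor shapes and threshold value are illustrative, and the clamps in the MPR branches only avoid evaluating fractional powers of negative numbers on the branches that torch.where discards.

```python
import torch

V_TH = 0.5

def mpr(u: torch.Tensor) -> torch.Tensor:
    """phi(u): push the membrane potential towards 0/1 without changing the firing decision."""
    mid = torch.tanh(3.0 * (u - 0.5)) / (2.0 * torch.tanh(torch.tensor(1.5))) + 0.5
    low = 1.0 - (1.0 - u).clamp(min=0).pow(1.0 / 3.0)   # branch used when u < 0
    high = u.clamp(min=0).pow(1.0 / 3.0)                # branch used when u > 1
    return torch.where(u < 0, low, torch.where(u > 1, high, mid))

def srif_forward(x_seq: torch.Tensor) -> torch.Tensor:
    """x_seq: weighted inputs of shape (T, ...); returns the 0/1 spike train of the same shape."""
    u = torch.zeros_like(x_seq[0])
    spikes = []
    for x in x_seq:                      # loop over timesteps
        h = u + x                        # integrate
        h_hat = mpr(h)                   # rectify the potential presented to the step function
        o = (h_hat >= V_TH).float()      # fire
        u = h - o                        # soft reset: keep the residual potential
        spikes.append(o)
    return torch.stack(spikes)

out = srif_forward(torch.randn(4, 2, 8))  # e.g., T=4 timesteps, batch of 2, 8 neurons
```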
§ EXPERIMENT
The proposed methods were evaluated on various static datasets (CIFAR-10 <cit.>, CIFAR-100 <cit.>, ImageNet <cit.>) and one neuromorphic dataset (CIFAR10-DVS <cit.>) with widely-used spiking architectures including ResNet20 <cit.>, VGG16 <cit.>, ResNet18 <cit.>, ResNet19 <cit.>, and ResNet34 <cit.>.
§.§ Datasets and Settings
Datasets. The CIFAR-10(100) dataset consists of 60,000 images in 10(100) classes with 32× 32 pixels. The number of training images is 50,000, and that of test images is 10,000. The CIFAR10-DVS dataset is the neuromorphic version of the CIFAR-10 dataset. It is composed of 10,000 images in 10 classes, with 1,000 images per class. The ImageNet dataset has more than
1,250,000 training images and 50,000 test images.
Preprocessing. Data normalization is applied on all static datasets to ensure that input images have 0 mean and 1 variance. Besides, the random horizontal flipping and cropping on these datasets were conducted to avoid overfitting. For CIFAR-10, the AutoAugment <cit.> and Cutout <cit.> were used for data augmentation. For the neuromorphic dataset, since the CIFAR10-DVS dataset does not separate data into training and testing sets, we split the dataset into 9000 training images and 1000 test images similar to <cit.>. For data preprocessing and augmentation, we resized the training image frames to 48× 48 as in <cit.> and adopted random horizontal flip and random roll within 5 pixels. And the test images are just resized to 48× 48 without any additional processing.
Training setup. For all the datasets, the firing threshold V_th was set as 0.5 and V_reset as 0. For static image datasets, the images were encoded to binary spikes using the first layer of the SNN, as in recent works <cit.>. This is similar to rate-coding. For the neuromorphic image dataset, we used the 0/1 spike format directly. The neuron models in the output layer accumulated the incoming inputs without generating any spike as the output, as in <cit.>. For the CIFAR-10(100) and CIFAR10-DVS datasets, we used the SGD optimizer with a momentum of 0.9 and a learning rate of 0.01, cosine-decayed <cit.> to 0. All models were trained within 400 epochs with the same batch size of 128. For the ImageNet dataset, we used the SGD optimizer with a momentum of 0.9 and a learning rate of 0.1, cosine-decayed <cit.> to 0. All models were trained within 320 epochs as in <cit.>. The batch size was set to 64.
§.§ Ablation Study for Different Neuron Models
We first conducted a set of ablation experiments to verify the effectiveness of the proposed SRIF model on CIFAR-10(100) using ResNet20 as the backbone under various timesteps without MPR. The results are shown in Tab. 1.
It can be seen that, whether on CIFAR-10 or CIFAR-100, the SRIF neuron always obtains the best result ranging from 2 timesteps to 8 timesteps. This indicates the superiority of the SRIF neuron. On the other hand, the LIF neuron performs better than the “Hard Reset” IF neuron on CIFAR-10, while the IF neuron performs better on CIFAR-100, even though the LIF neuron is more like a biological neuron. This comparison also shows that, although SNNs are proposed to imitate biological neural networks, the implementation of large-scale networks still needs to rely on computer hardware; hence, the characteristics of computational science should also be considered. In this respect, the SRIF neuron is more suitable for its advantages of low power consumption and the capacity to reduce information loss.
§.§ Addition of MPR
Then, a set of ablation experiments for the MPR were conducted on CIFAR-10(100) using ResNet20 and ResNet19 as backbones within 4 timesteps. Results in Tab. 2 show that the MPR can greatly improve performance. Especially on CIFAR-100, where ResNet20 with MPR increases the accuracy by 2.73%. These results verify the effectiveness of MPR in terms of performance improvement.
We also computed the average quantization error of the first layer of the second block in the ResNet20/19 before and after MPR on the test set of CIFAR-10(100), respectively. Results in Tab. 3 show that the quantization error is obviously reduced by the MPR. The overall original membrane potential distribution and modulated membrane potential distribution by MPR of the first layer of the second block in ResNet20 on CIFAR-10 and CIFAR-100 test sets are shown in Fig. <ref>. It shows that the MPR adjusts the membrane potential distribution near “0" and “1", which is closer to its quantization spike. Put together, these results quantitatively support the effectiveness of MPR in reducing quantization error.
§.§ Comparisons with Other Methods
Our method was further compared with other state-of-the-art SNNs on static and neuromorphic datasets. Results are shown in Tab. 4, where for each run, the mean accuracy and standard deviation of 3 trials are listed. For simplification, InfLoR (i.e., short for Information Loss Reducing) is used to denote the combination of SRIF and MPR.
CIFAR-10(100).
For CIFAR-10, our method improves network performance across all commonly used backbones in SNNs. The ResNet19-based InfLoR-SNN achieved 96.49% top-1 accuracy with 6 timesteps, outperforming its STBP-tdBN counterpart by 3.33% and even its ANN counterpart by 0.20%. The ResNet20-based InfLoR-SNN reaches 93.65%, compared with only 92.54% in <cit.>. Our VGG16-based network also shows higher accuracy than other methods with fewer timesteps. On CIFAR-100, InfLoR-SNN also performs better and achieves a 1.89% improvement on VGG16. Notably, InfLoR-SNN significantly surpasses Diet-SNN <cit.> with 7.12% higher accuracy, which is not easy to achieve in the SNN field. Again, our ResNet19 also outperforms its ANN counterpart. To the best of our knowledge, it is the first time that an SNN can outperform its ANN counterpart.
ImageNet.
For the ImageNet dataset, ResNet18 and ResNet34 were used as the backbones. Results show that our ResNet18 achieves a 1.60% improvement over SEW ResNet18 and a 2.46% improvement over Spiking ResNet18. The accuracy of our ResNet34 does not exceed that of SEW ResNet34. However, SEW ResNet34 <cit.> transmits information with integers, and is thus not a typical SNN. For a fair comparison, we also report the result of Spiking ResNet34 in <cit.>, which is worse than our method. Moreover, our InfLoR-based ResNet34 with 4 timesteps still clearly outperforms STBP-tdBN-based ResNet34 with 6 timesteps.
CIFAR10-DVS.
For the neuromorphic dataset, CIFAR10-DVS, InfLoR-SNN achieves the best performance with 75.50% and 75.10% top-1 accuracy in 10 timesteps with ResNet19 and ResNet18 as backbones, and obtains a 7.80% improvement compared with STBP-tdBN for ResNet19. It is worth noting that, as a more complex model, ResNet19 performs only a little better than ResNet20 on CIFAR10-DVS. This might be because the neuromorphic dataset suffers from much more noise than static ones, so a more complex model is more prone to overfitting.
§ CONCLUSIONS
This work aims at addressing the information loss problem caused by the “Hard Reset” mechanism of neurons and the 0/1 spike quantization. The SRIF model, which drives the membrane potential to a dynamic reset potential, and the MPR, which adjusts the membrane potential to a new value closer to the quantization spikes than itself, are proposed. A detailed analysis of why the SRIF and MPR can reduce the information loss is provided. Furthermore, abundant ablation studies of the proposed methods are given. Combining these two methods, our SNNs outperform other state-of-the-art methods.
|
http://arxiv.org/abs/2307.04872v1 | 20230710194154 | The Synthesis Lab: Empowering Collaborative Learning in Higher Education through Knowledge Synthesis | [
"Xinran Zhu",
"Hong Shui",
"Bodong Chen"
] | cs.HC | [
"cs.HC",
"cs.CY"
] |
[email protected]
0003-0064-4861
University of Pennsylvania
Philadelphia
United States
[email protected]
University of Minnesota
Minneapolis
United States
[email protected]
University of Pennsylvania
Philadelphia
United States
The ability to synthesize information has emerged as a critical skill for success across various fields. However, within the field of education, there is a lack of systematic understanding and well-defined design infrastructures that address the mechanisms and processes of knowledge synthesis in collaborative learning settings. In this poster, we introduce a design innovation – The Synthesis Lab, which aims to support students in synthesizing ideas from their online discussions in higher education classrooms. The tool offers structured work-spaces for students to decompose the synthesis process into intermediate synthesis products and features two key iterative processes of knowledge synthesis in collaborative settings: categorizing peers’ ideas into conceptual building blocks and developing a synthesis of the discussions. Future implementation and evaluation of the design will make significant contributions to both research and practice.
[500]Human-centered computing Collaborative and social computing systems and tools
[500]Applied computing Collaborative learning
The Synthesis Lab: Empowering Collaborative Learning in Higher Education through Knowledge Synthesis
Bodong Chen
August 12, 2023
====================================================================================================
§ INTRODUCTION
In our ever-evolving world, where information flows incessantly amidst unprecedented technological advancements and the growth of artificial intelligence, the ability to synthesize information has emerged as a critical skill for success across various fields. Just as a scientist combines different reactants in a chemical experiment to create new substances, or a composer weaves melodies into a harmonious symphony, knowledge synthesis can be seen as both an art and a science. It involves skillfully and strategically weaving together diverse strands of information to foster conceptual innovation, generate novel knowledge, and design creative solutions <cit.>.
Knowledge synthesis is one important form of cognition in human learning and collaboration. In contrast to other cognitive processes such as interpreting and evaluating new information, synthesis-making requires efforts to rise above current levels of explanation which results in understanding phenomena on a higher plane and the creation of new concepts <cit.>. Within organizations or learning communities, the synthesis-making process becomes even more intricate when individuals engage in dynamic interactions, encountering a broad range of perspectives and resources, which all make the synthesis process challenging.
Research from various disciplines has examined processes or concepts related to knowledge synthesis in collaborative/cooperative settings from multiple perspectives. In CSCW, scholars have emphasized key components of scholarly knowledge synthesis, such as capturing context information and information reuse <cit.>. Similarly, in creativity research, researchers have used constructs that are closely related. Javadi and Fu <cit.> investigated “idea integration” in electronic brainstorming as a process for “adoption, exploitation, combination or synthesis” of multiple ideas. In information sciences, Robert et al. <cit.> refers to “knowledge integration” as “the synthesis of individual team members’ information and expertise through 'social interactions'.” These studies have shed light on the importance of effective knowledge synthesis in enhancing collaboration and improving outcomes in diverse domains.
Moreover, educational research has also touched upon concepts related to knowledge synthesis. DeSchryver <cit.> developed a framework for web-mediated knowledge synthesis which includes six strategies for individuals such as divergent keyword search, synthesis for meaning, in-the-moment insights, repurposing (e.g., engaging learners to evolve the original ideas with their own added value), reinforcement (e.g., justifying new ideas by revisiting the sources or further discussion with peers), and note-taking. Recent work framed synthesis as a “trans-disciplinary skill” that “encapsulate the ways in which creative people think.” <cit.> In CSCW’s sister field – Computer-Supported Collaborative Learning (CSCL), knowledge synthesis plays a crucial role in collaborative learning by helping students distill, connect, organize, and analyze the information to deepen their thinking. For example, the knowledge building model emphasizes the notion of “rise above” in knowledge building discourse to synthesize and build on previous ideas that leads to the development of novel knowledge <cit.>. However, there is a lack of systematic understanding regarding the mechanisms and processes of knowledge synthesis in CSCL. Important questions remain unanswered, necessitating both theoretical and empirical investigation in the field. For instance, how do students synthesize ideas generated in collaborative discourse? How can the knowledge synthesis process support learning and collaboration? And how can the synthesis be used to orchestrate various learning events? Such understanding is essential for informing the design of learning systems, technologies, and pedagogies to support effective knowledge synthesis.
To address this gap, we initiated a Design-Based Research <cit.> project, connecting theories and designs in CSCW and CSCL, to understand and support knowledge synthesis through a series of ongoing design innovations. Drawing on the socio-cognitive stances, this project aims to 1) support the knowledge synthesis process in CSCL through a series of design innovations, and 2) investigate the mechanism of knowledge synthesis in collaborative learning settings through empirical research. In this poster, we aim to showcase an early effort of this project, the design of a web application for supporting knowledge synthesis in college students’ online discussion activities – The Synthesis Lab. This application helps deconstruct the complex synthesis-making process into smaller building blocks and guides students through the key steps, including distilling, connecting, analyzing, rising above, and aggregating ideas generated from the discussions. These steps guide students to discover the interrelationship between peers’ posts other than the simply reply relationships, which leads to further rising above previous ideas and constructing coherent knowledge out of fragmentary information.
§ THE SYNTHESIS LAB
Informed by interdisciplinary literature, knowledge synthesis in the designed technology space is operationalized as a dynamic process encompassing the analysis and integration of ideas fostered through interactions with peers in digital environments. Situated in a collaborative setting, the overarching goal is to generate novel knowledge out of the conversation, while facilitating the orchestration of various learning activities and application scenarios. Serving as a tool for thinking, it nurtures higher order competences, such as creativity and collaboration, fostering a fertile ground for profound thinking and intellectual growth.
The Synthesis Lab (see Fig. <ref> for the interface and example user workflow) retrieves students’ online discussion data on a web annotation platform – Hypothesis (https://web.hypothes.is/), via its APIs. Hypothesis is a web annotation technology that allows users to collaboratively read, annotate, highlight, and tag on a shared document or web page. It has been widely used to support social reading in classrooms as a form of online discussion across universities <cit.>. For example, as part of their weekly routine, students engage with course readings by annotating the texts and responding to peers’ annotations prior to in-person class meetings.
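As an illustration of this retrieval step, annotations for one reading can be pulled from a private Hypothesis group roughly as in the sketch below; the token, group ID, and source URI are placeholders, and the field names follow the public Hypothesis search API as we understand it rather than the tool's actual implementation.

```python
import requests

API_URL = "https://api.hypothes.is/api/search"
TOKEN = "YOUR_HYPOTHESIS_API_TOKEN"   # placeholder

def fetch_annotations(group_id: str, source_uri: str, limit: int = 200):
    """Fetch class annotations for one reading from a Hypothesis group."""
    response = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"group": group_id, "uri": source_uri, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    rows = response.json().get("rows", [])
    # Keep only the fields the Distill workspace needs: author, text, and tags.
    return [
        {"user": r.get("user"), "text": r.get("text"), "tags": r.get("tags", [])}
        for r in rows
    ]
```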
Drawing inspiration from previous designs (e.g., <cit.>) and incorporating insights from interdisciplinary literature (e.g., <cit.>), The Synthesis Lab offers a structural framework to guide students’ synthesis process. The workflow within the tool revolves around two primary goals: categorizing peers’ ideas into conceptual building blocks (CBBs) <cit.> and developing a synthesis of the discussions. These goals are achieved through interaction across three vertically organized workspaces: Distill, Analyze, and Synthesize. This organization provides structured workspaces for students to decompose the tasks into intermediate synthesis products: in-source annotations, per-source summaries, and cross-source syntheses <cit.>. The design encourages students to fluidly navigate between these workspaces, allowing them to revisit annotations and thoughts iteratively, recognizing that the synthesis-making process is non-linear in nature.
§.§ Categorizing Peers’ Ideas into CBBs
Once students have selected the reading for analysis, they initiate the synthesis process by browsing through the class annotations. In the Distill column, students are able to filter annotations by keywords, authors, and tags. Meanwhile,
students start to analyze annotations by creating Annotation Groups in the Analyze column, where they categorize annotations into different categories following various strategies. For instance, some students may opt to group annotations by “applications” or “methodology”, while others may group them based on semantic meanings. This step allows students to organize ideas into CBBs, which become the metadata and contextual information for future synthesis work <cit.>. Additionally, students jot down their thoughts in the "In-the-moment Notes" box to document the contextual information surrounding their decisions. This step encourages active analysis of peers' ideas and the meaningful integration of concepts.
§.§ Developing a Synthesis of the Discussions
Following their analysis of individual annotations, this step prompts students to shift their attention to the Annotation Groups in order to identify connections or reconsider their grouping strategies. For example, they can merge two groups as a new group (combining CBBs to a higher level CBB) or transfer annotations from one group to another. This process encourages students to repurpose and reinforce their learning by ruminating over the categories and revisiting the annotations/notes <cit.>. Ultimately, students start the synthesis writing phase in the Synthesize column, drawing upon all their existing notes and activities to compose a comprehensive synthesis.
§ CURRENT IMPLEMENTATION AND FUTURE DIRECTIONS
Using a co-design approach, we have been collaborating with instructors to develop class activities supported by The Synthesis Lab. To investigate the design enactment, we conducted a pilot study during the Spring 2023 semester in a graduate-level classroom at a large private university. Throughout the study, students actively engaged in social annotation activities on a weekly basis. Within each session, 2 to 3 students assumed the role of discussion leaders, facilitating in-class discussions. To effectively fulfill their role, the discussion leaders were required to synthesize the class annotations in preparation for the meetings. For this study, discussion leaders who volunteered to participate utilized The Synthesis Lab to support their synthesis process.
We collected various learning artifacts from the participants, including their annotations and synthesis writings. Additionally, we conducted follow-up interviews with three participants to gain further insights. Upon preliminary analysis of the collected data, we noticed that the synthesis strategies employed by students varied. For instance, one student adopted a deductive approach by initially creating Annotation Groups based on the abstract, and then assigning relevant annotations to these pre-defined groups. Conversely, another student employed an inductive approach, generating CBBs while reading annotations within the Distill space in an iterative manner. Furthermore, the analysis highlights the potential of this design in fostering a more profound understanding of the value of knowledge synthesis among students. It also demonstrates the capacity of the design to enhance students' synthesis skills and promote collaborative learning.
The first design iteration is currently in progress and is expected to be completed by Fall 2023. Our focus is on enhancing the interactivity between different workspaces, including the implementation of backlinks to maintain the connection between annotations and In-the-Moment Notes. Additionally, we are actively exploring how artificial intelligence (AI) can augment the synthesis process to further enhance the students' experience.
The expected contribution of this work will be three-fold. First, the proposed technology innovation has great potential for broader applications. Further, the investigation of the design's implementation aims to develop a framework of knowledge synthesis in collaborative learning that will make significant contributions to both research and practice. Finally, leveraging the synergies between CSCW and CSCL allows for a deeper understanding of the interplay between technology, social dynamics, and learning constructs within the knowledge synthesis process. This understanding, in turn, allows for the creation of meaningful design infrastructures that can contribute to both fields advancing our understanding of learning and collaboration processes, optimizing technology-supported interactions, and fostering creative knowledge creation.
|
http://arxiv.org/abs/2307.04506v2 | 20230710115703 | Distributed Decisions on Optimal Load Balancing in Loss Networks | [
"Qiong Liu",
"Chehao Wang",
"Ce Zheng"
] | eess.SP | [
"eess.SP"
] |
Distributed Decisions on Optimal Load Balancing
in Loss Networks
Qiong Liu1, Chenhao Wang2, Ce Zheng1
1Télécom Paris, Institut Polytechnique de Paris, France
2Beijing Normal University, China
Email: [email protected], [email protected], [email protected]
==========================================================================================================================================================================================================================
When multiple users share a common link in direct transmission, packet loss and link congestion may occur due to the simultaneous arrival of traffic at the source node. To tackle this problem, users may resort to an indirect path: the packet flows are first relayed through a sidelink to another source node, then transmitted to the destination. This behavior brings the problems of packet routing or load balancing: (1) how to maximize the total traffic in a collaborative way; (2) how self-interested users choose routing strategies to minimize their individual packet loss independently.
In this work, we propose a generalized mathematical framework to tackle the packet routing and load balancing issue in loss networks. In centralized scenarios with a planner, we provide a polynomial-time algorithm to compute the system optimum point where the total traffic rate is maximized. Conversely, in decentralized settings with autonomous users making distributed decisions, the system converges to an equilibrium where no user can reduce its loss probability through unilateral deviation. We thereby provide a full characterization of the Nash equilibrium and examine the efficiency loss stemming from selfish behaviors, both theoretically and empirically. In general, the performance degradation caused by selfish behaviors is not catastrophic; however, this gap is not monotonic and can take extreme values in certain specific scenarios.
load balancing, Nash equilibria, price of anarchy, network congestion, sidelink
§ INTRODUCTION
Since the seminal work of Erlang <cit.>, loss networks have played a crucial role in analyzing and optimizing stochastic systems involving simultaneous resource utilization and non-backlogging workloads (for an extensive overview, see <cit.>). Meanwhile, in the post-5G era, cloud-enabled networks have emerged as a dominant architecture, where multiple servers collect data from users and relay it to a central hub for final processing. To guarantee network efficacy, that is, that no server is either overburdened or underutilized, load balancing strategies are well studied, e.g., <cit.>. In this context, loss networks provide valuable mathematical frameworks for comprehending and enhancing load distribution within cloud-enabled networks.
Early load balancing research for cloud-enabled networks focused on centralized scenarios, where a centralized planner scheduled workloads to optimize aspects like performance-energy tradeoffs <cit.> and algorithmic considerations <cit.>. However, due to the stringent latency requirement for real-time decisions and the increasing signaling overhead caused by the large-scale deployment of servers and massive users, distributed decisions become a better solution. In this context, the complexity of the problem increases due to the non-cooperative and competitive behaviors among users within the system.
To address the challenges of load balancing in a distributed way, game theory provides a mathematical framework that describes and analyzes scenarios with interactive decisions <cit.>. Till now, some studies have demonstrated the efficacy of game-theoretic models in addressing load balancing problems. For instance, Mondal et al. <cit.> developed a game-theoretic model for load balancing among competitive cloudlets, while Yi et al. <cit.> investigated a similar problem, incorporating additional considerations of queue-aware strategies. In <cit.>, symmetric loss models where each source has an equal number of users are considered. However, previous studies mostly focused on limited cases of identical user strategies, which may not reflect real-world scenarios, i.e., different users may have different objectives and preferences. Therefore, further research is needed to develop game-theoretic models that can address the challenges of load balancing in a more general and realistic manner.
In this paper, we employ game theory to address load balancing in both distributed and centralized environments, where users have non-identical strategies and the number of users is not evenly distributed. Specifically, we consider the load balancing in a cloud-enabled network consisting of m source nodes (servers) {s_1,…,s_m} and one destination node (central hub) d. Each source s_i has n_i users seeking service, and the traffic originating from each user is assumed to follow an independent Poisson point process with an identical rate. The nodes in the network are connected by two types of communication links, namely sidelinks that connect two sources, and direct links that connect a source and destination. The sidelink has a random identical independent distribution (i.i.d) loss with a fixed probability q, and the direct link has a congestion loss that depends on the arrival rate and service rate of each server.
The user cannot split its traffic, and has to determine how to route all of its traffic from the source node arrived at to the destination node. There are two approaches for the traffic transmission: a direct path (DP) in which the packet goes directly from the source arrived at to the destination, and an indirect path (IP) in which the packet is first relayed to another source node and then takes the direct link from that node to the destination.
We treat packet loss probability as the performance metric in load balancing, instead of additive costs like delay or fees in classical routing games <cit.>, resulting in a non-additive and non-convex optimization process. Each user aims to minimize its own loss probability and engages in a game by strategically selecting its own path. In the end, no user can reduce its loss probability by unilateral deviation, and the system reaches the state of Nash Equilibrium (NE).
§.§ Our Contributions
Our work contributes to the load balancing game in the following aspects: First, we prove two lemmas related to the optimal solution when a centralized planner exists. Based on these lemmas, a low-complexity algorithm that maximizes the total traffic is proposed.
Second, we study the decentralized environment where decisions are made by autonomous and self-interested users. The necessary and sufficient conditions for a NE are derived, which depend on the number of users on the direct path and on each indirect path.
Moreover, since a NE may be suboptimal, we use the price of anarchy (PoA) <cit.> to measure the gap between the NE led by users' selfish behaviors and the system optimum achieved by the centralized planner.
The rest of the paper is structured as follows. The formal model and notations are presented in Section <ref>. In Section <ref>, we provide details to compute the optimal solution that maximizes the total traffic when a centralized planner exists. In Section <ref>, we study the NE in decentralized decision-making scenarios and analyze the efficiency loss stemming from selfish behaviors. In Section <ref>, a fine-grained analysis is performed on the existence of NE in various network configurations for a specific scenario involving two source nodes.
Numerical results are presented and discussed in Section <ref>. Finally, Section <ref> concludes the paper and outlines some future work.
§.§ Other related works
Routing games.
As a special class of congestion games, routing games in a network are problems of routing traffic to achieve the best possible network performance, and have been studied within various contexts and communities, for example, mathematics <cit.>, telecommunications <cit.>, and theoretical computer science <cit.>. The above references all share a cost framework that is additive over links, such as delays or tolls, and flow conserving (the amount entering a node equals the amount leaving it). Routing games with non-additive costs in loss networks are studied in <cit.>.
Braess-like paradox in distributed systems.
The Braess-like paradox is said to occur in a network system with distributed behaviors if adding an extra link or adding communication capacity to the system leads to a worse system performance. It widely exists in transportation networks and queuing networks. Bean et al. <cit.> show that it can occur in loss networks.
Kameda et al. <cit.> consider a model similar to ours in that a job (packet) can be processed directly or indirectly; however, they do not consider the loss probability. They identify a Braess-like paradox in which adding capacity to the channel may degrade the system performance on the response time.
Kameda and Pourtallier <cit.> characterize conditions under which such paradoxical behavior occurs, and give examples in which the degradation of performance may increase without bound.
§ MODEL AND PRELIMINARIES
We abstractly model our problem using a graph. Consider a network with m source nodes S={s_1,…,s_m} and one destination node d. For each source node s_i∈ S, let N_i be the set of users arriving at s_i, and n_i=|N_i| be the number of such users. Without loss of generality, we assume n_1 > n_2 >…>n_m. Denote [m]={1,…,m}. There is a total of n=∑_i∈[m]n_i users in the system, who are self-interested players in the game. We use the terms players and users interchangeably throughout this paper. Each user is identified with a flow (or traffic) of packets, which originates from the user and is assumed to form an independent Poisson process with an identical rate ϕ. See Fig. <ref> for illustration.
Each user controls its route that all its packets should follow. For a user associated with s_i∈ S, there are only two types of routes to ship these packets to the destination d: either a direct path (DP) (s_i,d), or an indirect two-hop path (IP) (s_i,s_j,d) for some s_j≠ s_i, in which the packet is first sent to another source s_j by the side link (s_i,s_j), and then passes through the direct link (s_j,d).
Strategies. For every source s_i, each user k∈ N_i decides a one-shot strategy 𝐩_k^(i)=( p_k1^(i),…,p_km^(i))^T∈ [0,1]^m with ∑_j∈[m]p_kj^(i)=1, where p_ki^(i) is the probability of routing all packets through DP, and p_kj^(i) (j∈[m],j≠ i) is the probability of routing all packets through IP (s_i,s_j,d).
When no confusion arises, we simply write the strategy 𝐩_k^(i) as 𝐩_k.
We focus on pure strategies in this paper: a strategy 𝐩_k is pure if 𝐩_k_∞=1, i.e., user k deterministically selects a route with probability 1 (for example, 𝐩_k=(0,1,0,…,0)^T).
Let 𝐩=(𝐩_1,…,𝐩_n) be the strategy profile of all users.
Loss probability and loss rate.
There are two types of losses: (1) Losses on side links. We assume that a packet originating from node s_i and relayed to node s_j is lost with a fixed probability q for every side link (s_i,s_j), independently of any other loss. Denote by q̅=1-q the probability that a packet is successfully relayed. (2) Congestion losses on direct links. We assume that there is no buffer to store the backlogged packets, so a packet will be lost when it enters a direct path that is occupied by the transmission of another packet. The transmission time of a packet on a direct link (s_i,d) is a random variable σ following a distribution 𝒳, which is assumed to be independent and identically distributed (i.i.d.) across packets.
Given strategy profile 𝐩, user k∈ N_i continuously sends packets that follow an independent Poisson process with rate p_ki^(i)·ϕ to DP (s_i,d), and an independent Poisson process of packets with rate p_kj^(i)·ϕ to IP (s_i,s_j,d), for any s_j≠ s_i.
Since there is a random
loss on the side link (s_i,s_j), the flow of packets from user k∈ N_i that arrive at the node s_j is also a Poisson process with rate q̅p_kj^(i)ϕ.
Thus, for each source s_i∈ S, the flow over the link (s_i,d) is Poisson distributed with a
traffic rate T_i(𝐩) given by
T_i(𝐩)=∑_k∈ N_ip_ki^(i)ϕ + ∑_j∈[m]\{i}∑_k∈ N_j p_ki^(j)q̅ϕ.
When no confusion arises, we simply write T_i(𝐩) as T_i.
The probability of no congestion loss on the direct link (s_i,d) equals the probability that there is no arrival during a transmission time σ, which is given by
𝔼_σ∼𝒳e^-T_iσ.
Following common practice, assume 𝒳 is an exponential distribution with a rate parameter μ (service rate) and mean 1/μ. Thus the probability of no congestion loss on (s_i,d) is
𝔼_σ∼𝒳e^-T_iσ=∫_0^+∞μ e^-μσ e^-T_iσdσ=μ/T_i+μ,
and the loss probability on link (s_i,d) is T_i/T_i+μ.
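As a quick sanity check on this closed form, the following minimal Python sketch (ours, not part of the paper) estimates by simulation the probability that no packet of a Poisson(T_i) flow arrives during an Exp(μ) transmission time; the function name and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def no_congestion_prob_mc(T, mu, n_samples=1_000_000):
    """Monte Carlo estimate of P(no Poisson(T) arrival during an Exp(mu) service time)."""
    sigma = rng.exponential(scale=1.0 / mu, size=n_samples)  # transmission times
    arrivals = rng.poisson(lam=T * sigma)                    # arrivals during each sigma
    return np.mean(arrivals == 0)

T, mu = 3.0, 1.0
print(no_congestion_prob_mc(T, mu))  # close to mu / (T + mu) = 0.25
```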
Given the strategy profile 𝐩, for s_i∈ S and k∈ N_i, the loss rate of user k is defined as
LR_k(𝐩) =[ p_ki^(i)T_i/T_i +μ+( 1 - p_ki^(i)) q +
( 1 - q) ∑_j∈[m]\{i} p_kj^(i)T_j/T_j +μ] ϕ,
and the loss probability of user k is LR_k(𝐩)/ϕ.
Total traffic. We measure the system efficiency by the total traffic rate arriving at the destination d. Given the strategy profile 𝐩,
the total traffic rate TR(𝐩) of the system can be derived in two ways. The first expression is derived as the summation of successful transmission rates on direct links:
TR(𝐩)=∑_i∈ [m]T_i·μ/T_i+μ
=μ[ m-∑_i∈ [m]μ/∑_k∈ N_i p_ki^(i)ϕ +∑_ j∈[m] \{i}∑_k∈ N_jp_ki^(j)q̅ϕ+μ]
where T_i is the traffic rate over link (s_i,d), and μ/T_i+μ is the probability of no congestion loss on (s_i,d).
The second expression is from users' perspective:
TR(𝐩) :=∑_i∈ [m]∑_k∈ N_i(ϕ-LR_k),
where ϕ-LR_k(𝐩) is the traffic rate of user k∈ N_i that successfully arrive at d. It is not hard to see that (<ref>) and (<ref>) are equivalent.
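To make this bookkeeping concrete, here is a small Python sketch (our own illustration, not the authors' code) that encodes a pure strategy profile by the direct link each user's packets finally use, computes the link traffics T_i and the per-user loss rates, and checks numerically that the two expressions for the total traffic coincide. All function and variable names are ours.

```python
import numpy as np

def link_traffic(routes, sources, phi, qbar):
    """T_j for every direct link (s_j, d); routes[k] = chosen link, sources[k] = origin."""
    m = max(sources) + 1
    T = np.zeros(m)
    for src, r in zip(sources, routes):
        T[r] += phi if r == src else qbar * phi
    return T

def user_loss_rates(routes, sources, phi, q, mu):
    qbar = 1.0 - q
    T = link_traffic(routes, sources, phi, qbar)
    LR = []
    for src, r in zip(sources, routes):
        if r == src:                      # direct path
            LR.append(phi * T[r] / (T[r] + mu))
        else:                             # indirect path relayed through s_r
            LR.append(phi * (q + qbar * T[r] / (T[r] + mu)))
    return np.array(LR), T

# Example: m = 2 sources, n_1 = 3 and n_2 = 1 users; one user of s_1 relays via s_2.
sources = [0, 0, 0, 1]
routes  = [0, 0, 1, 1]
phi, q, mu = 1.0, 0.2, 1.0
LR, T = user_loss_rates(routes, sources, phi, q, mu)
TR_links = np.sum(mu * T / (T + mu))      # summation over direct links
TR_users = np.sum(phi - LR)               # summation over users
print(TR_links, TR_users)                 # the two expressions coincide
```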
Nash equilibria. A Nash equilibrium (NE) is a strategy profile where no player can decrease its loss probability by unilaterally deviating to any other strategy. Formally, we give a definition.
A strategy profile 𝐩 is a Nash equilibrium, if for any source s_i∈ S and any player k∈ N_i, we have
LR_k(𝐩_k,𝐩_-k)≤ LR_k(𝐩_k',𝐩_-k),
where 𝐩_k' can be any feasible strategy of player k, and 𝐩_-k is the strategy profile of all other players.
We measure the efficiency of NEs by the price of anarchy (PoA) <cit.>, which is defined as the ratio between social efficiencies in an optimal solution and in the worst NE.
Formally, given an instance Γ of this game, we define
PoA(Γ)=TR(opt)/min_𝐩∈ℕ𝔼TR(𝐩),
where opt is an optimal solution of Γ, and ℕ𝔼 is the set of all NEs. The PoA of the whole game is defined as the maximum over all instances, that is, PoA=max_ΓPoA(Γ).
§ CENTRALIZED ANALYSIS
We now present the main technical results of the paper. We show how to compute an optimal solution that maximizes the total traffic.
Note that the total traffic rate depends on the number of users working on each source by DP or IP, but not on the users' identities.
Given a strategy profile 𝐩, let u_i=|{k∈ N_i | p_ki^(i)=1}| be the number of users working with DP (s_i,d), and let v_i=|{k∈ N_j, j ∈ [m] \i| p_ki^(j)=1}| be the number of users working with IP through link (s_i,d). Define y_i=u_i+v_i as the number of users who choose source s_i (including both DP and IP).
In any optimal solution, for any source s_i, either u_i=n_i or v_i=0 or both hold.
Let 𝐩 be an optimal solution. Suppose for contradiction that u_i<n_i,v_i>0 for some source s_i. Then there exists a user (say, k) in N_i who chooses IP (say, (s_i,s_i',d) for some i'≠ i). Also, since v_i>0, there exist a source s_j≠ s_i and a user l∈ N_j who chooses IP (s_j,s_i,d). The total traffic rate is TR(𝐩)=μ T_i/T_i+μ+∑_w∈ [m]\{i}(μ T_w/T_w+μ).
Now we show that the total traffic rate can be improved by revising 𝐩. Let user k∈ N_i choose DP, and let user l∈ N_j choose IP (s_j,s_i',d). Fixing all others' strategies, denote the new strategy profile by 𝐩', and define u_i',v_i' accordingly. Note that u_i'=u_i+1,v_i'=v_i-1, and T_w(𝐩')=T_w(𝐩) for all source s_w≠ s_i. Since q>0, we have
T_i(𝐩')=(u_i+1)ϕ+(v_i-1)q̅ϕ> u_iϕ+v_iq̅ϕ=T_i(𝐩).
So TR(𝐩')>TR(𝐩), contradicting the optimality.
Lemma <ref> indicates that if a source (say s_i) provides service to the users of other sources, then all users of s_i choose DP.
In any optimal solution, there must exist ĩ∈[m], such that v_l=0 for all l≤ĩ, and u_j=n_j for all j>ĩ.
Given an optimal solution 𝐩, suppose for contradiction that there exist i,j∈[m] (i<j) such that v_i>0 and u_j<n_j. By Lemma <ref>, we have u_i=n_i and v_j=0. There exists a source s_i' and a user k∈ N_i' selecting IP (s_i',s_i,d). There exists a source s_j' (j'≠ j) and a user k'∈ N_j selecting IP (s_j,s_j',d). Note that when i'=j and j'=i, users k and k' may coincide. The total traffic rate is
TR(𝐩) =μ T_i/T_i+μ+μ T_j/T_j+μ+∑_w∈ [m]\{i,j}(μ T_w/T_w+μ)
=μ(2-1/T_i+μ-1/T_j+μ)+∑_w∈ [m]\{i,j}(μ T_w/T_w+μ).
Now we show that the total traffic rate can be improved by revising 𝐩. Let user k choose IP (s_i',s_j',d) if i'≠ j' and choose DP (s_i',d) if i'=j'. Let user k'∈ N_j choose DP. Fixing all others’ strategies, denote the new strategy profile by 𝐩', and define u_i',v_i' accordingly. Note that v_i' = v_i-1, u_j' = u_j+1, and T_w(𝐩')=T_w(𝐩) for all other sources s_w ≠ s_i,s_j. Since i < j, it follows that n_i ≥ n_j > u_j. Therefore, we have
1/T_i+μ+1/T_j+μ=1/n_iϕ+v_iq̅ϕ+μ+1/u_jϕ+μ
> 1/n_iϕ+v_i'q̅ϕ+μ+1/u_j'ϕ+μ
=1/T_i'+μ+1/T_j'+μ,
which indicates that TR(𝐩)<TR(𝐩'), a contradiction.
Lemma <ref> shows that there exists a threshold ĩ:
1) if i>ĩ, all users from s_i choose DP;
2) if i≤ĩ, a portion of the users choose DP and the rest choose IP.
Now we are ready to present Algorithm <ref>. The main idea is to search for the threshold ĩ in Lemma <ref>. For each candidate ĩ, let B be the number of users selecting IP, all of whom come from L={s_l | l≤ĩ} and go to R={s_j | j>ĩ}. For every possible value of B, we compute the best possible way of extracting the B users from L and distributing them over R.
In Algorithm <ref>, step (a) is to make T_l (and thus no congestion probability μ/T_l+μ) as equal as possible for l∈[ĩ]. This can be realized by initializing u_l=n_l, and then removing players one by one from the highest u_l and updating until B players have been removed. The goal of step (b) is to make T_j (and thus μ/T_j+μ) as even as possible. This can be realized by initializing v_j=0, then adding users one by one to v_j'=min_j>ĩn_jϕ+v_jq̅ϕ+μ and updating, until B players have been added. These two steps guarantee that the B loads are distributed in an optimal way to maximize the traffic rate.
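The following Python sketch (ours) is a brute-force rendering of the search just described: it tries every candidate threshold ĩ and every number B of indirect-path users, performs the balancing steps (a) and (b), and keeps the assignment with the largest total traffic. It is meant only to illustrate the structure of Algorithm <ref>, not to reproduce its exact pseudocode or its O(mn^2) implementation.

```python
import numpy as np

def total_traffic(u, v, phi, qbar, mu):
    T = u * phi + v * qbar * phi
    return np.sum(mu * T / (T + mu))

def centralized_optimum(n, phi, q, mu):
    n = np.asarray(sorted(n, reverse=True), dtype=float)  # n_1 > n_2 > ... > n_m
    m, qbar = len(n), 1.0 - q
    best_tr, best_uv = -np.inf, None
    for i_tilde in range(m + 1):               # sources 0 .. i_tilde-1 form L
        L, R = np.arange(i_tilde), np.arange(i_tilde, m)
        max_B = int(n[L].sum()) if len(L) and len(R) else 0
        for B in range(max_B + 1):
            u, v = n.copy(), np.zeros(m)
            # (a) remove B users from L, always from the currently largest u_l
            for _ in range(B):
                l = L[np.argmax(u[L])]
                u[l] -= 1
            # (b) add the B users to R, always to the currently least-loaded link
            for _ in range(B):
                j = R[np.argmin(n[R] * phi + v[R] * qbar * phi)]
                v[j] += 1
            tr = total_traffic(u, v, phi, qbar, mu)
            if tr > best_tr:
                best_tr, best_uv = tr, (u, v)
    return best_tr, best_uv

tr, (u_star, v_star) = centralized_optimum([6, 3, 2], phi=1.0, q=0.1, mu=2.0)
print(round(tr, 4), u_star, v_star)
```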
Though the output of the algorithm is (u_i^*,v_i^*)_i∈ N, we can easily extend it to a corresponding strategy profile because the situation for each source s_i has been determined. Next we prove its optimality.
Algorithm <ref> returns an optimal solution for maximizing the total traffic, and runs in O(mn^2) time.
In the first loop, we traverse all indexes in [m] to find the ĩ in Lemma <ref>. In the second loop, we traverse all possible numbers of users who select IP, and given any such a number B, we extract the B users from {s_l | l≤ĩ} and distribute them over {s_j | j>ĩ} in an optimal way to maximize the traffic rate. So all possible optimal solutions have been searched by the algorithm, giving the optimality.
For the time complexity, we have m iterations in the first loop, at most n iterations in the second loop, and the time for each iteration is O(n).
Intuitively, when the transmission loss probability is sufficiently large, all packets should go through DP; when there is no transmission loss, the load of packets should be distributed evenly over all sources. We verify the intuition as follows.
If q=1, the unique optimal solution is that all users choose DP (i.e., u_i=n_i, v_i=0, ∀ i∈ M). If q=0, a strategy profile 𝐩 is optimal if and only if |y_i-y_j|≤ 1 for all i,j∈ [m].
If q=1, TR=∑_i∈[m]μ u_iϕ/u_iϕ+μ is increasing with respect to every u_i. By the monotonicity, the optimum is achieved when u_i=n_i. If q=0, suppose for contradiction that there exist i,j∈ [m] in an optimal solution 𝐩 such that y_i-y_j≥ 2. The total traffic rate is TR=μ(m-∑_k∈ [m]μ/y_kϕ+μ). Consider a new strategy profile 𝐩' with y_i'=y_i-1,y_j'=y_j+1, i.e., a user who chooses source s_i deviates to s_j. Then the total traffic rate becomes TR'=μ(m-∑_k∈[m]\{i,j}μ/y_kϕ+μ-μ/y_i'ϕ+μ-μ/y_j'ϕ+μ)>TR, a contradiction.
§ DECENTRALIZED ANALYSIS
In this section, we study the Nash equilibria in the decentralized decision-making scenario where each user makes a decision on the choice of DP or IP.
§.§ Characterization of NEs
A NE should satisfy that: for a user selecting DP, its loss rate will not decrease if it deviates to any IP; for a user selecting IP, its loss rate will not decrease if it deviates to DP or another IP.
We formalize it as the following characterization.
Given an arbitrary strategy profile 𝐩 with (u_i,v_i)_i∈ [m], let i^*∈argmin_i∈[m]{u_i+v_iq̅}, and let x_ij∈{0,1} be an indicator where x_ij=1 if there exists at least one user selecting IP (s_i,s_j,d). Then, 𝐩 is a NE, if and only if the following conditions are satisfied:
(i) for all i∈[m] with u_i>0,
we have
q̅(u_i+v_iq̅)≤ u_i^*+v_i^*q̅+q̅+qμ/ϕ;
(ii) for all i,l∈[m] with x_il=1, we have
u_l + v_lq̅≤min{q̅( u_i + 1 + v_iq̅) -qμ/ϕ, u_i^*+ v_i^*q̅+q̅}
Proof sketch: Suppose 𝐩 is a NE. Consider any source s_i∈ S and user k∈ N_i.
Case 1. User k selects DP in 𝐩 (denoted as 𝐩_k^(i) where p_ki^(i)=1). If it deviates to IP (s_i,s_j,d) where j ≠ i (denoted as 𝐩'_k^(i) where p_kj^i=1), by Definition <ref>, we have LR_k(𝐩_k^(i),𝐩_-k)≤ LR_k(𝐩'_k^(i),𝐩_-k), which is equivalent to (<ref>).
Case 2. User k selects IP (s_i,s_j,d) in 𝐩. If it deviates to DP, this leads to the first part of (<ref>). If it deviates to another IP, this leads to the second part of (<ref>).
Suppose 𝐩 is a NE. Consider an arbitrary source s_i∈ S and arbitrary user k∈ N_i.
Case 1. User k selects DP in 𝐩 (denoted as 𝐩_k^(i) where p_ki^(i)=1). If it deviates to IP (s_i,s_j,d) where j ≠ i (denoted as 𝐩'_k^(i) where p_kj^i=1), by Definition <ref>, we have LR_k(𝐩_k^(i),𝐩_-k)≤ LR_k(𝐩'_k^(i),𝐩_-k). It is equivalent to
1-μ/u_iϕ+v_iq̅ϕ+μ≤ q+q̅·(1-μ/u_jϕ+(v_j+1)q̅ϕ+μ)
⇔ q̅/u_jϕ+(v_j+1)q̅ϕ+μ≤1/u_iϕ+v_iq̅ϕ+μ
⇔ q̅(u_i+v_iq̅)-qμ/ϕ≤ u_j+(v_j+1)q̅,
The above inequality should hold for all j≠ i, and thus is equivalent to Equation (<ref>).
Case 2. User k selects IP (s_i,s_l,d) in 𝐩. If it deviates to DP (s_i,d), by Definition <ref>, we should have LR_k(𝐩_k^(i),𝐩_-k)≤ LR_k(𝐩'_k^(i),𝐩_-k). It is equivalent to
q+q̅·(1-μ/u_lϕ+v_lq̅ϕ+μ)≤ 1-μ/(u_i+1)ϕ+v_iq̅ϕ+μ
⇔ 1/(u_i+1)ϕ+v_iq̅ϕ+μ≤q̅/u_lϕ+v_lq̅ϕ+μ
⇔ u_l+v_lq̅≤q̅(u_i+1+v_iq̅)-qμ/ϕ.
Moreover, a NE must guarantee that user k will not deviate to another IP (s_i,s_j,0), and thus we should have
1-μ/u_lϕ+v_lq̅ϕ+μ≤ 1-μ/u_jϕ+(v_j+1)q̅ϕ+μ
⇔ u_l+v_lq̅≤ u_j+(v_j+1)q̅.
Note that the above inequality should hold for all j≠ i,l. Therefore, we obtain Equation (<ref>).
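As an illustration of how this characterization can be used in practice, the Python sketch below (ours) checks conditions (i) and (ii) for a given pure routing profile, encoded as in the earlier traffic-rate sketch; the numerical tolerance and all names are our choices.

```python
import numpy as np

def is_nash(routes, sources, phi, q, mu):
    """Test conditions (i) and (ii) above for a pure profile; routes[k] is the
    direct link used by user k's packets, sources[k] its origin."""
    qbar, eps = 1.0 - q, 1e-12
    m = max(sources) + 1
    u, v = np.zeros(m), np.zeros(m)
    ip_pairs = set()                        # (origin i, relay l) pairs actually used
    for src, r in zip(sources, routes):
        if r == src:
            u[src] += 1
        else:
            v[r] += 1
            ip_pairs.add((src, r))
    load = u + v * qbar                     # u_i + v_i * qbar for every link
    ref = load.min() + qbar + q * mu / phi
    for i in range(m):                      # condition (i): direct-path users
        if u[i] > 0 and qbar * load[i] > ref + eps:
            return False
    for (i, l) in ip_pairs:                 # condition (ii): indirect-path users
        bound = min(qbar * (u[i] + 1 + v[i] * qbar) - q * mu / phi,
                    load.min() + qbar)
        if load[l] > bound + eps:
            return False
    return True

# Example: two sources, three users at s_0 and one at s_1; one s_0 user relays via s_1.
print(is_nash(routes=[0, 0, 1, 1], sources=[0, 0, 0, 1], phi=1.0, q=0.2, mu=1.0))  # True
```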
§.§ Price of Anarchy
We investigate the price of anarchy in this section, which measures the efficiency of NE. We give an upper bound on the optimal total traffic rate, and a lower bound on the total traffic rate of any NE.
In an optimal solution 𝐩,
the total traffic rate is TR(𝐩)≤μ m (1-μ/n_1ϕ+μ).
Let i be the index stated in Lemma <ref>. It suffices to show, by contradiction, that in the optimal solution 𝐩, u_i+v_iq̅≤ n_i. First, for i=1, we have v_1=0, and thus it satisfies u_1+v_1q̅=u_1≤ n_1. For any i>1, suppose for contradiction that u_i+v_iq̅>n_i. Then v_i>0, and there exists a source s_j and a user k∈ N_j that chooses the IP (s_j,s_i,d), i.e., u_j<n_j. By Lemma <ref>, it must be that v_j=0, and thus T_j=u_jϕ. Denote the strategy as 𝐩 with p_ki^(j)=1.
The total traffic rate is
TR(𝐩)=μ( m - μ/T_j + μ - μ/T_i + μ - ∑_w∈[m]\{j}μ/T_w + μ)
We show that the total traffic rate can be improved with user k ∈ N_j deviating from IP (s_j, s_i, d) to DP (s_j,d).
Fixing the strategies of all others, denote by 𝐩' the new strategy profile, and define (u'_w,v'_w,T'_w)_w∈[m] accordingly. Note that u_j'=u_j+1, v_i'=v_i-1, u_i'=u_i, and T_w'=T_w for any w∈[m]\{j}. Since u_i+v_iq̅>n_1≥ n_j≥ u_j, we have
1/T_j+μ+1/T_i+μ=1/u_jϕ+μ+1/u_iϕ+v_iq̅ϕ+μ
> 1/u_j'ϕ+μ+1/u_i'ϕ+v_i'q̅ϕ+μ= 1/T_j'+μ+1/T_i'+μ.
This indicates that TR(𝐩')>TR(𝐩), a contradiction. Consequently, u_i+v_iq̅≤ n_i ≤ n_1. According to (<ref>), we have TR(𝐩)≤μ m(1-μ/n_1ϕ+μ).
Let z=min{n_m,n/4m-q̅-qμ/ϕ}. For every NE 𝐩, the total traffic rate satisfies TR(𝐩)≥μ(m-mμ/zϕ+μ).
Let i^*∈argmin_i∈[m]{u_i+v_iq̅}. Since TR(𝐩)≥μ m(1-μ/(u_i^*+v_i^*q̅)ϕ+μ), it suffices to prove that u_i^*+v_i^*q̅≥ z. If u_i^*+v_i^*q̅≥ n_m, we are done. We only need to consider the case when u_i^*+v_i^*q̅< n_m≤ n_i^*. There exist some users in N_i^* selecting IP. By Equation (<ref>), we have
u_i^*+v_i^*q̅≤q̅(u_i^*+1+v_i^*q̅)-qμ/ϕ.
By Theorem <ref>, for each i∈[m], if u_i>0, then q̅(u_i+v_iq̅)≤ u_i^*+v_i^*q̅+q̅+qμ/ϕ; if v_i>0, then u_i+v_iq̅≤ u_i^*+v_i^*q̅+q̅. In both cases, we obtain u_i+v_i/2≤ 2(u_i^*+v_i^*q̅+q̅+qμ/ϕ).
Summing up over all i∈[m], we have
n/2≤∑_i∈[m](u_i+v_i/2)≤ 2m (u_i^*+v_i^*q̅+q̅+qμ/ϕ),
which implies that u_i^*+v_i^*q̅≥n/4m-q̅-qμ/ϕ.
For any instance with m sources, the price of anarchy is PoA≤ 1+n_1μ/n_1zϕ+zμ, where z=min{n_m,n/4m-q̅-qμ/ϕ}.
Combining the upper bound in Lemma <ref> and the lower bound on TR(𝐩) for any NE 𝐩 in Lemma <ref>, it follows
PoA ≤m - mμ/n_1ϕ + μ/m - mμ/zϕ + μ = n_1(zϕ+μ)/z(n_1ϕ+μ) ≤ 1 + n_1μ/n_1zϕ + zμ.
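For reference, the bound can be assembled numerically as in the short sketch below (ours); z and the two lemma bounds follow the expressions above, and the example parameters are arbitrary.

```python
def poa_upper_bound(n, phi, q, mu):
    """Ratio of the optimum upper bound to the worst-NE lower bound derived above."""
    n = sorted(n, reverse=True)
    m, qbar = len(n), 1.0 - q
    z = min(n[-1], sum(n) / (4 * m) - qbar - q * mu / phi)
    if z <= 0:
        return float("inf")                          # bound is only informative for z > 0
    opt_ub = mu * m * (1 - mu / (n[0] * phi + mu))   # upper bound on TR(opt)
    ne_lb = mu * (m - m * mu / (z * phi + mu))       # lower bound on TR of any NE
    return opt_ub / ne_lb

print(round(poa_upper_bound([40, 30, 30], phi=1.0, q=0.1, mu=1.0), 4))  # roughly 1.11
```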
§ A PARTICULAR CASE: TWO SOURCES
In this section, we focus on the special case of m=2. That is, there are only two sources s_1 and s_2. Assume w.l.o.g. that n_1≥ n_2. For each user k ∈ N_i, there is only one IP. Accordingly, its strategy becomes 𝐩_k^(i) = (p_k1^(i),p_k2^(i)), i = 1, 2.
And we have
n_1 = u_1 + v_2;
n_2 = u_2 + v_1.
The traffic rate T_i(𝐩) in (<ref>) is rephrased as
T_i(𝐩) = ∑_k∈ N_ip_ki^(i)ϕ + ∑_k∈ N_j, j≠ i p_ki^(j)q̅ϕ = u_i ϕ + v_i q̅ϕ.
Given strategy profile 𝐩, the set N is further partitioned into 4 subsets (V_1,V_2,V_3,V_4) where V_1={k∈ N_1 | 𝐩_k^(1) =(1, 0)}, V_2={k∈ N_1 | 𝐩_k^(1) = (0,1)}, V_3={k∈ N_2 | 𝐩_k^(2)=(0,1)} and V_4={k∈ N_2 | 𝐩_k^(2)=(1,0)}.
Clearly, users in V_1 and V_3 choose DP, and users in V_2 and V_4 choose IP.
Suppose 𝐩 is a NE. We study the deviation of users in V_1,V_2,V_3,V_4, respectively.
For user k∈ V_1, the strategy is 𝐩_k^(1) = (p_k1^(1), p_k2^(1))=(1, 0), and the loss rate in (<ref>) is
LR_k( 𝐩 )
=ϕ T_1(𝐩)/T_1(𝐩)+μ = [1 - μ/ϕ/u_1 + v_1q̅ + μ/ϕ] ϕ.
When user k∈ V_1 deviates to IP, the strategy profile becomes 𝐩' = ( 𝐩'_k^(1), 𝐩_-k) where 𝐩_k'^(1) =(0, 1).
The loss rate of user k becomes
LR_i(𝐩') = qϕ+q̅ϕT_2(𝐩')/T_2(𝐩') +μ=[1 -q̅μ/ϕ/u_2 + (v_2 + 1)q̅+μ/ϕ] ϕ.
Since 𝐩 is NE, k has no incentive to deviate, and thus LR_k(𝐩)≤ LR_k(𝐩'),
which is equivalent to
t_1(u_2) : = qμ/ϕ+ u_2(1 +q̅^2) + (n_1 + 1)q̅-n_2q̅^2/2q̅≥ u_1,
where t_1(u_2) is a function with respect to variable u_2.
For user k∈ V_2 with strategy 𝐩_k^(1) =(0, 1), the loss rate is
LR_k(𝐩) = qϕ + q̅ϕT_2(𝐩)/T_2(𝐩) + μ = [1 - q̅μ/ϕ/u_2 + v_2q̅ + μ/ϕ] ϕ.
When user k∈ V_2 deviates to DP, the strategy profile becomes 𝐩' = ( 𝐩'_k^(1), 𝐩_-k) where 𝐩_k'^(1)=(1, 0).
The loss rate of k becomes
LR_k(𝐩') = ϕ T_1(𝐩')/T_1(𝐩') + μ = [ 1 - μ/ϕ/u_1+1+v_1q̅+μ/ϕ]ϕ.
Since 𝐩 is NE, we have LR_k(𝐩)≤ LR_k(𝐩'), that is,
u_1≥qμ/ϕ+ u_2(1 +q̅^2) + (n_1 - 1)q̅- n_2q̅^2/2q̅ = t_1(u_2) - 1.
Symmetrically, for each user k∈ V_3 and k∈ V_4, since 𝐩 is NE, we have
t_2(u_1)-1 ≤ u_2 ≤ t_2(u_1),
where
t_2(u_1) := qμ/ϕ + u_1(1+q̅^2)+(n_2+1)q̅ - n_1q̅^2/2q̅.
Note that Eqs. (<ref>)-(<ref>) are the necessary and sufficient conditions for an arbitrary strategy profile 𝐩 to be a NE.
Now we are ready to give a characterization of NEs.
Let 𝐩 be an arbitrary strategy profile for the game with two sources. Let u_1 and u_2 be the number of users in N_1 and N_2 who choose DP under 𝐩, respectively. We have
[1] when (a) u_1=n_1,u_2<n_2, or (b) u_1=0,u_2>0, 𝐩 cannot be a NE;
[2] when u_1∈[0,n_1),u_2∈[0,n_2), 𝐩 is NE if and only if u_1≥ t_1(u_2)-1 and u_2≥ t_2(u_1)-1;
[3] when u_1∈(0,n_1),u_2=n_2, 𝐩 is NE if and only if u_1∈[t_1(u_2)-1,t_1(u_2)];
[4] when u_1=n_1,u_2=n_2, 𝐩 is NE if and only if n_1q̅≤ qμ/ϕ+n_2+q̅.
Given 𝐩, let (V_1,V_2,V_3,V_4) be a partition of N as defined above. We discuss the four cases.
Case 1. When (a) u_1=n_1 and u_2<n_2, V_4 is nonempty. If 𝐩 is a NE, it must satisfy t_2(u_1)-1≤ u_2.
However, because q̅u_2≤ n_1,q̅u_2≤ (n_2-1)q̅ and qμ/ϕ>0, it cannot hold.
When (b) u_1=0 and u_2>0, V_2 is nonempty. If 𝐩 is a NE, it must satisfy Eq. (<ref>), that is, u_1≥ t_1(u_2)-1. It follows that 0=2q̅u_1≥ qμ/ϕ+u_2(1+q̅^2)+(n_1-1)q̅-n_2q̅^2≥ qμ/ϕ+1+(n_1-1)q̅-(n_2-1)q̅^2≥ qμ/ϕ+1>0, a contradiction.
Case 2. When u_1∈[0,n_1),u_2∈[0,n_2), V_2 and V_4 are nonempty. It is easy to see that 𝐩 is NE if and only if u_1≥ t_1(u_2)-1 and u_2≥ t_2(u_1)-1 are satisfied simultaneously.
Case 3. When u_2=n_2,u_1∈(0,n_1), V_1,V_2,V_3 are nonempty, and V_4 is empty. 𝐩 is NE if and only if t_1(u_2)-1≤ u_1≤ t_1(u_2) and u_2≤ t_2(u_1) hold simultaneously. Moreover, note that u_2≤ t_2(u_1) is implied by u_1≥ t_1(u_2)-1. Therefore, the sufficient and necessary condition for NE is u_1∈[t_1(u_2)-1,t_1(u_2)].
Case 4. When u_1=n_1,u_2=n_2, V_1,V_3 are nonempty, and V_2,V_4 are empty. 𝐩 is NE if and only if u_1≤ t_1(u_2) and u_2≤ t_2(u_1). It is easy to see that, it is equivalent to n_1q̅≤ qμ/ϕ+n_2+q̅.
Note that every situation of u_1,u_2 is included in the above four cases. So we complete a characterization.
Case 4 can be intuitively explained by considering the sidelink loss probability q over link (s_1,s_2). If q is sufficiently high, no user would prefer the indirect path, and selecting the direct path would be a NE for all users. Conversely, when there is no transmission loss over sidelink (s_1,s_2) (i.e., q=0), every user would prefer to use the source with fewer users. Therefore, the profile of all users selecting DP is a NE only if the user distribution between the two sources is as even as possible, with n_1≤ n_2+1. Based on Theorem <ref>, we give some interesting conclusions.
If a strategy profile with u_1=n_1,u_2=n_2 is optimal, then it is also a NE.
A strategy profile with u_1=0,u_2=0 is a NE, if and only if (a) n_1=n_2+1,q̅=1, or (b) n_1=n_2,n_1(1-q̅^2)≤q̅-qμ/ϕ.
Note that u_1≥ t_1(u_2)-1 and u_2≥ t_2(u_1)-1 cannot hold simultaneously when q>2/n, and u_1≥ t_1(n_2)-1 cannot hold when n_1q̅< qμ/ϕ+n_2+q̅.
When n_1q̅< qμ/ϕ+n_2+q̅ and q>2/n, the unique NE is that all users choose DP, i.e., u_1=n_1,u_2=n_2.
We end this section by proving the existence of NE.
For any game instance with two sources, there exists a NE with u_1>0 and u_2=n_2.
By Theorem <ref> (4), if n_1q̅≤ qμ/ϕ+n_2+q̅, then the strategy profile that all users choose DP (i.e., u_1=n_1,u_2=n_2) is a NE. Otherwise, n_1q̅> qμ/ϕ+n_2+q̅. Let m̃ be an integer in interval [qμ/ϕ+n_2+n_1q̅-q̅/2q̅,qμ/ϕ+n_2+n_1q̅+q̅/2q̅]=[t_1(n_2)-1,t_1(n_2)], which always admits at least one integer. Note that n_1>qμ/ϕ+n_2+n_1q̅+q̅/2q̅≥m̃>0. By Theorem <ref>, a strategy profile with u_1=m̃ and u_2=n_2 is a NE.
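To complement the analysis of this section, the Python sketch below (ours) brute-forces the two-source game: it enumerates all pure profiles (u_1,u_2), keeps those in which no single user can lower its loss probability by switching paths, and reports the resulting equilibria together with the empirical PoA. Consistent with the existence result above, the search always returns at least one equilibrium with u_2=n_2. The parameter values are arbitrary.

```python
import numpy as np

def two_source_profiles(n1, n2, phi, q, mu):
    """Enumerate pure profiles, test single-user deviations, return (NEs, empirical PoA)."""
    qbar, eps = 1.0 - q, 1e-12
    nash, best_tr, worst_ne_tr = [], -np.inf, np.inf
    for u1 in range(n1 + 1):
        for u2 in range(n2 + 1):
            v1, v2 = n2 - u2, n1 - u1             # relayed users on links 1 and 2
            T1, T2 = u1 * phi + v1 * qbar * phi, u2 * phi + v2 * qbar * phi
            tr = mu * T1 / (T1 + mu) + mu * T2 / (T2 + mu)
            best_tr = max(best_tr, tr)
            ok = True
            if u1 > 0:   # a DP user at s_1 deviating to the indirect path
                ok &= T1/(T1+mu) <= q + qbar*(T2+qbar*phi)/((T2+qbar*phi)+mu) + eps
            if v2 > 0:   # an IP user from s_1 deviating back to its direct path
                ok &= q + qbar*T2/(T2+mu) <= (T1+phi)/((T1+phi)+mu) + eps
            if u2 > 0:
                ok &= T2/(T2+mu) <= q + qbar*(T1+qbar*phi)/((T1+qbar*phi)+mu) + eps
            if v1 > 0:
                ok &= q + qbar*T1/(T1+mu) <= (T2+phi)/((T2+phi)+mu) + eps
            if ok:
                nash.append((u1, u2))
                worst_ne_tr = min(worst_ne_tr, tr)
    return nash, best_tr / worst_ne_tr

nash, poa = two_source_profiles(n1=8, n2=3, phi=1.0, q=0.3, mu=1.0)
print(nash, round(poa, 4))
```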
§ NUMERICAL EXPERIMENTS
Through numerical simulations, we explore the impact of traffic conditions on network performance, i.e., the total traffic rate and the PoA.
Recall that the traffic flow originating from each user is Poisson with rate ϕ,
the service rate of each direct link is μ, and the loss probability over each side link is q. Assume ϕ=1 for normalization.
We first present the simulation results for two-source networks. In Fig. <ref>, the PoA and the total traffic rates are plotted under different q, μ and n_1, showing a PoA of less than 1.08. Such a small gap between the optimal solution and the worst NE suggests that the gain of centralized decision-making over decentralized decision-making is marginal most of the time. As shown in Fig. <ref>, the total traffic decreases with the increase of q, i.e., the increased loss rate on the sidelink. On the other hand, the PoA first increases from 1 at q=0, implying that the NE and the optimal solution are the same, with u_1 = n_2 + v_1. That is, we have an equal number of users on (s_1,d) and (s_2,d) in terms of both IP and DP. With the increase of q, the benefit of centralized decision-making is gradually unveiled. However, when q reaches a certain value, the PoA quickly goes back down to 1. An intuitive explanation is that, when q becomes larger than the loss probability on DP, no user will choose IP in the NE, and this strategy profile is optimal as well.
In Fig. <ref>, the traffic rates grow with the increase of μ due to the increased probability of no congestion in (<ref>). That is, a high service rate helps clear collisions and relieve congestion on both DP and IP. The PoA curve indicates that, in either overloaded or lightly congested scenarios, there is little improvement from centralized decision-making. In Fig. <ref>, the increased number of users leads to an increase of the traffic rate in spite of the rise in the loss rate. Moreover, the PoA tends towards 1 for both small and large n_1, since the strategies in the optimal solution and the NE are similar for users at source s_1, i.e., DP in the less biased scenario and IP in the severely biased scenario.
In the multi-source network, while the optimal solution can be easily computed by Algorithm <ref>, it is difficult to find all NEs even given Theorem <ref>. Hence, we only consider small values of m and n (i.e., m=3). The service rate and traffic arrival rate are fixed as μ=1, ϕ=1. Results are given in Fig. <ref>, which shows behavior similar to that in Fig. <ref>. It is obvious that the growth of the total traffic slows down gradually, because, given the service rate, an increase of n_1 aggravates the network congestion. Second, the increase of the loss rate on the sidelink leads to an increase of the loss rate on IP. As a result, more users choose DP instead, which in turn worsens the network congestion.
Figure <ref> plots the performances for a range of q. When q=0 and q=1, the PoA is exactly 1.
The PoA converges to 1 when q goes to 1, because when the service rate is large enough compared with the arrival rate, the congestion loss is sufficiently small and all users prefer to choose DP.
§ CONCLUSION
In this work, we give a theoretical analysis of a load balancing game in cloud-enabled networks, in which the users want to minimize the loss probability of their packets with suitable routing strategies. In the centralized analysis, an efficient algorithm for maximizing the total traffic rate is proposed, according to Lemma <ref> and Lemma <ref>. In the decentralized analysis, a characterization of Nash equilibrium is given, and the PoA is investigated. Numerical experiments show that the efficiency loss due to selfish behaviors is relatively small in most cases.
There are many future directions that are worth exploring. First, we only focus on pure strategies of players in this work, and an immediate and natural question is how the users act when mixed strategies are allowed. Second, it would be interesting to investigate heterogeneous servers (source nodes) where each s_i serves a different purpose or has a different service rate μ_i. Moreover, while we only consider direct path and one-hop indirect paths, a more general scenario where players can choose multi-hop indirect paths to the destination can be taken into consideration.
|
http://arxiv.org/abs/2307.04334v1 | 20230710041019 | Quasicrystalline second-order topological semimetals | [
"Rui Chen",
"Bin Zhou",
"Dong-Hui Xu"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics, Hubei University, Wuhan 430062, China
Department of Physics, Hubei University, Wuhan 430062, China
[][email protected]
Department of Physics, Chongqing University, Chongqing 400044, China
Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 400044, China
Three-dimensional higher-order topological semimetals in crystalline systems exhibit higher-order Fermi arcs on one-dimensional hinges, challenging the conventional bulk-boundary correspondence. However, the existence of higher-order Fermi arc states in aperiodic quasicrystalline systems remains uncertain. In this work, we present the emergence of three-dimensional quasicrystalline second-order topological semimetal phases by vertically stacking two-dimensional quasicrystalline second-order topological insulators. These quasicrystalline topological semimetal phases are protected by rotational symmetries forbidden in crystals, and are characterized by topological hinge Fermi arcs connecting fourfold degenerate Dirac-like points in the spectrum. Our findings reveal an intriguing class of higher-order topological phases in quasicrystalline systems, shedding light on their unique properties.
Quasicrystalline second-order topological semimetals
Dong-Hui Xu
August 12, 2023
====================================================
§ INTRODUCTION
Symmetry-protected topological phases of matter have emerged as a major new theme in modern condensed-matter physics in the past nearly two decades. While the discovery of topological insulators initially sparked interest in this field, recent focus has shifted towards exploring higher-order topological insulators <cit.>. Unlike traditional topological insulators, higher-order topological insulators exhibit unconventional bulk-boundary correspondence, allowing for the existence of gapless boundary excitations of higher co-dimensions. For example, a second-order topological insulator (SOTI) in two dimensions hosts robust gapless boundary modes localized at its zero-dimensional corners, dubbed corner modes <cit.>, while three-dimensional (3D) SOTIs support gapless boundary modes confined to their one-dimensional hinges <cit.>. In addition to higher-order topological insulators, higher-order topological semimetals have also been identified. These semimetals, including higher-order Dirac semimetals and higher-order Weyl semimetals, exhibit exotic hinge Fermi arcs that connect the projected nodes on the hinges, distinguishing them from conventional Dirac and Weyl semimetals <cit.>.
Initially, topological phases were observed in crystalline materials. However, more recently, researchers have extended these phases to aperiodic quasicrystalline systems, which lack discrete translational symmetry <cit.>. The absence of translational symmetry allows for the presence of rotational symmetries that are prohibited in crystals. This property enables the existence of new topological phases without crystalline counterparts, such as two-dimensional (2D) SOTIs protected by eightfold <cit.> and twelvefold <cit.> rotational symmetries. Moreover, a 3D time-reversal symmetry (TRS) breaking gapless topological phase hosting Weyl-like points has been proposed in a quasicrystal stack of Chern insulators <cit.>.
However, gapless phases with higher-order topology in quasicrystalline systems have yet to be discovered. This knowledge gap motivates us to explore the possibility of gapless quasicrystalline higher-order topological phases using a stacking approach with 2D quasicrystalline SOTIs. It has been demonstrated that stacking 2D topological materials provides a natural way of realizing 3D topological phases. This approach has been successful in achieving various topological phases, including Weyl semimetals <cit.>, axion insulators <cit.>, hinged quantum spin Hall insulators <cit.>, and high-Chern number quantum anomalous Hall insulators <cit.>.
In this work, we present the discovery of a quasicrystalline second-order topological semimetal (SOTSM) phase obtained by stacking 2D quasicrystalline SOTIs along the vertical direction (Fig. <ref>). The distinctive feature of the quasicrystalline SOTSM is the presence of rotation-symmetry-protected topological hinge Fermi arcs that terminate at fourfold degenerate Dirac-like points in the spectrum. The C_n^z-symmetric quasicrystalline SOTSM can support n topological hinge Fermi arcs (see the second column in Fig. <ref>), inheriting their topological nature from C_n^z-symmetric quasicrystalline SOTI hosting n corner modes (see the first column in Fig. <ref>). The number n can be four [Figs. <ref>(a) and <ref>(b)], as allowed in crystalline systems <cit.>, but it can also be eight [Figs. <ref>(c) and <ref>(d)] and twelve [Figs. <ref>(e) and <ref>(f)], which are typically forbidden in crystalline systems. Furthermore, we present the phase diagram of the stacked systems and identify a 3D quasicrystalline SOTI phase in addition to the quasicrystalline SOTSM phase. Finally, we show that the disclination-induced bound states can further reveal the topological nature of the quasicrystalline SOTSM phase.
This work is organized as follows. We first give a simple review of 2D quasicrystalline SOTI in Sec. <ref> and show a stack of it gives rise to the 3D quasicrystalline SOTSM phase with Dirac-like points in the spectrum in Sec. <ref>. A detailed discussion on Dirac-like points is presented in Sec. <ref>. Subsequently, we illustrate the phase diagram of the stacked quasicrystalline system in Sec. <ref> and investigate the disclination-induced bound state in Sec. <ref>. We summarize our conclusions and discuss possible experimental
schemes for the quasicrystalline SOTSM phase in Sec. <ref>.
§ REVIEW OF 2D QUASICRYSTALLINE SOTIS
2D quasicrystalline SOTIs have been proposed in the eightfold symmetric Ammann-Beenker-tiling (AB-tiling) quasicrystal <cit.> [Figs. <ref>(a) and <ref>(c)] and the twelvefold symmetric Stampfli-tiling quasicrystal <cit.> [Fig. <ref>(e)]. The AB-tiling quasicrystal consists of two types of primitive tiles: square tiles (yellow) and rhombus tiles (green) with a small angle of 45^∘. The Stampfli-tiling quasicrystal consists of three types of primitive tiles: square tiles (yellow), regular triangle tiles (red), and rhombus tiles (green) with a small angle of 30^∘.
In the tight-binding model, the lattice sites are placed on the vertices of each tile. The Hamiltonian of the 2D quasicrystalline SOTI contains two parts, H(M)=H_1st(M)+H_m <cit.>. The first part denotes a 2D first-order topological insulator protected by TRS
H_1st(M) = -∑_j≠ kZ(r_jk)/2[it_1( s _3τ _1cosϕ_jk+s _0τ _2sinϕ_jk)
+ t_2s _0τ_3] c_j^†c_k+∑_j(M+2t_2)s _0τ _3 c_j^†c_j,
where c^†_jα=(c^†_jα↑,c^†_jα↓) are electron creation operators at site j with the orbital α. t_1 and t_2 are hopping amplitudes, and M denotes the Dirac mass, which, together with t_2, determines the first-order topology. s_1,2,3 and τ_1,2,3 are the Pauli matrices acting on the spin and orbital spaces, respectively. s_0 is the 2× 2 identity matrix. ϕ_jk is the azimuthal angle of the bond between site j and k with respect to the horizontal direction. Z( r_jk) = e^1-r_jk/ξ is the spatial decay factor of hopping amplitudes with the decay length ξ.
The second part is a TRS breaking Wilson mass term, which is
H_m(η)=g∑_j≠ kZ(r_jk)/2cos( ηϕ_jk) s _1τ _1 c_j^†c_k,
where g and η describe the magnitude and varying period of the Wilson mass, respectively. H_m(η) is responsible for the higher-order topology <cit.>. In the subsequent calculations, we fix the side length of the tiles as a=1 (white lines connecting the vertices in Fig. <ref>) and ξ=t_1=1.
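To make the model concrete, the Python sketch below (ours, not the authors' code) assembles the real-space matrix of H(M)=H_1st(M)+H_m for an arbitrary array of 2D vertex coordinates; generating the actual Ammann-Beenker or Stampfli vertex sets is omitted. The basis ordering (spin ⊗ orbital), the bond-angle convention, and the hopping cutoff r_cut are our choices; the matrix is Hermitian for the even values of η used in the text.

```python
import numpy as np

s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0, -1.0])
tau1, tau2, tau3 = s1, s2, s3            # Pauli matrices acting on the orbital space

def build_hamiltonian(sites, M, eta, g, t1=1.0, t2=1.0, xi=1.0, r_cut=4.0):
    """Real-space H = H_1st + H_m with a 4x4 (spin x orbital) block per site."""
    N = len(sites)
    H = np.zeros((4 * N, 4 * N), dtype=complex)
    onsite = (M + 2 * t2) * np.kron(s0, tau3)
    for j in range(N):
        H[4*j:4*j+4, 4*j:4*j+4] = onsite
        for k in range(N):
            if j == k:
                continue
            d = sites[k] - sites[j]
            r = np.linalg.norm(d)
            if r > r_cut:                 # truncate the exponentially decaying tail
                continue
            phi = np.arctan2(d[1], d[0])  # assumed convention: angle of the bond j -> k
            Z = np.exp(1.0 - r / xi)
            hop = -Z / 2 * (1j * t1 * (np.cos(phi) * np.kron(s3, tau1)
                                       + np.sin(phi) * np.kron(s0, tau2))
                            + t2 * np.kron(s0, tau3))
            hop += g * Z / 2 * np.cos(eta * phi) * np.kron(s1, tau1)
            H[4*j:4*j+4, 4*k:4*k+4] = hop
    return H

# Toy usage on a handful of sites (not a true quasicrystal patch):
sites = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(np.round(np.sort(np.linalg.eigvalsh(build_hamiltonian(sites, M=-1.0, eta=2, g=0.5))), 3))
```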
For η=2,4,6, the Wilson mass gives rise to the SOTI phases in quasicrystals hosting four, eight, and twelve corner modes protected by the combined symmetry C_4^z U <cit.>, C_8^z U <cit.>, and C_12^z U <cit.>, respectively, where C_n^z is the n-fold rotational operation, and U could be the TRS operation T=i s_2τ_0 K or the mirror symmetry operation m_z=s_3 τ_0. K is the complex conjugation operator. The symmetry-protected eightfold and twelvefold corner modes, which are impossible in crystals <cit.>, are distinguishing characteristics of the 2D quasicrystalline SOTIs. Additionally, these corner modes are pinned to zero energy due to the existence of particle-hole symmetry.
The emergence of the zero-energy corner modes can be simply understood as follows <cit.>: g opens a gap in the first-order topological edge states and then induces Wilson mass kinks near the boundary. If one corner mode |ψ_c> appears at 𝐫_c, where the Wilson mass flips its sign, then the C_n^z U symmetry ensures that the number of corner modes is n, because C_n^z U|ψ_c> is also an eigenstate of the system, localized at another corner obtained by rotating 𝐫_c by an angle of 2π/n.
§ 3D QUASICRYSTALLINE SOTSMS
3D crystalline SOTSMs have been constructed by stacking 2D crystalline SOTIs along the vertical direction <cit.>. 3D quasicrystalline SOTSM phases can be achieved in a similar manner, i.e., by periodically stacking 2D quasicrystalline SOTIs with an orbital-dependent hopping t_z s_0τ_3 on each site <cit.>. After a Fourier transformation applied to the vertical direction z, the 3D stacked Hamiltonian can be expressed as
H_3D=∑_k_zH(M-2t_zcos k_z).
The conduction and valence bands in this model are doubly degenerate because of the presence of the combined symmetry PT of TRS and inversion symmetry, where P=s_0τ_3 is the inversion symmetry operator. It is necessary to point out that, when η=2, applying the stacked Hamiltonian to periodic cubic lattices gives rise to a 3D crystalline SOTSM <cit.> (see Appendix <ref>) with four hinge Fermi arcs connecting the projection of fourfold degenerate Dirac points that are well defined in the momentum space. Next, we investigate the situation where the Hamiltonian is defined on a stack of 2D quasicrystals.
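In practice, the k_z-resolved spectrum of the stacked system can be obtained by feeding the substituted mass M-2t_zcos k_z into the 2D builder sketched earlier; the short Python fragment below (ours) scans k_z and records the half-filling gap, which closes at the Dirac-like points discussed below.

```python
import numpy as np

def kz_gap_scan(sites, M, eta, g, t_z=1.0, n_kz=81, **kwargs):
    """Half-filling gap of H(M - 2 t_z cos k_z) on a k_z grid (reuses build_hamiltonian)."""
    kz_grid = np.linspace(-np.pi, np.pi, n_kz)
    gaps = []
    for kz in kz_grid:
        E = np.sort(np.linalg.eigvalsh(
            build_hamiltonian(sites, M - 2 * t_z * np.cos(kz), eta, g, **kwargs)))
        half = len(E) // 2
        gaps.append(E[half] - E[half - 1])   # gap between the two bands at half filling
    return kz_grid, np.array(gaps)
```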
§.§ η=2
We first consider a 3D quasicrystal [Fig. <ref>(b)] by stacking 2D AB-tiling quasicrystals with the square-shaped boundary [Fig. <ref>(a)] and set the varying period of Wilson mass η=2. Figure <ref>(a) shows the spectral function 𝒜 (E_F,k_z) of the 3D quasicrystalline system with open-boundary condition in the xy-plane. We can see that the bulk conduction and valence bands touch at two discrete points k_z=± k_z^1 where the energy gap is closed, indicating a semimetal phase. Importantly, fourfold degenerate zero-energy flat band boundary states emerge in the region |k_z|>k_z^1, describing hinge Fermi arc states in this semimetal phase. Figure <ref>(c) displays the probability density distribution of the zero-energy states at k_z=-2 [marked by the green star in Fig. <ref>(a)].
Figure <ref>(b) illustrates the spectral function of the quasicrystalline system with periodic boundary conditions along all directions. The periodic boundary condition in the xy-plane is achieved by treating the system as a crystal with a supercell periodicity. Compared to the spectral function under the open boundary condition in Fig. <ref>(a), the zero-energy flat band boundary states disappear, further confirming that the zero-energy modes at |k_z|>k_z^1 are hinge Fermi arc states. Moreover, the higher-order topology of the hinge Fermi arcs is revealed by the quantized quadrupole moment Q_xy=0.5 for |k_z|>k_z^1 [Fig. <ref>(d)]. Therefore, the system is identified as a quasicrystalline SOTSM.
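The paper does not spell out its numerical recipe for Q_xy, so the sketch below (ours) uses one commonly adopted real-space, free-fermion definition of the quadrupole moment, with the atomic-limit background subtracted; the choice of origin, the estimate of the sample size L_x L_y, and a background charge of two electrons per site at half filling are all assumptions on our part.

```python
import numpy as np

def quadrupole_moment(H, coords, n_orb=4, filled_per_site=2):
    """Q_xy = (1/2pi) Im ln det(U_occ^dag D U_occ) minus the atomic-limit background,
    with D = diag(exp(2 pi i x y / (Lx Ly))) repeated over the orbitals of each site."""
    E, V = np.linalg.eigh(H)
    occ = V[:, E < 0]                                  # occupied states at half filling
    x, y = coords[:, 0], coords[:, 1]
    Lx, Ly = np.ptp(x) + 1.0, np.ptp(y) + 1.0          # crude estimate of the sample size
    D = np.repeat(np.exp(2j * np.pi * x * y / (Lx * Ly)), n_orb)
    sign, _ = np.linalg.slogdet(occ.conj().T @ (D[:, None] * occ))
    background = np.exp(2j * np.pi * filled_per_site * np.sum(x * y) / (Lx * Ly))
    return (np.angle(sign / background) / (2 * np.pi)) % 1
```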
The bulk spectral function versus k_z exhibits a linear dispersion near the gap closing points at ± k_z^1 [Fig. <ref>(b)]. Meanwhile, the density of states around the gap closing points is parabolic, as shown in the inset of Fig. <ref>(b); these are the well-known bulk signatures of Dirac points in crystalline systems <cit.>. These features suggest that the gapless points in the present system are Dirac points in quasicrystals. However, as discussed in Sec. <ref>, a more detailed analysis reveals that the situation is more complex.
§.§ η=4 and η=6
Now, we come to the cases of η=4 and η=6, which can give rise to 2D quasicrystalline SOTIs without a crystalline counterpart <cit.>. Here, the 3D quasicrystalline systems are built by stacking the AB-tiling octagonal quasicrystal [Figs. <ref>(c) and <ref>(d)] and the Stampfli-tiling dodecagonal quasicrystal [Figs. <ref>(e) and <ref>(f)], respectively. Figures <ref>(a) and <ref>(b) show the spectral function 𝒜 (E_F,k_z) of the two 3D quasicrystalline systems with open boundary condition in the xy-plane and periodic boundary condition along the vertical direction. The spectral functions look similar to that shown in Fig. <ref>(a); however, the degeneracy of the zero-energy modes is different.
These zero-energy flat-band boundary modes in the region |k_z|>k_z^1 are hinge Fermi arc states traveling on the hinges of 3D octagonal/dodecagonal quasicrystals. This can be observed more clearly in Figs. <ref>(e) and <ref>(f), which show the energy spectra and the probability distributions of the zero-energy modes for fixed k_z marked by the green stars shown in Figs. <ref>(a) and <ref>(a), respectively. Apparently, the hinge Fermi arc states are inherited from the C_n^z-symmetric corner modes in quasicrystalline SOTIs, where n=8 in the AB-tiling octagonal quasicrystal and n=12 in the Stampfli-tiling dodecagonal quasicrystal.
To examine the bulk electronic structure, we plot the spectral function under periodic boundary conditions along all three directions in Figs. <ref>(c) and <ref>(d). As seen in the case with η=2, similar phenomena are observed, such as the disappearance of the zero-energy hinge arcs, a linear dispersion along k_z, and the quadratic density of states around the gap closing points.
Therefore, our study demonstrates that stacking 2D quasicrystals can result in the emergence of an exotic topological phase of matter, i.e., the quasicrystalline SOTSM, which possesses eight and twelve hinge Fermi arcs protected by rotation symmetries forbidden in crystalline systems. Our findings highlight the potential of stacking 2D quasicrystals and expand our understanding of condensed matter physics.
§ DIRAC-LIKE POINTS
Upon initial inspection, the gap closing points near k_z=± k_z^1,2,3 shown in Figs. <ref>(b), <ref>(c) and <ref>(d) are reminiscent of the Dirac point characterized by the massless Dirac equation. Both exhibit a linear dispersion along k_z and a characteristic quadratic density of states near the gap closing points. However, a closer inspection of the spectrum reveals that the gap closing points in quasicrystalline SOTSMs are distinct from those in crystalline second-order topological Dirac semimetals (SODSMs).
Figure <ref>(a) shows the spectrum near the gap closing point k_z^1 in the SOTSM of η=2 under periodic boundary conditions along all directions [see Fig. <ref>(b)]. Three band-crossing points appear, which is quite different from the crystalline SODSM phase that hosts only one band-crossing point [Fig. <ref>(e)]. Figures <ref>(b) and <ref>(e) show the wave functions of the states marked by the red and green stars in Fig. <ref>(a), respectively. One of the band crossings is dominated by the local patch containing three square tiles and two rhombus tiles [Figs. <ref>(b) and <ref>(c)], and the other band crossing is dominated by the local patch containing six rhombus tiles [Figs. <ref>(d) and <ref>(e)]. The appearance of multiple band-crossing points is because the gap closes at different k_z for distinct kinds of local patches. This phenomenon is attributed to the absence of discrete translational symmetry in quasicrystalline systems.
For the AB-tiling octagonal quasicrystal with η=4, the spectrum opens a tiny energy gap [Fig. <ref>(f)]. The size of the energy gap decays as the system size increases [Fig. <ref>(g)]. For the Stampfli-tiling quasicrystal with η=6, the spectrum is similar to the case with η=2, except that more band crossings appear. This is because there are more distinct patterns of local patches in the Stampfli-tiling quasicrystal.
Although the gap-closing points in quasicrystalline SOTSMs share several similarities with the Dirac points in crystalline SODSMs, a closer examination of the spectrum reveals a fine structure of the gap-closing points that originates from the absence of translational symmetry. Therefore, we dub these gap closing points in the quasicrystalline SOTSM phase Dirac-like points.
§ PHASE DIAGRAM
We present the topological phase diagram of the stacked quasicrystal system in this section.
Figures <ref>(a)-<ref>(b) show ln E_g and Q_xy as functions of the momentum k_z and the parameter M for the AB-tiling quasicrystalline square system with η=2. E_g is the value of the energy gap obtained under periodic boundary conditions along all the three directions. Each point along the white line corresponds to the gap-closing point shown in Fig. <ref>(b). For about -5.7<M<0.3, the existence of the gap closure with the accompanying topological phase transition between Q_xy=0 and Q_xy=0.5 indicates that the system corresponds to the SOTSM phase.
For about M>0.3, the system corresponds to a 3D quasicrystalline SOTI phase with a topological gap characterized by a quantized quadrupole moment Q_xy=0.5 for any k_z. For about M<-5.7, the system is a normal insulator (NI) with a topologically trivial gap.
Above we only consider the case of η=2 in the AB-tiling quasicrystal. In the cases of the AB-tiling octagonal quasicrystal with η=4 and the Stampfli-tiling dodecagonal quasicrystal with η=6, we find similar results by adjusting the parameter M, i.e., the systems also support the quasicrystalline SOTSM phase, the 3D quasicrystalline SOTI phase, and the NI phase.
§ DISCLINATION-INDUCED BOUND STATES
Disclination-induced bound states provide a potential probe of crystalline topology, which has been widely investigated in different topological systems <cit.>. Recently, disclination-induced bound states have been observed in topological crystalline insulators <cit.>, acoustic topological insulators <cit.>, and acoustic Weyl semimetals <cit.>. In this section, we study the disclination-induced bound states in the quasicrystalline SOTSM phase.
The disclination is introduced by cutting out a specific segment [the first column in Fig. <ref>] and then gluing the lattice back together [the second column in Fig. <ref>]. The two sides of the cut are glued together by identifying sites on the two sides of the cut related by the rotational symmetry, which is called a Volterra process <cit.>. The defect breaks the rotational symmetry locally at the center of the lattice, but the rest of the lattice preserves the rotational symmetry and is indistinguishable from the bulk of the original system without the cut.
The corresponding spectral functions of the sample geometries in Figs. <ref>(a)-<ref>(b), Figs. <ref>(c)-<ref>(d), and Figs. <ref>(e)-<ref>(f) are similar to Fig. <ref>(a), Fig. <ref>(a), and Fig. <ref>(b), respectively, except that the spatial probability distributions of the zero-energy modes are different. The colored points in Fig. <ref> display the probability distributions of the zero-energy modes in these systems with k_z=-2.
For the three different disclination systems in Figs. <ref>(b), <ref>(d), and <ref>(f), each hosts one zero-energy mode at the disclination core, and three, seven, and eleven zero-energy modes at the hinges of the systems, respectively. Moreover, similar to the zero-energy hinge modes, the disclination modes only appear for |k_z|>k_z^1/2/3, and disappear in the regions of |k_z|<k_z^1/2/3. This further reveals that the disclination-induced bound states and the hinge Fermi arc states are consequences of the nontrivial bulk topology, which cannot be removed without topologically trivializing the bulk of the systems <cit.>. Moreover, the k_z-dependent disclination-induced bound states provide an experimental probe for the quasicrystalline SOTSM phase.
§ CONCLUSION AND DISCUSSION
In conclusion, this study has demonstrated that a stack of 2D quasicrystalline SOTIs can give rise to 3D quasicrystalline SOTSM phases. These 3D phases exhibit rotation-symmetry protected hinge Fermi arcs, which are forbidden in crystalline systems. Additionally, our calculations have shown that the stacked systems also support the 3D quasicrystalline SOTI phase, as evidenced by the phase diagram. We have proposed that the dependence of k_z on disclination-induced bound states can serve as an experimental probe for the quasicrystalline SOTSM phase.
While the quasicrystalline SOTSM shares similarities with the crystalline SODSM <cit.>, there are three main distinctions between them. Firstly, the number of C_n^z-symmetry protected hinge Fermi arcs in the quasicrystalline SOTSM is not limited to four, as observed in crystalline SODSM, but can be eight or twelve as well. Secondly, in the quasicrystalline SOTSM, the lack of translational symmetry renders the in-plane momentum ineffective as a quantum number, making it impossible to define Dirac points in momentum space, unlike in crystalline SODSM where the Dirac equation applies. Lastly, the spectrum of the quasicrystalline SOTSM exhibits a higher number of band-crossing points compared to the crystalline SODSM, a consequence of the absence of in-plane translational symmetry in the stacked quasicrystals.
Moreover, recent experiments investigating the stack of Ta_1.6Te quasicrystal layers <cit.>, along with first-principles calculations and symmetry analysis, have revealed a symmetry-protected semimetal phase and explored the topological properties of the material. This suggests that the quasicrystalline SOTSM phase can be experimentally realized in real materials. Furthermore, considering the successful experimental realization of the 2D quasicrystalline SOTI phase in electrical circuit systems <cit.>, we believe that the quasicrystalline SOTSM holds promise in metamaterials. These unique features and possibilities offer exciting prospects for the future implementation of our proposal.
D.-H.X. was supported by the NSFC (under Grant Nos. 12074108 and 12147102), the Natural Science Foundation of Chongqing (Grant No. CSTB2022NSCQ-MSX0568). R.C. acknowledges the support of the Chutian Scholars Program in Hubei Province. B.Z. was supported by the NSFC (under Grant No. 12074107), the program of outstanding young and middle-aged scientific and technological innovation team of colleges and universities in Hubei Province (under Grant No. T2020001) and the innovation group project of the natural science foundation of Hubei Province of China (under Grant No. 2022CFA012).
§ CRYSTALLINE SODSM
To make a comparative study, we investigate the 3D crystalline SODSM phase [Fig. <ref>(b)], modeled by stacking 2D crystalline SOTIs along the vertical direction [Fig. <ref>(a)]. Figures <ref>(c) and <ref>(d) show the spectral function of the crystalline system with open and periodic boundary conditions in the xy-plane, respectively. Hinge Fermi arcs appear and connect the band-closing points at k_z=± k_z^4. The results are similar to those in Figs. <ref>(a)-<ref>(b). Figure <ref>(e) shows the spectrum near the band-closing point -k_z^4. Only one band-crossing point is observed because of the existence of translational symmetry in crystalline systems. This is observed more clearly in Fig. <ref>(f). The probability distribution of the state labeled by the green star [Fig. <ref>(e)] is uniform, and all the local patches undergo the topological phase transition simultaneously when k_z varies.
Moreover, we find that the low-energy effective Hamiltonian can be described by the massless Dirac equation. Therefore, the system is identified as the crystalline SODSM phase.
|
http://arxiv.org/abs/2307.06269v1 | 20230712161409 | Doubly robust machine learning for an instrumental variable study of surgical care for cholecystitis | [
"Kenta Takatsu",
"Alexander W. Levis",
"Edward Kennedy",
"Rachel Kelz",
"Luke Keele"
] | stat.AP | [
"stat.AP",
"stat.ME"
] |
The authors declare no conflicts. Research in this article was supported by the Patient-Centered Outcomes Research Institute (PCORI Awards ME-2021C1-22355) and the National Library of Medicine, #1R01LM013361-01A1. All statements in this report, including its findings and conclusions, are solely those of the authors and do not necessarily represent the views of PCORI or its Methodology Committee. The dataset used for this study was purchased with a grant from the Society of American Gastrointestinal and Endoscopic Surgeons. Although the AMA Physician Masterfile data is the source of the raw physician data, the tables and tabulations were prepared by the authors and do not reflect the work of the AMA. The Pennsylvania Health Cost Containment Council (PHC4) is an independent state agency responsible for addressing the problems of escalating health costs, ensuring the quality of health care, and increasing access to health care for all citizens. While PHC4 has provided data for this study, PHC4 specifically disclaims responsibility for any analyses, interpretations or conclusions. Some of the data used to produce this publication was purchased from or provided by the New York State Department of Health (NYSDOH) Statewide Planning and Research Cooperative System (SPARCS). However, the conclusions derived, and views expressed herein are those of the author(s) and do not reflect the conclusions or views of NYSDOH. NYSDOH, its employees, officers, and agents make no representation, warranty or guarantee as to the accuracy, completeness, currency, or suitability of the information provided here. This publication was derived, in part, from a limited data set supplied by the Florida Agency for Health Care Administration (AHCA) which specifically disclaims responsibility for any analysis, interpretations, or conclusions that may be created as a result of the limited data set.
Kenta Takatsu, Carnegie Mellon University, Email: [email protected]
Alexander W. Levis, Postdoctoral Researcher, Carnegie Mellon University, Email: [email protected]
Edward Kennedy, Associate Professor, Carnegie Mellon University, Email: [email protected]
Rachel Kelz, University of Pennsylvania and Leonard Davis Institute, Email: [email protected]
Luke Keele, Associate Professor, University of Pennsylvania, Email: [email protected], corresponding author
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Comparative effectiveness research frequently employs the instrumental variable design since randomized trials can be infeasible for many reasons. In this study, we investigate and compare treatments for emergency cholecystitis — inflammation of the gallbladder. A standard treatment for cholecystitis is surgical removal of the gallbladder, while alternative non-surgical treatments include managed care and pharmaceutical options. As randomized trials are judged to violate the principle of equipoise, we consider an instrument for operative care: the surgeon's tendency to operate. Standard instrumental variable estimation methods, however, often rely on parametric models that are prone to bias from model misspecification. We outline instrumental variable estimation methods based on the doubly robust machine learning framework. These methods enable us to employ various machine learning techniques for nuisance parameter estimation and deliver consistent estimates and fast rates of convergence for valid inference. We use these methods to estimate the primary target causal estimand in an IV design. Additionally, we extend these methods to estimate heterogeneous causal effects, profile principal strata, and conduct a sensitivity analysis for a key instrumental variable assumption. We conduct a simulation study to demonstrate scenarios where more flexible estimation methods outperform standard methods. Our findings indicate that operative care is generally more effective for cholecystitis patients, although the benefits of surgery can be less pronounced for key patient subgroups.
Keywords: instrumental variables, doubly robust, nonparametric statistics, influence functions
§ INTRODUCTION
§.§ Emergency Treatment for Cholecystitis
Emergency general surgery (EGS) refers to medical emergencies where the injury is endogenous (e.g., a burst appendix) while trauma care refers to injuries that are exogenous (e.g., a gunshot wound). Recent research has focused on the comparative effectiveness of operative versus non-operative care for various EGS conditions <cit.>. One critical finding from this research is that the effectiveness of operative care tends to vary with EGS condition, highlighting the need for additional research for specific EGS conditions.
One of the most common EGS conditions is cholecystitis, an inflammation of the gallbladder. Cholecystitis is often caused by the formation of gallstones — small stones made from cholesterol, bile pigment and calcium salts — blocking the tube leading out of the gallbladder, but it may also result from bile duct problems, tumors, serious illness and other types of infections. In severe cases, cholecystitis is considered a medical emergency where the standard treatment is the removal of the gallbladder through surgery, a procedure called cholecystectomy. Nevertheless, non-operative alternatives including observation, minimally invasive procedures, medication, and supportive care are also available. Given the potential risks associated with surgical interventions for some patients, it is critical to investigate the relative efficacy of surgical interventions compared to non-operative alternatives. Moreover, given the heterogeneity across EGS conditions, it is also necessary to produce evidence focused on specific conditions such as cholecystitis.
This article aims to assess the efficacy of surgery for cholecystitis. One of the main challenges in addressing the effectiveness of surgery is the lack of randomized controlled trials, as they are judged to violate the principle of equipoise in this context. Consequently, researchers must rely on observational studies to collect evidence. While observational studies can provide a large sample size, they are subject to confounding bias by indication. This problem arises when the selection of patients to treatment depends on prognostic factors that indicate potential benefits. It is thus essential to adjust for confounding bias to ensure accurate statistical conclusions. Many statistical procedures for observational studies assume all confounding variables, including prognostic factors, are recorded. However, medical claims data may not always contain complete records of all relevant prognostic factors, for instance, physiological measures of patient frailty <cit.>. To address these challenges, the literature on EGS has adopted instrumental variable (IV) methods, which can account for unobserved confounding variables, allowing for a more accurate assessment of the effectiveness of surgery for cholecystitis. In our study, adopting an IV design, we focus on new flexible estimation methods.
The current literature on EGS has primarily focused on estimating average treatment effects, which can mask possible patient-to-patient variation <cit.>. For instance, the average effect of operative care could obscure the fact that some patients respond dramatically well to treatment while others may suffer adverse reactions. One strategy for understanding heterogeneous treatment effects is to focus on conditional average treatment effects (CATEs), which describe how treatment effects vary with measured features. CATEs can help design treatment strategies to target patients who are likely to benefit from existing interventions. Despite recent progress in statistical methodologies for flexible CATE estimation <cit.>, current EGS research often relies on single, binary effect modifiers using extant methods <cit.>. Therefore, in our study, we also focus on developing methods that allow for more complex combinations of possible effect modifiers. In the following section, we outline the IV design that we employ to study the comparative effectiveness of treatments for cholecystitis.
§.§ An IV Design for Cholecystitis
IV methods refer to a set of study designs and statistical techniques that can facilitate the identification of causal effects in the presence of unobserved confounders. An IV is a variable that is associated with the treatment of interest but affects outcomes only indirectly through its impact on treatment assignment <cit.>. In the context of our study, an instrument must be associated with treatment via surgical care, but without directly affecting patient outcomes. To be considered a valid instrument, a variable must meet three conditions: (1) it must be associated with the treatment; (2) it must be randomly or as-if randomly assigned; and (3) it cannot have a direct effect on the outcome <cit.>. When these conditions are met, along with certain structural assumptions about the data-generating process (which we will elaborate on shortly), an IV can provide a consistent estimate of a causal effect even in the presence of unobserved confounding between the treatment and the outcome. See <cit.> and <cit.> for reviews.
Comparative effectiveness research often uses a physician's preference for a specific course of treatment as an IV <cit.>. This is based on the assumption that, for a patient with a given level of disease severity, physicians may have different preferences over a course of treatment for idiosyncratic reasons. In other words, the assignment to each physician serves as a “nudge" toward each mode of care, assuming that the physician's preference has no direct effect on patient outcomes. <cit.> proposed this type of IV in the context of drug prescriptions. In our study, we use a surgeon's tendency to operate (TTO) as an instrument to determine whether a patient receives surgery after admission to the emergency department. <cit.> first proposed using TTO as an IV and measured it by calculating the percentage of times a surgeon operated when presented with an EGS condition. The assignment of surgeons to patients is plausibly as-if randomized, since this study focuses on patients who are receiving emergency care and are unlikely to be able to select their physician <cit.>.
The dataset used in this study merges the American Medical Association (AMA) Physician Masterfile with all-payer hospital discharge claims from New York, Florida and Pennsylvania in 2012-2013. The study population includes all patients admitted for emergency or urgent inpatient care. The data includes patient sociodemographic and clinical characteristics, such as indicators for frailty, severe sepsis or septic shock, and 31 comorbidities based on Elixhauser indices <cit.>. Each patient is linked with his or her surgeon through a unique identifier. The primary outcome is the presence of an adverse event after surgery, i.e., death or a prolonged length of stay in the hospital (PLOS). We excluded surgeons from our study who did not perform at least five operations for one of the 51 specific EGS conditions per year within the two-year study timeframe. To measure the instrument (TTO), we follow the strategy proposed by <cit.>. For each surgeon, we randomly split the patient population into five subsets. Using one of the sub-splits, we calculate the proportion of times a surgeon operated for the population of patients for a set of EGS conditions. This measure is used as the instrument for the other 80% of the patient population.
§.§ Our Contributions
The majority of IV analyses use a method known as two-stage least squares (TSLS) for estimation <cit.>. TSLS is based on a set of two linear models that may be prone to bias from model misspecification. Notably, the validity of TSLS relies on correct specification of two parametric linear models related to the conditional mean of the outcome. Consequently, substantial bias can occur if there are nonlinear effects on the outcome not accounted for in the statistical model or interactions between control variables that are omitted.
To reduce bias from model misspecification, nonparametric methods — often referred to as machine learning (ML) — can be used to flexibly estimate parts of the data-generating distribution. However, it is generally infeasible to directly apply ML methods to causal inference, since the resulting estimators may be biased and yield confidence intervals with poor coverage <cit.>. Specifically, treatment effect estimates based on ML methods may inherit first-order smoothing bias from the ML fits <cit.>. In addition, ML methods often exhibit slow convergence rates, which results in inefficiency and invalid statistical inference <cit.>. Recent research has demonstrated that nonparametric methods can be applied to causal inference with the help of semiparametric theory <cit.>. This set of tools is often referred to as the doubly robust machine learning (DRML) framework <cit.>.
Following the framework in <cit.>, we outline DRML methods for estimation of the local average treatment effect from IV designs in which the so-called monotonicity assumption — a commonly invoked assumption for the identification of IV effects (see Section <ref> for details) — holds. We also summarize the use of sample splitting to ensure parametric convergence rates and valid inference for the target parameter. This provides investigators with an analytic pipeline for using ML methods for IV estimation to alleviate bias from model misspecification. In addition, we develop new, flexible methods for the estimation of related conditional effects within IV designs. More specifically, we extend a meta-learner framework <cit.> to conditional local average treatment effect estimation.
Additionally, we introduce new DRML methods related to the monotonicity assumption. First, we develop a method for profiling the principal strata that are implied by the monotonicity assumption. Next, we outline a new method of sensitivity analysis to probe the plausibility of the monotonicity assumption. Importantly, our sensitivity analysis framework allows investigators to judge whether study conclusions are robust to possible violations of the monotonicity assumption.
§.§ Related Work
There has been a great deal of interest in applying doubly robust or machine learning-based methods in IV designs, but little that has incorporated both sets of ideas. Here, we briefly review related work to better place our contributions in the context of the literature. First, some existing work has focused on applying semiparametric theory to develop robust estimators of causal effects in IV settings. <cit.> proposed doubly robust estimators of marginal and conditional local average treatment effects under an IV design with monotonicity. Their approach presupposes a parametric model for the conditional effects, and in addition, they focus on parametric models for instrument probabilities and other nuisance functions. By contrast, we pursue a nonparametric approach where all necessary components of the observed data distribution can be flexibly estimated, e.g., with machine learning algorithms. <cit.> proposed semiparametric estimators of parameters corresponding to least squares projections of local conditional effects. <cit.> proposed doubly robust estimators of parameters of marginal structural models and structural nested mean models, employing parametric models for the component nuisance functions. Finally, <cit.> derived necessary and sufficient conditions for identification of the average treatment effect on the treated in the presence of an IV, and proposed doubly robust estimators of this target estimand, again under parametric models for the nuisance functions involved.
Second, there have been many applications of flexible models or machine learning algorithms for causal effect estimation in IV designs, without formal nonparametric efficiency guarantees for estimation of target parameters. In particular, investigators have employed regression trees and bayesian causal forests <cit.>, kernel-based approaches <cit.>, and deep neural networks <cit.> to estimate causal quantities in IV settings. See <cit.> for a review of this area of research.
Lastly, there has been some work to combine ideas from semiparametric theory and machine learning to develop efficient and robust estimators of conditional and marginal causal effects on the basis of instrumental variables. This work closely aligns with <cit.>, particularly concerning the estimation and inference of marginal effects in IV settings. Their work outlines the DRML procedure, which directly informs our methodology. <cit.> employed DRML methods for estimation of certain dynamic treatment effect estimands in an IV design. <cit.> considered a generalization of monotonicity in continuous IV settings, and proposed DRML-based estimates of the local instrumental variable curve. <cit.> developed DRML-based estimators of projection-based conditional local effects. That is, asserting a “working” parametric model for these conditional effects (e.g., thinking of these as closest parametric approximations to true conditional effects), they developed a targeted maximum likelihood estimator (TMLE) of the finite set of parameters in the working model. <cit.> proposed a DRML-based estimator of the local average treatment effect for a time-to-event outcome under right censoring. Finally, similar to one of our proposals, <cit.> developed a doubly robust meta-learner of local effects conditional on all covariates.
As such, the existing research on DRML methods for the IV framework is generally focused on more specific applications, and has not yet outlined a complete set of DRML methods for more standard IV applications. In addition, we propose a doubly robust meta-learner that differs in that it involves two pseudo-outcomes and two second-stage regressions, and we also allow for more coarsened analyses by developing estimators of effects conditional on subsets of covariates. The rest of the article is organized as follows. In Section <ref>, we review notation, assumptions, and the target marginal and conditional causal estimands in our study using IV methods. In Section <ref>, we develop DRML estimation methods for both marginal and conditional treatment effects. In Section <ref>, we review our proposed methods for profiling and sensitivity analysis. Next, we perform a simulation study to demonstrate how our proposed methods improve over extant methods in Section <ref>. Finally, we present empirical results in Section <ref> and concluding remarks in Section <ref>.
§ THE IV DESIGN
§.§ Notation and Setup
In this section, we introduce the notation to describe our statistical setting. We define each observational data unit as O:=(Y, A, Z, X) where Y represents the outcome of interest, A is a binary treatment, Z is a binary instrument and X = (X_1,…,X_k) is a set of baseline covariates. Specifically, in our study, Y is a binary indicator for an adverse event. Treatment A=1 indicates that a patient received operative care after their emergency admission, while A = 0 indicates that a patient received alternative non-operative care. The instrument Z is an indicator that takes the value of 1 when the continuous measure of a surgeon's tendency to operate (TTO) is above the sample median, and 0 otherwise. That is, an IV value of 1 indicates that a patient has been assigned to a high TTO surgeon and thus are more likely to receive operative care. We observe an independent and identically distributed sample (O_1, ..., O_n) from an unknown data-generating distribution P_0, and we denote by ℙ_n its corresponding empirical distribution. We index objects by P when they depend on a generic distribution, and we use subscript 0 as short-hand for P_0. For instance, we denote the expectation under P_0 by _0. For a distribution P and any P-integrable function η, we define Pη = ∫η dP. For instance, ℙ_n η represents n^-1∑_i=1^n η(O_i). Occasionally, we slightly abuse the notation and treat the random variable as an identity mapping to itself, such that, ℙ_n Y = n^-1∑_i=1^n Y_i.
We use the Neyman-Rubin potential outcome framework to formalize assumptions for the identification of the target causal parameters <cit.>. We use Y(a) to denote the potential outcome that would have been observed had treatment been set to value A=a. When the treatment-outcome relationship is confounded (e.g., when A is not randomized), estimating the effect of A on Y is challenging, relying on measurement of a sufficient set of confounders. However, if a valid instrument Z is available, a causal effect of A on Y can be estimated consistently even in the presence of unobserved confounders. With an instrument, we require two additional potential outcomes: Y(z,a), the potential outcome that would have been observed had the IV been set to Z=z and treatment been set to A=a, and A(z), the potential treatment status if the IV had been set to Z=z.
§.§ Core IV Assumptions
Next, we formalize the three core IV assumptions using the potential outcome framework, and discuss their plausibility in the context of our application. First, our notation implicitly assumes the stable unit treatment value assumption (SUTVA) <cit.>: both the potential treatments received A(z) and potential outcomes Y(z,a), for z, a=0,1, depend solely on the value z of the instrument, and a of the treatment for Y(z,a), for each individual. That is, there is only one version of the instrument and treatment, and A_i(z) and Y_i(z,a) are not affected by the value of (Z_i', A_i') for i'≠ i. These two components of SUTVA are often referred to as the consistency and no-interference assumptions, respectively.
As discussed earlier, a valid instrument Z must satisfy the following three conditions: (1) Z affects or is associated with A, (2) is as good as randomized, possibly after conditioning on measured covariates X, and (3) affects the outcome Y only indirectly through A <cit.>. The three core IV assumptions can be stated as follows:
* Relevance: Z is associated with exposure A, or equivalently, P_0(A(1) = A(0)) ≠ 1.
* Effective random assignment: For all z,a ∈{0,1}, Z ⊥⊥ (A(z), Y(z,a)) | X. This is sometimes called the “unconfoundedness” assumption.
* Exclusion restriction: Y(z,a) = Y(z',a) for all z,z',a ∈{0,1}. This implies that Z only affects Y through its influence on A i.e., Y(z, a) ≡ Y(a), for all z,a ∈{0,1}.
In our study, assumption <ref> implies that TTO affects the likelihood of a patient receiving surgery, which has been empirically validated in previous work <cit.>. Although neither <ref> nor <ref> can be tested, assumption <ref> is plausible as our data is based on emergency admissions, and patients may have little choice in selecting surgeons with a high or low TTO. We also adjust for baseline covariates in our analysis to further bolster this assumption. Statistical adjustment for baseline covariates is a key motivation for developing more flexible methods of estimation. Assumption <ref> implies that any effect of a surgeon's TTO on the outcome must only be a consequence of the medical effects of surgery. This can be violated in our study if treatment from a surgeon with a high TTO includes other aspects of medical care that can affect the outcome. This type of violation is relatively unlikely since nursing care in the emergency department is the same regardless of surgeon. To bolster the plausibility of this assumption, we compare patients within the same hospital who were assigned to surgeons with different levels of TTO, which holds other systemic factors of care constant <cit.>. <cit.> conduct a falsification test for the exclusion restriction based on TTO, and find no evidence against this assumption.
§.§ Monotonicity and Local Average Treatment Effect
One common statistical estimand of interest in an IV analysis is the following quantity:
χ_0 := 𝔼_0[𝔼_0(Y | Z=1, X)-𝔼_0(Y | Z=0, X)]/𝔼_0[𝔼_0(A | Z=1, X)-𝔼_0(A | Z=0, X)].
The seminal work by <cit.> shows that this estimand identifies a specific subgroup causal effect. For this identification result to hold, we must introduce an additional assumption called monotonicity. Typically, the monotonicity assumption is interpreted by stratifying the study observations into four so-called principal strata <cit.>. That is, when both the instrument and treatment are binary, we can stratify observations into four groups or principal strata: compliers (i.e., A(1) > A(0)), always-takers (i.e., A(1) = A(0) = 1), never-takers (i.e., A(1) = A(0)=0) and defiers (i.e., A(1) < A(0)) <cit.>. In our application, always-takers will receive surgery regardless of the surgeons' TTO, while never-takers never receive surgery regardless of a surgeons' TTO. Compliers, however, receive surgery because they are assigned to a high TTO surgeon. Defiers always act contrary to the assignment of the IV. Monotonicity is then defined as follows:
* Monotonicity: P_0(A(1) < A(0)) = 0.
In other words, the monotonicity assumption rules out the existence of defiers under the data-generating distribution P_0. The fundamental result from <cit.> states that, under suitable assumptions, the IV estimand given by (<ref>) equals a causal effect among compliers, or the local average treatment effect (LATE).
Assuming SUTVA, <ref>–<ref>, and 0 < P_0(Z = 1 | X) < 1 almost surely, then
χ_0 = _0[Y(1) - Y(0) | A(1) > A(0) ].
In our application, the LATE represents the effect of surgery among those patients who receive surgery because they receive care from a high TTO surgeon. It is critical to note that the LATE may not coincide with a more common target causal estimand, such as the average treatment effect (ATE). The ATE, represented by 𝔼_0[Y(1) - Y(0)], measures the average difference in outcomes when all individuals in the study population are assigned to the surgery group versus when all individuals are assigned to the non-operative management group. While the ATE provides a summary of treatment effects over the entire population, the LATE only represents the effect among compliers.
The monotonicity assumption may be controversial in the context of preference-based instruments like TTO <cit.>. Monotonicity violations may arise when the instrument is not delivered uniformly to all subjects. In our context, the monotonicity assumption would not hold if there are patients who are treated contrary to a physician's preferences for surgery. Indeed, a lack of such contrary cases may seem unlikely since a surgeon's preferences weigh a variety of risks and benefits. As an example, assume we have two surgeons who work in the same hospital. Surgeon 1 generally prefers to operate but makes exceptions for cholecystitis patients (because of some known contraindications). Surgeon 2 generally tries to avoid surgical treatments but makes exceptions for patients who are very healthy and can withstand the invasive nature of many procedures. Thus any patient who has cholecystitis but is also very healthy would be treated in a way contrary to each surgeon's general preferences for operative care and is a defier. Under this scenario, monotonicity does not hold. Given this concern, we propose and develop a sensitivity analysis to probe the plausibility of the monotonicity assumption.
§.§ The Conditional Local Average Treatment Effect
In the literature using IV methods for EGS conditions, the primary focus has been on the LATE. While the LATE is a useful summary of the overall effect among compliers, it tells us little about possible variation of the treatment effect across patients. To explore the possibility of heterogeneous treatment effects, we can estimate covariate-specific LATEs or conditional LATEs (CLATEs). In our application, the CLATE corresponds to the causal effect of surgery on patient outcomes for subgroups of complying patients defined by their baseline covariates, such as age and comorbidities. By estimating CLATEs, we can investigate how the causal effect of surgery varies for different subgroups of patients, which can inform clinical decision-making and help identify which patients are either most likely to benefit from the surgical intervention or be at risk of an adverse event. This information is especially important in cases where the overall average effect of the treatment may be misleading due to heterogeneity in treatment effects across different patient subgroups.
To identify the CLATE, we invoke the identification conditions outlined in <cit.> and <cit.>. First, we define the covariate set V, which is a subset of X. <cit.> first studied the special case V = X. The covariates in V are the set of covariates identified as effect modifiers. We then define the conditional relevance assumption, which is a conditional analog to <ref>:
* Conditional relevance: Z is associated with exposure A conditioning on V=v_0, or equivalently, P_0(A(1) = A(0) | V=v_0)≠ 1.
Replacing <ref> with <ref> leads to the
following identification result, as shown in <cit.>:
Assuming SUTVA, <ref>, <ref>, <ref>, <ref>, and 0 < P_0(Z = 1 | X) < 1 almost surely, then
χ_0(v_0) := 𝔼_0[𝔼_0(Y | Z=1, X)-𝔼_0(Y | Z=0, X)| V=v_0]/𝔼_0[𝔼_0(A | Z=1, X)-𝔼_0(A | Z=0, X)| V=v_0]
= _0[Y(1) - Y(0) | A(1) > A(0), V=v_0].
The statistical functional of interest, χ_0(v_0), is now the ratio of two regression functions. While the standard LATE is a scalar quantity, the CLATE is a function of V, making estimation and inference significantly more complex. Below, we outline how to use a meta-learner framework for flexible estimation of the CLATE. Recent research has developed meta-learner methods for conditional treatment effects that incorporate semiparametric theory <cit.>. In Section <ref>, we adapt the “DR-Learner” from <cit.> to the context of CLATE estimation.
§ DRML ESTIMATION AND INFERENCE FOR IV DESIGNS
The DRML framework combines semiparametric theory, machine learning methods, and sample splitting to derive flexible yet efficient nonparametric estimators of causal parameters <cit.>. A crucial feature of the DRML framework is the use of influence functions — a central object of semiparametric theory — to construct root-n consistent estimators under relatively weak conditions. In particular, the framework provides general heuristics to derive estimators that converge in distribution to a mean-zero normal distribution with the smallest asymptotic variance among a large class of estimators.
First, we briefly review how to estimate the average treatment effect (ATE) using the DRML framework. See <cit.> for a more detailed treatment. Under causal positivity, consistency, and no unmeasured confounding, the ATE can be written as the contrast of two averaged regression functions: Γ_0 := _0[_0(Y | A = 1, X)]-_0[_0(Y | A = 0, X)], where Y is the outcome, A is binary treatment, and X are measured confounders. Estimating _0(Y | A = a, X) for a ∈{0,1} is a standard regression problem and flexible ML methods may be preferred to avoid model misspecification. However, if Γ_0 is estimated as the average of predictions from an ML model, the estimator typically inherits first order bias and converges slower than the so-called parametric rate of n^-1/2. Moreover, depending on the ML method, statistical inferences may not be valid.
Using semiparametric theory, it is possible to construct an estimator Γ of Γ_0 on the basis of some mean-zero function Γ̇_0^*, such that
n^1/2(Γ- Γ_0) d⟶ N(0, P_0Γ̇_0^*2)
where P_0Γ̇_0^*2 is the asymptotic variance of Γ, scaled by sample size n. The efficient influence function, which is the optimal Γ̇_0^* in terms of asymptotic efficiency in a nonparametric model, is based on two regression models: the outcome model _0(Y | A, X) and the propensity model P_0(A = 1 | X). The outcome model can be estimated by regressing the outcome against the treatment and all confounders; while the propensity model can be estimated by regressing the treatment against all confounders. The final estimate of the ATE is obtained by combining estimates from these two models. This approach is often called “doubly robust” as it is consistent if either the outcome model or the propensity model is correctly specified <cit.>, and it typically exhibits a faster rate of convergence due to second-order product bias compared to using a single model.
To avoid complicated statistical dependencies from using the same data for estimation and selection of tuning parameters, sample-splitting or cross-fitting is then applied in the estimation process <cit.>. Overall, the DRML framework enables the estimation of treatment effects using ML methods while potentially retaining fast parametric-type n^-1/2 inferential rates.
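To make this pipeline concrete, the following sketch implements a two-fold cross-fitted doubly robust (AIPW) estimator of the ATE in Python. The simulated data, the random-forest learners, and all variable names are illustrative choices of ours, not part of the study.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                                    # measured confounders
pA = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, pA)                                        # binary treatment
Y = 1.0 * A + np.sin(X[:, 0]) + rng.normal(scale=0.5, size=n)  # outcome; true ATE = 1

phi = np.zeros(n)                                  # uncentered influence values
folds = rng.permutation(n) % 2                     # two-fold cross-fitting
for k in (0, 1):
    tr, te = folds != k, folds == k
    # outcome regressions E(Y | A=a, X), fit on the training fold only
    mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[tr & (A == 1)], Y[tr & (A == 1)])
    mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[tr & (A == 0)], Y[tr & (A == 0)])
    # propensity model P(A=1 | X)
    ps = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[tr], A[tr])
    eh = np.clip(ps.predict_proba(X[te])[:, 1], 0.01, 0.99)
    m1, m0 = mu1.predict(X[te]), mu0.predict(X[te])
    # doubly robust (AIPW) uncentered influence function, evaluated on the held-out fold
    resid = Y[te] - np.where(A[te] == 1, m1, m0)
    phi[te] = (A[te] / eh - (1 - A[te]) / (1 - eh)) * resid + (m1 - m0)

ate = phi.mean()
se = phi.std(ddof=1) / np.sqrt(n)
print(f"AIPW ATE estimate {ate:.3f}, 95% CI ({ate - 1.96 * se:.3f}, {ate + 1.96 * se:.3f})")

The final estimate is simply the sample mean of the estimated uncentered influence function, and its standard error comes from the empirical variance of those values.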
§.§ DRML for LATE
In this section, we outline the DRML framework for the estimation of LATEs under weak distributional assumptions. To begin, we define two functions:
γ_P(x) := 𝔼_P(Y | X=x, Z = 1) - 𝔼_P(Y | X=x, Z = 0)
and
δ_P(x) := 𝔼_P(A | X=x, Z = 1) - 𝔼_P(A | X=x, Z = 0).
The first term is the effect of the IV on Y conditional on covariates, and the second is the effect of the IV on A conditional on the covariates. Lemma <ref> established that the LATE is identified via the parameter χ_0 := Γ_0/Δ_0, where Γ_0 := P_0γ_0 and Δ_0 :=P_0δ_0. Note that the relevance, effective random assignment, and monotonicity assumptions ensure that Δ_0 is non-zero. Based on this identification result, we motivate a plug-in estimator χ := Γ/Δ that is the ratio of two estimators.
The efficient influence functions of Γ_P and Δ_P at P in a nonparametric model are equivalent to those for the ATE and are <cit.>:
Γ̇^*_P := (y,z,x) ↦ {(2z - 1)/π_P(x,z)}{y - μ_P(x,z)} + γ_P(x) - Γ_P
and
Δ̇^*_P := (a,z,x) ↦ {(2z - 1)/π_P(x,z)}{a - λ_P(x,z)} + δ_P(x) - Δ_P
These two influence functions involve three nuisance functions: μ_P(x,z) := 𝔼_P(Y | X=x, Z = z), λ_P(x,z) := 𝔼_P(A | X=x, Z = z), and π_P(x,z) := P(Z = z | X=x). The first is an outcome model, the second is a model for treatment, and the third is a model for the IV. We discuss estimation of these nuisance functions below. Since influence functions are mean-zero by definition, we often consider “uncentered" influence functions whose mean corresponds to the target estimand. To distinguish uncentered influence functions from the original influence functions, we use the notation Γ̇_P without the asterisk. The uncentered influence functions of Γ_P and Δ_P are given by
Γ̇_P := (y,z,x) ↦ {(2z - 1)/π_P(x,z)}{y - μ_P(x,z)} + γ_P(x)
and
Δ̇_P := (a,z,x) ↦ {(2z - 1)/π_P(x,z)}{a - λ_P(x,z)} + δ_P(x).
The function Γ̇_0 has the property that P_0Γ̇_0 = Γ_0 and similarly, P_0Δ̇_0 = Δ_0 for Δ̇_0.
The estimators Γ and Δ we propose are “one-step” estimators, corresponding to the sample mean of the estimated uncentered influence functions. To construct the estimator, we first need to estimate the unknown nuisance functions: μ_P, λ_P and π_P. In the DRML framework, the analyst selects the estimation method for these nuisance functions. To avoid model misspecification, one could select flexible ML methods such as random forests or generalized boosting for this task. However, the estimation process is completely agnostic to the choice of nuisance estimation method. For example, one could estimate μ_P, λ_P, or π_P with least squares instead. In this case, the estimator would be doubly-robust but would also be fully parametric. As such, reducing bias in the DRML framework depends on selecting suitably flexible estimation methods for the nuisance functions.
Next, we compute the ratio of the plug-in estimators of uncentered influence functions as follows:
χ := Γ/Δ = ℙ_nΓ̇_P/ℙ_nΔ̇_P = ℙ_n[{(2Z - 1)/π}{Y - μ} + γ] / ℙ_n[{(2Z - 1)/π}{A - λ} + δ],
omitting inputs to nuisance estimates.
We introduced the subscript P above to denote that all unknown quantities are replaced by their respective estimators.
In the DRML framework, sample splitting and cross-fitting is typically part of the estimation process (e.g., <cit.>). As such, we also pursue sample splitting, which will slightly modify the procedure described above. One straightforward sample-splitting scheme is to randomly split the study population into two samples D_0 and D_1. First, using D_0, we estimate the nuisance terms μ_0, λ_0, and π_0 and any tuning parameters needed for the ML methods. Then, we compute an estimate of χ_0 as in (<ref>) by plugging in data from D_1 into the previously fitted nuisance functions. Moreover, to increase efficiency, the role of D_0 and D_1 can be swapped and the final estimate can be taken as the average of the two estimates. In fact, more than two sample splits can be utilized for extra stability. This procedure is known as cross-fitting. In addition to using sample splitting for the estimation of the nuisance terms, we can use an ensemble of ML methods. That is, we can estimate each nuisance term with multiple or different ML methods and combine the results to further reduce the possibility of model misspecification. For instance, we can estimate μ_0 using both random forests and boosting and then combine the two estimates by a meta-learner such as Super Learner <cit.>.
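The ensembling step can be sketched as follows: each candidate learner produces out-of-fold predictions of the nuisance regression, and those predictions are combined with non-negative weights chosen by least squares, which is the core of the Super Learner idea. The snippet below is a hand-rolled illustration with two arbitrary candidate learners; it is not the SuperLearner software used in applied work.

import numpy as np
from scipy.optimize import nnls
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 1500
W = rng.normal(size=(n, 3))                        # stands in for the regressors (X, Z)
Y = np.sin(W[:, 0]) + 0.5 * W[:, 2] + rng.normal(scale=0.3, size=n)

learners = [RandomForestRegressor(n_estimators=200, random_state=0),
            GradientBoostingRegressor(random_state=0)]

# Out-of-fold predictions from each candidate learner (5-fold cross-validation).
preds_oof = np.column_stack([cross_val_predict(m, W, Y, cv=5) for m in learners])

# Super Learner-style weights: non-negative least squares on the held-out
# predictions, normalized to sum to one.
w, _ = nnls(preds_oof, Y)
w = w / w.sum()
print("ensemble weights:", np.round(w, 3))

# Refit each learner on the full sample; the ensemble nuisance estimate is the
# weighted combination of their predictions.
fits = [m.fit(W, Y) for m in learners]
def mu_hat(features):
    return sum(wk * m.predict(features) for wk, m in zip(w, fits))

print("example predictions:", np.round(mu_hat(W[:3]), 3))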
One of the main benefits of semiparametric theory is that it facilitates the construction of (1-α)-level confidence intervals for nonparametric estimators. The limiting variance of the resulting estimator coincides with the variance of the efficient influence function, divided by n. We first define the efficient influence function of χ_0 in the following lemma:
Let χ_P = Γ_P/Δ_P be the ratio of the two parameters Γ_P and Δ_P, whose influence functions are Γ̇^*_P and Δ̇^*_P, respectively. The influence function of χ_P at P is given by
χ̇^*_P := (y,a,z,x) ↦ (1/Δ_P)({(2z - 1)/π_P(x,z)}[y - μ_P(x,z) - χ_P{a - λ_P(x,z)}] + γ_P(x) - χ_Pδ_P(x)).
In a nonparametric model, the influence function above is also the efficient influence function.
The proof of this Lemma is provided by <cit.> as Example 6. The next result characterizes the asymptotic distribution of the estimator χ and provides a basis for statistical inference. We now introduce the following additional conditions:
* ‖π-π_0‖ max(‖λ-λ_0‖, ‖μ-μ_0‖) = o_P(n^-1/2)
* ‖π-π_0‖ = o_P(1), ‖μ-μ_0‖ = o_P(1) and ‖λ-λ_0‖ = o_P(1)
* |Y| is bounded almost surely
where ‖f‖ := (P_0f^2)^1/2 for any P_0-square-integrable function f of O. Condition <ref> requires the nuisance estimators to converge at a certain rate; it is satisfied when all estimators converge faster than n^-1/4, but it also extends to more general settings. We are now ready to state the weak convergence of the estimator χ.
Assuming <ref>, <ref>, <ref>, Δ_0 > 0, and there exist ε_1, ε_2> 0 such that ε_1 < P_0(Z = 1 | X) < 1-ε_1 and ε_2 < π(X, 1) < 1-ε_2 almost surely, then
n^1/2(χ- χ_0)d⟶ N(0, P_0χ̇^*2_0).
We note that <ref> can be relaxed to a moment condition. The proof of this Lemma is provided in the Supplementary Material. With weak convergence of the estimator χ established, we can motivate the following (1-α)-level Wald-style confidence interval based on its asymptotic normality:
χ± n^-1/2q_1-α/2(ℙ_nχ̇_P^*2)^1/2
where q_p is the pth quantile of the standard Normal distribution and χ̇^*_P is an estimator of the influence function whose expression is given by Lemma <ref>.
We summarize the DRML procedure outlined in this section in the following algorithm:
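Alongside that summary, the sketch below implements a two-fold cross-fitted version of the one-step LATE estimator and its Wald interval on simulated data. The data-generating process, the random-forest nuisance learners, and the variable names are our own illustrative choices and not the authors' implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def expit(t):
    return 1 / (1 + np.exp(-t))

rng = np.random.default_rng(2)
n = 4000
X = rng.normal(size=(n, 2))                        # measured covariates
U = rng.normal(size=n)                             # unmeasured confounder
Z = rng.binomial(1, expit(0.4 * X[:, 0] - 0.2 * X[:, 1]))              # instrument
A = rng.binomial(1, expit(-0.8 + 1.6 * Z + 0.3 * X[:, 0] + 0.8 * U))   # treatment
Y = 1.0 * A + 0.5 * X[:, 0] + 0.8 * U + rng.normal(scale=0.5, size=n)  # outcome

Gam, Del = np.zeros(n), np.zeros(n)                # uncentered influence values
folds = rng.permutation(n) % 2                     # two-fold cross-fitting
XZ = np.column_stack([X, Z])
Xz1 = np.column_stack([X, np.ones(n)])             # design with Z set to 1
Xz0 = np.column_stack([X, np.zeros(n)])            # design with Z set to 0
for k in (0, 1):
    tr, te = folds != k, folds == k
    # nuisance fits on the training fold: pi_0, mu_0, lambda_0
    pi_m = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[tr], Z[tr])
    mu_m = RandomForestRegressor(n_estimators=300, random_state=0).fit(XZ[tr], Y[tr])
    lam_m = RandomForestRegressor(n_estimators=300, random_state=0).fit(XZ[tr], A[tr])
    pi1 = np.clip(pi_m.predict_proba(X[te])[:, 1], 0.01, 0.99)
    piZ = np.where(Z[te] == 1, pi1, 1 - pi1)       # P(Z = observed z | X)
    mu1, mu0 = mu_m.predict(Xz1[te]), mu_m.predict(Xz0[te])
    la1 = np.clip(lam_m.predict(Xz1[te]), 0, 1)
    la0 = np.clip(lam_m.predict(Xz0[te]), 0, 1)
    muZ = np.where(Z[te] == 1, mu1, mu0)
    laZ = np.where(Z[te] == 1, la1, la0)
    # uncentered influence values for Gamma_0 and Delta_0 on the held-out fold
    Gam[te] = (2 * Z[te] - 1) / piZ * (Y[te] - muZ) + (mu1 - mu0)
    Del[te] = (2 * Z[te] - 1) / piZ * (A[te] - laZ) + (la1 - la0)

chi = Gam.mean() / Del.mean()                      # one-step LATE estimate
eif = (Gam - chi * Del) / Del.mean()               # estimated efficient influence function
se = np.sqrt(np.mean(eif ** 2) / n)                # Wald-style standard error
print(f"LATE estimate {chi:.3f}, 95% CI ({chi - 1.96 * se:.3f}, {chi + 1.96 * se:.3f})")

In this simulated design the individual treatment effect is constant at one, so the reported interval should cover 1.0; swapping the roles of the two folds and averaging, or using more folds, follows the same pattern.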
§.§ DR-Learner for Conditional LATE
Next, we develop a DRML estimator for the conditional LATE. Unlike the estimator we developed for the LATE, it is not straightforward to extend the DRML framework to estimation of the CLATE. This is primarily because the CLATE is an infinite-dimensional object and does not possess the necessary pathwise differentiability property required for the existence of an influence function <cit.>. To address this challenge, we adapt the DR-Learner from <cit.> to our IV setting. We now outline a meta-learner algorithm for the estimation of CLATEs:
The procedure for estimating CLATEs in Algorithm <ref> differs from the one for estimating LATEs in Algorithm <ref> in a crucial way. Step 3 involves an additional regression step, where a set of so-called pseudo-outcomes (i.e., the estimated uncentered influence functions) are regressed on the effect modifiers V, and the resulting estimator is the ratio of the estimated conditional expectations evaluated at v_0. Note that this additional regression step can be done with any nonparametric method. As before, so long as flexible estimation methods are used for the nuisance functions, this method is fully nonparametric.
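A minimal sketch of this meta-learner on simulated data follows; the effect modifier is taken to be V = X_1, and the simulated design, learners, and tuning values are illustrative assumptions rather than the settings used in the application.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def expit(t):
    return 1 / (1 + np.exp(-t))

rng = np.random.default_rng(3)
n = 4000
X = rng.normal(size=(n, 2))
U = rng.normal(size=n)
Z = rng.binomial(1, expit(0.5 * X[:, 0]))
A = rng.binomial(1, expit(-0.8 + 1.6 * Z + 0.3 * X[:, 0] + 0.8 * U))
tau = 1.0 + X[:, 0]                                # true effect varies with V = X_1
Y = tau * A + 0.5 * X[:, 1] + 0.8 * U + rng.normal(scale=0.5, size=n)

# Step 1: nuisance estimation on the training split D0.
d0 = rng.permutation(n) < n // 2
d1 = ~d0
XZ = np.column_stack([X, Z])
pi_m = RandomForestClassifier(n_estimators=300, random_state=0).fit(X[d0], Z[d0])
mu_m = RandomForestRegressor(n_estimators=300, random_state=0).fit(XZ[d0], Y[d0])
lam_m = RandomForestRegressor(n_estimators=300, random_state=0).fit(XZ[d0], A[d0])

# Step 2: pseudo-outcomes (uncentered influence values) on the estimation split D1.
n1 = int(d1.sum())
Xz1 = np.column_stack([X[d1], np.ones(n1)])
Xz0 = np.column_stack([X[d1], np.zeros(n1)])
pi1 = np.clip(pi_m.predict_proba(X[d1])[:, 1], 0.01, 0.99)
piZ = np.where(Z[d1] == 1, pi1, 1 - pi1)
mu1, mu0 = mu_m.predict(Xz1), mu_m.predict(Xz0)
la1, la0 = np.clip(lam_m.predict(Xz1), 0, 1), np.clip(lam_m.predict(Xz0), 0, 1)
muZ = np.where(Z[d1] == 1, mu1, mu0)
laZ = np.where(Z[d1] == 1, la1, la0)
phiY = (2 * Z[d1] - 1) / piZ * (Y[d1] - muZ) + (mu1 - mu0)
phiA = (2 * Z[d1] - 1) / piZ * (A[d1] - laZ) + (la1 - la0)

# Step 3: second-stage regressions of each pseudo-outcome on the effect modifier V = X_1.
V = X[d1][:, [0]]
mY = RandomForestRegressor(n_estimators=300, min_samples_leaf=50, random_state=0).fit(V, phiY)
mA = RandomForestRegressor(n_estimators=300, min_samples_leaf=50, random_state=0).fit(V, phiA)

# Step 4: the CLATE estimate at v_0 is the ratio of the two fitted regressions.
v_grid = np.linspace(-2, 2, 9)[:, None]
clate = mY.predict(v_grid) / mA.predict(v_grid)
for v, c in zip(v_grid.ravel(), clate):
    print(f"v = {v:+.1f}   estimated CLATE = {c:.2f}   (truth = {1.0 + v:.2f})")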
Nonparametric inference for irregular functionals with no influence function is an active area of research <cit.>. Although the literature typically studies the limiting distribution for specific estimators, the behavior of our proposed estimator heavily depends on the particular regression function estimator used in step 3 of Algorithm <ref>. Here, we use the bootstrap to estimate the variance of the resulting estimator. Under some conditions, this approach works with many choices of the regression procedure used in step 3. Specifically, we repeat steps 2–4 in Algorithm <ref> over the bootstrap samples of D_1. The (1 - α)-level bootstrap confidence interval is given by the α/2th and (1-α/2)th quantiles of the estimator based on bootstrap samples.
We note that traditional bootstrap theory requires repeating the entire process from steps 1–4. However, it is practically infeasible to repeat step 1 as it typically involves fitting complex ML methods with large data, which in our case is time consuming. Establishing formal theoretical requirements for the bootstrap procedure when step 1 is computationally expensive is an interesting avenue for future research. In this work, we proceed by assuming that the nuisance estimation errors from step 1 are negligible compared to the bootstrap errors from steps 2–4, which is plausible for larger sample sizes.
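A sketch of this bootstrap is given below. It assumes the pseudo-outcomes and effect modifier from D_1 are already available, as in the previous sketch; simulated stand-ins are generated here only so the snippet runs on its own.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n1 = 2000
# Stand-ins for quantities already computed on D_1 in the previous sketch:
V = rng.uniform(-2, 2, size=(n1, 1))                          # effect modifier
phiY = 0.6 * (1 + V[:, 0]) + rng.normal(scale=2.0, size=n1)   # pseudo-outcome for Gamma
phiA = 0.6 + rng.normal(scale=0.5, size=n1)                   # pseudo-outcome for Delta

def clate_at(v0, V, phiY, phiA):
    """Steps 3-4 of the meta-learner: second-stage regressions and their ratio at v0."""
    mY = RandomForestRegressor(n_estimators=50, min_samples_leaf=100, random_state=0).fit(V, phiY)
    mA = RandomForestRegressor(n_estimators=50, min_samples_leaf=100, random_state=0).fit(V, phiA)
    return mY.predict(np.atleast_2d(v0)) / mA.predict(np.atleast_2d(v0))

v0 = [[0.5]]
point = clate_at(v0, V, phiY, phiA)[0]

# Percentile bootstrap over D_1 only, holding the step-1 nuisance fits fixed.
B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n1, size=n1)
    boot[b] = clate_at(v0, V[idx], phiY[idx], phiA[idx])[0]
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"CLATE at v0 = 0.5: {point:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")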
§ NONPARAMETRIC PROFILING AND SENSITIVITY ANALYSIS
In this section, we develop DRML-based methods for two key analytic tasks related to the monotonicity assumption: profiling and sensitivity analysis.
§.§ Profiling
Profiling, which refers to calculating descriptive statistics for the complier, always-taker, and never-taker populations, is often used to infer whether the LATE might differ in magnitude from the ATE. This has been recommended as a necessary part of any LATE-focused IV analysis <cit.>. For instance, we can assess whether the healthiest patients are likely to be always-takers and the sickest patients never-takers. In such a case, the LATE most likely differs from the ATE, as operative care would likely have different effects among these sub-populations.
The concept of profiling was first outlined for compliers in <cit.>. Specifically, <cit.> use Abadie's κ-weights from semiparametric instrumental variable methods to estimate complier profiles. <cit.> extended complier profiling methods and introduced the technique to health service research and epidemiology. Later research extended profiling to the always-taker and never-taker subpopulations <cit.>. The analytic goal of profiling is to compare the covariate distributions for the complier, always-taker and never-taker populations to the overall patient population profile. If profiling reveals that the complier subpopulation is different with respect to observable covariates that are likely to be predictive of treatment effect magnitude, this suggests that the LATE may not be close to the ATE.
We propose DRML-based methods for nonparametric density estimation of the covariate distributions for compliers, always-takers, and never-takers. We first present the method for a discrete random variable V. Assuming SUTVA, <ref>, <ref>, <ref> and 0 < P(Z=1| X) < 1 almost surely, we can identify the density of the baseline covariate at V=v_0 among the compliers as follows:
P(V=v_0 | A(1) > A(0)) = 𝔼_P[δ_P(X)1(V=v_0)]/𝔼_P[δ_P(X)]
We provide a derivation of the above identification result in the Supplementary Material. We note that <ref> is not required for the identification above. The denominator of the estimand (<ref>) is identical to that of the standard LATE, which can be estimated as ℙ_nΔ̇_P. The uncentered influence function for the numerator of (<ref>) is simply given by (y, a, z, x') ↦Δ̇_0(a,z,x')1(v'=v_0). Here, v' is the component of x' corresponding to V ⊆ X. This follows from the so-called product rule of influence functions; Example 2 of <cit.> provides an analogous derivation for the case with V=X. The DRML-based estimator of (<ref>) is then as follows:
ℙ_nΔ̇_P1(V=v_0)/ℙ_nΔ̇_P = ℙ_n[({(2Z - 1)/π}{A - λ} + δ)1(V=v_0)] / ℙ_n[{(2Z - 1)/π}{A - λ} + δ].
This estimator is also consistent when ℙ_nΔ̇_P is consistent in view of the continuous mapping theorem, and thus inherits the double-robust property of the ATE estimator Δ.
To construct (1-α)-level confidence intervals, we can use Lemma <ref> in the context of our estimator, except that Γ̇^*_P and Γ_P are now replaced with Δ̇_P(A, Z, X)1(V=v_0) - 𝔼_P[δ_P(X)1(V=v_0)] and 𝔼_P[δ_P(X)1(V=v_0)], respectively.
Similarly for always-takers and never-takers, we have the following identification results:
P(V=v_0 | A(1) = A(0)=1) = 𝔼_P[λ_P(X, 0)1(V=v_0)]/𝔼_P[λ_P(X, 0)]
and
P(V=v_0 | A(1) = A(0)=0) = 𝔼_P[{1-λ_P(X, 1)}1(V=v_0)]/𝔼_P[1-λ_P(X, 1)].
For a continuous covariate V, we propose a kernel density estimator of p(v_0 | A(1) > A(0)):
ℙ_n[Δ̇_P K_v_0,h]/ℙ_nΔ̇_P where K_v_0,h := v ↦ (1/h)K((v-v_0)/h).
Here, K is a non-negative kernel function and h>0 is a so-called smoothing bandwidth parameter. This estimator is consistent assuming that h ⟶ 0, nh ⟶∞ and the second derivative of the true density function is bounded. Despite the consistency of the estimator, valid statistical inference for this parameter is challenging to obtain. When the bandwidth parameter is selected to achieve the optimal rate of convergence of the estimator, the smoothing-bias becomes asymptotically non-negligible, so that the confidence intervals are no longer centered at the target parameter <cit.>. Alternatively, <cit.> studies the estimation of a covariate-adjusted density function under a semiparametric framework. They consider the projection of the target density function onto a working model with respect to a chosen metric. This methodology can be explored in the context of profiling as a future work.
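The profiling estimators in this subsection can be sketched as follows, for a binary covariate (strata shares) and a continuous covariate (kernel-smoothed complier density). The simulated data, the plain logistic-regression nuisance fits, and the bandwidth are illustrative stand-ins; in practice one would reuse the cross-fitted ensemble fits from the LATE estimator.

import numpy as np
from sklearn.linear_model import LogisticRegression

def expit(t):
    return 1 / (1 + np.exp(-t))

rng = np.random.default_rng(5)
n = 6000
age = rng.uniform(30, 90, size=n)                  # continuous covariate
sepsis = rng.binomial(1, 0.25, size=n)             # binary covariate
X = np.column_stack([(age - 60) / 15, sepsis])
U = rng.normal(size=n)
Z = rng.binomial(1, expit(0.4 * X[:, 0]))
A = rng.binomial(1, expit(-0.5 + 1.4 * Z - 0.5 * X[:, 0] + 0.6 * U))

# Nuisance fits (plain logistic regressions here for brevity).
XZ = np.column_stack([X, Z])
pi1 = np.clip(LogisticRegression().fit(X, Z).predict_proba(X)[:, 1], 0.01, 0.99)
lam = LogisticRegression().fit(XZ, A)
la1 = lam.predict_proba(np.column_stack([X, np.ones(n)]))[:, 1]
la0 = lam.predict_proba(np.column_stack([X, np.zeros(n)]))[:, 1]
piZ = np.where(Z == 1, pi1, 1 - pi1)
laZ = np.where(Z == 1, la1, la0)
Del = (2 * Z - 1) / piZ * (A - laZ) + (la1 - la0)  # uncentered influence values for Delta_0

# Discrete covariate: P(sepsis = 1) within each principal stratum.
def stratum_share(weight, indicator):
    return np.mean(weight * indicator) / np.mean(weight)

print("P(sepsis=1 | complier)    :", round(stratum_share(Del, sepsis == 1), 3))
print("P(sepsis=1 | always-taker):", round(stratum_share(la0, sepsis == 1), 3))
print("P(sepsis=1 | never-taker) :", round(stratum_share(1 - la1, sepsis == 1), 3))

# Continuous covariate: kernel-smoothed density of age among compliers.
h = 5.0                                            # bandwidth in years (illustrative)
grid = np.linspace(30, 90, 13)
K = np.exp(-0.5 * ((age[None, :] - grid[:, None]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
density = (K * Del[None, :]).mean(axis=1) / Del.mean()
print("complier age density on the grid:", np.round(density, 4))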
§.§ Sensitivity Analysis
In Section <ref>, we raised concerns about the validity of the monotonicity assumption in our study. We outlined one plausible scenario where defiers may exist: when a surgeon's preferences represent a consideration of various risks and benefits, cases may arise where a patient is treated differently from a physician's overall preference, resulting in defiers.
To address the potential violation of monotonicity, we conduct a novel sensitivity analysis to examine the robustness of our estimates to such violations. <cit.> demonstrated that when monotonicity is violated, the LATE can be written as the sum of the standard IV estimand and a term that depends on two parameters, δ_1 and δ_2:
𝔼_0[Y(1)-Y(0) | A(1) > A(0)]=χ_0 +δ_1δ_2/Δ_0.
where δ_1 and δ_2 are defined as:
δ_1 := P_0(A(1) < A(0))
δ_2 :=𝔼_0[Y(1)-Y(0) | A(1) < A(0)]-_0[Y(1)-Y(0) | A(1) > A(0)].
In words, δ_1 represents the proportion of defiers, a value between 0 and 1, and δ_2, on the other hand, represents the difference in average treatment effects between the defiers and compliers. When the outcome is binary, δ_2 must take a value between -2 and 2. The above expression recovers the standard identification result of the LATE when either δ_1 or δ_2 is zero. On the other hand, values of δ_1 or δ_2 that are large in magnitude may imply a discrepancy between the standard IV estimand and the LATE.
We can estimate the standard IV estimand and Δ_0 using the DRML-based methods and treat δ_1 and δ_2 as free parameters in our sensitivity analysis. We now propose a simple analysis based on the mapping:
ξ:= (δ_1, δ_2) ↦χ +δ_1δ_2/ℙ_nΔ̇_P
which provides the estimated LATE under a posited violation of monotonicity indexed by (δ_1, δ_2). This mapping is consistent when χ is consistent, implying the double robustness of the sensitivity mapping.
In our application, we plot the estimated upper or lower bounds on the LATE as a joint function of both δ_1 and δ_2, and find the frontier {(δ_1, δ_2): ξ(δ_1, δ_2) = 0}. These values correspond to the minimal violation of monotonicity resulting in a sign change in the estimated LATE. Based on subject matter knowledge, we then evaluate whether the values of δ_1 and δ_2 at this frontier appear larger or smaller relative to plausible values for the study. Moreover, given a range of plausible values for δ_2, we can estimate and interpret the magnitude of δ_1 necessary to render ξ zero.
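Operationally, the sensitivity analysis amounts to evaluating ξ over a grid of (δ_1, δ_2) values. The sketch below uses placeholder values for the estimated IV estimand and the estimated effect of the IV on treatment; they are not the estimates reported in this study.

import numpy as np

# Placeholder values standing in for the DRML estimates (not the study's numbers):
chi_hat = -0.04        # estimated IV estimand (risk-difference scale)
delta_hat = 0.50       # estimated effect of the IV on treatment, i.e., the mean of Del

def xi(d1, d2):
    """Sensitivity mapping: the LATE implied by a monotonicity violation (d1, d2)."""
    return chi_hat + d1 * d2 / delta_hat

# Grid of defier proportions and complier/defier effect differences.
d1_grid = np.linspace(0, 0.20, 21)                 # delta_1: proportion of defiers
d2_grid = np.linspace(-2, 2, 41)                   # delta_2: bounded by 2 for a binary outcome
D1, D2 = np.meshgrid(d1_grid, d2_grid)
surface = xi(D1, D2)                               # what one would display as a heatmap or contours
print("xi ranges from", surface.min().round(3), "to", surface.max().round(3))

# For a few plausible values of delta_2, the smallest defier share that flips the sign.
for d2 in (0.5, 1.0, 2.0):
    flips = d1_grid[xi(d1_grid, d2) >= 0]
    msg = f"{flips[0]:.3f}" if flips.size else "above 0.20 (not reached on this grid)"
    print(f"delta_2 = {d2:+.1f}: the estimated LATE changes sign once delta_1 reaches {msg}")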
§ SIMULATION STUDY
Next, we investigate the finite-sample properties of the DRML IV estimator in a simulation study. Our aim is to compare the DRML IV estimator to the standard TSLS estimator across a range of scenarios to understand the comparative performance of each method. We analyze three different scenarios that capture different levels of nonlinearity in the data generating process (DGP). We also vary the sample size to probe the large-sample performance of the DRML IV estimator compared to TSLS. The results from the first scenario are reported in this section, while the results from the other two scenarios are provided in the Supplementary Material. First, we describe the common elements of the DGP across the scenarios we consider.
Across all the simulation scenarios, we generate a set of baseline covariates: X = (X_1, X_2), where X_1 ∼Unif(-1,1), X_2 ∼Bernoulli(0.3), such that X_1 ⊥⊥ X_2, and an unobserved covariate U ∼Unif(-1.5, 1.5) such that U ⊥⊥ X. Generation of Z, A, and Y then proceeds on the basis of various functional forms for π_0(x, z) = P_0(Z = z | X = x), λ_0^†(x, z, u) = 𝔼_0(A | X = x, Z = z, U = u), and μ_0^†(x, a, u) = 𝔼_0(Y | X = x, A = a, U = u). Specifically, we generate the instrument as Z ∼Bernoulli(π_0(X, 1)), the treatment as A ∼Bernoulli(λ_0^†(X, Z, U)), and the outcome as Y ∼ N(μ_0^†(X, A, U), 0.2^2). For the specification of μ_0^†, we adopt an additive model μ_0^†(X, A, U) = r(X)A + s(X) + 1.5 U, where r(X) represents the conditional average treatment effect, and s(X) the baseline mean of the outcome.
For each simulation scenario, described in detail below, we vary the sample size n and the functional forms of π_0, λ_0^†, r, and s. We note that, with respect to the observed data distribution, any model λ_0^† is consistent with the monotonicity assumption. That is, we could strictly enforce monotonicity by generating
A(0) ∼Bernoulli(λ_0^†(X, 0, U)), then
A(1) = A(0) + (1 - A(0))·Bernoulli({λ_0^†(X, 1, U) - λ_0^†(X, 0, U)}/{1 - λ_0^†(X, 0, U)}),
and finally set A = (1 - Z)A(0) + ZA(1), but this also results in 𝔼_0(A | X, Z, U) = λ_0^†(X, Z, U). However, explicit generation of (A(0), A(1)) as above is useful for computing the true underlying LATE in the scenarios below: the LATE is the mean of r(X) given A(1) > A(0).
For each scenario, we compute and evaluate three estimators of the LATE. First, we use the standard TSLS estimator, which assumes the main effects of X_1, X_2 are linear. Next, we use the DRML IV estimator, which is based on estimates of the nuisance functions π_0, λ_0, μ_0 — the latter two are induced by λ_0^†, μ_0^†, and the distribution of U. We employ one DRML IV estimator that uses parametric models for the nuisance functions, including only linear main effects for X in each. We then employ a second DRML IV estimator where we estimate the nuisance functions using an ensemble of random forest fits. Specifically, the ensemble combines multiple random forest implementations. This set of estimators allows us to isolate instances in which flexible estimation of the nuisance functions is critical to reduce model misspecification. We refer to these as the parametric and nonparametric DRML estimators, respectively. For all three methods, we record bias and root-mean squared error (RMSE).
In the first scenario, the DGP departs from linearity in several of its component models. Specifically, we use the following functional forms for the models in the DGP; a simulation sketch implementing these forms follows the list:
* π_0(X, 1) = expit(0.4 X_1 - 0.8X_2 + 0.4 · 1(X_1 > 0))
* λ_0^†(X, Z, U) = expit(-0.3 - 0.4X_1 - 0.14 X_2 + 1.1 Z - 0.55 X_1 Z - 0.7 · 1(X_1 > 0) + 0.7 U)
* r(X) = -4 X_1 + 6 {X_2 - 0.3} - 4 {1(X_1 > 0) - 0.5}
* s(X) = 40 - 7X_1 - 8X_2 + 10 · 1(X_1 > 0)
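A minimal sketch that draws data from this scenario, enforces monotonicity as described above, and computes the true LATE by Monte Carlo is given below; the seed and sample size are arbitrary choices of ours.

import numpy as np

def expit(t):
    return 1 / (1 + np.exp(-t))

def lam_dag(x1, x2, z, u):
    """lambda_0^dagger(X, Z, U) = E(A | X, Z, U) under scenario 1."""
    return expit(-0.3 - 0.4 * x1 - 0.14 * x2 + 1.1 * z - 0.55 * x1 * z
                 - 0.7 * (x1 > 0) + 0.7 * u)

rng = np.random.default_rng(6)
n = 100_000                                        # large n to pin down the true LATE
x1 = rng.uniform(-1, 1, size=n)
x2 = rng.binomial(1, 0.3, size=n)
u = rng.uniform(-1.5, 1.5, size=n)                 # unobserved covariate
ind = (x1 > 0).astype(float)                       # the threshold term 1(X_1 > 0)

# Instrument and monotone potential treatments, as described above.
z = rng.binomial(1, expit(0.4 * x1 - 0.8 * x2 + 0.4 * ind))
lam0, lam1 = lam_dag(x1, x2, 0, u), lam_dag(x1, x2, 1, u)
a0 = rng.binomial(1, lam0)
a1 = a0 + (1 - a0) * rng.binomial(1, np.clip((lam1 - lam0) / (1 - lam0), 0, 1))
a = (1 - z) * a0 + z * a1

# Outcome: Y = r(X) A + s(X) + 1.5 U + N(0, 0.2^2).
r = -4 * x1 + 6 * (x2 - 0.3) - 4 * (ind - 0.5)
s = 40 - 7 * x1 - 8 * x2 + 10 * ind
y = r * a + s + 1.5 * u + rng.normal(scale=0.2, size=n)

compliers = a1 > a0
print("complier share      :", round(compliers.mean(), 3))
print("true LATE           :", round(r[compliers].mean(), 3))
print("naive Y difference  :", round(y[a == 1].mean() - y[a == 0].mean(), 3))

The gap between the naive difference in means and the true LATE in this sketch reflects the confounding induced by U, which is exactly what the IV estimators are designed to address.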
We compute the true LATE using the quantities in the DGP as described earlier. In this DGP, there is a threshold effect for X_1 in each of the models. The nonparametric DRML IV estimator uses flexible estimation methods for the nuisance functions, and this should reduce bias from model misspecification. However, we expect the parametric DRML IV estimator and TSLS to be biased due to model misspecification. Additionally, we are also interested in whether the nonparametric DRML IV estimator remains efficient as the sample size grows.
Figure <ref> contains the results from the simulation under scenario 1. The nonparametric DRML IV estimator does display some minor bias, however, the bias decreases with larger sample sizes. In theory, we should be able to reduce this bias further by including additional learners into the ensemble. Notably, the bias for the nonparametric DRML IV estimator is less than that from either of the parametric methods. In fact, the bias for TSLS is almost three times larger than for the nonparametric DRML IV estimator. Critically, these results highlight the need to use flexible methods of estimation for the nuisance functions. The parametric DRML IV estimator does reduce bias compared to TSLS, but not nearly to the extent of the nonparametric DRML IV estimator.
Next, we review the results for RMSE. When the sample size is smaller, here 500, the difference in RMSE between TSLS and DRML IV is smaller. However, once the sample size is 1000, the nonparametric DRML IV estimator clearly dominates the other two methods in terms of RMSE, with additional gains being possible with larger sample sizes. This result is not surprising given the level of bias present for the parametric methods of estimation.
§ THE ASSESSMENT OF THE EFFICACY OF SURGERY FOR CHOLECYSTITIS
§.§ DRML IV Estimates
We now apply our methods to the analysis of whether operative management is effective for patients with cholecystitis. First, we provide a more in-depth discussion of the data and specifically the instrument. As we noted above, the data include a small set of demographic measures including indicators for race, sex, age, and insurance type. Next, the data include measures of baseline patient frailty. The measures of frailty are indicators for severe sepsis or septic shock and pre-existing disability. Next, there are indicators for 31 comorbidities based on Elixhauser indices <cit.>. Comorbidities are prior medical conditions that may complicate treatment for cholecystitis. These comorbidities include conditions such as anemia, hypertension, and obesity. Finally, we adjust for hospital level effects using fixed effects for each hospital, which is equivalent to conducting a within-hospital analysis. We consider such a within-hospital approach an essential part of the analysis, since it implies that we are comparing patients who receive care in the same hospital, which should hold fixed other systemic factors of care that might affect outcomes. In our judgement, this increases the plausibility of unconfoundedness and the exclusion restriction.
We measure the IV in our study based on the percentage of times a surgeon operates when presented with an EGS condition (TTO). We plot the TTO distribution in Figure <ref>. While many surgeons operate more than 50% of the time, there is wide variation in the TTO distribution. A surgeon at the mean of the TTO distribution operates 67% of the time.
Next, we split TTO at the median and calculate the percentage of times a patient received surgery for cholecystitis in each of the two categories. We find that patients who received care from a surgeon with a TTO below the median had an operation 45% of the time. Patients who received care from a surgeon with a TTO above the median had an operation 95% of the time.
Our primary outcome is an adverse event following treatment. The measure of adverse events is an indicator that is 1 if a patient either died or had a prolonged length of stay. Prolonged length of stay (PLOS) is defined as hospital and operation-specific length of stay being greater than the 75th percentile.
The first phase in the analysis is estimating the effect of surgery on adverse outcomes using IV methods, i.e., targeting the LATE. In this step of the analysis, we contrast DRML methods with more standard parametric methods. Next, we profile the key principal strata and review the likelihood of treatment by complier-status interactions. We then explore whether the treatment effects are heterogeneous and seek to understand whether any variation is associated with key baseline covariates.
For our DRML estimator of the LATE, one key practical decision necessary for the applied analysis is the choice of ML methods for estimation of the nuisance functions. We can select either a single learner such as a random forest, or instead we can use an ensemble of learners. In our analysis, we use an ensemble of three different learners: a generalized linear model, a generalized additive model, and a random forest. Figure <ref> displays the results for three different estimation strategies: unadjusted, adjustment via a parametric model (TSLS), and adjustment via our DRML estimator. For the unadjusted analysis, we estimate the risk of an adverse event as a function of being exposed to surgery without controlling for any baseline covariates. In the unadjusted analysis, we find that the risk of an adverse outcome is nearly 8% lower for patients who had surgery compared to patients who underwent medical management. Given the invasive nature of surgery, it may be the case that healthier patients are more likely to be selected for surgery. This bias may well be in operation, given that the two adjusted IV estimates are nearly 50% smaller than the unadjusted estimate. That is, once we adjust for observed covariates and account for unobserved confounders with a LATE-focused IV approach, we find that the effect of surgery is still beneficial — the risk of an adverse outcome is approximately 4% lower for the compliers. That is, the LATE in our context is the effect of surgery for those patients who had surgery because they were treated by a surgeon with a higher preference for operative care. We explore how the compliers differ from always-takers and never-takers next. In addition, the confidence intervals do not include zero. The DRML IV estimates do not differ significantly from estimates based on the fully parametric TSLS. In brief, for any given analysis, the additional flexibility of ML methods may not change the results compared to parametric methods. However, the additional flexibility protects against possible bias from model misspecification.
§.§ Profiling
Next, we conduct a profiling analysis to describe the characteristics of patients in each of the key principal strata. Figure <ref> contains covariate profiles for compliers, always-takers, and never-takers for a subset of key covariates. Across many of the covariates, the estimates for the never-taker subpopulation are imprecise due to the limited sample size. However, it is often the case that the always-taker and complier populations differ in important respects. For example, always-takers are much less likely to be septic compared to compliers. In addition, they are less likely to have some form of disability or be Medicare patients. Overall, this pattern is consistent with other work that has found that for conditions like this one, always-takers tend to be healthier than compliers, while never-takers tend to display greater frailty <cit.>.
Next, we conduct a more refined profiling analysis for age. Extant profiling approaches based on parametric methods would estimate the average age within each of the principal strata. Our DRML IV methods allow us to estimate a smoothed density for continuous covariates by principal stratum. Figure <ref> contains the smoothed age density for compliers, always-takers, and never-takers. In the first panel, we observe that always-takers are most likely to be patients between 30 and 60. That is, younger patients are more likely to be surgical patients, presumably because they are assumed to be better able to tolerate the invasive nature of surgery. The smoothed density for compliers rises with age until it levels off around age 50. As such, we observe that a surgeon's TTO tends to be decisive for older patients.
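As a rough illustration of how such profiles are built from fitted nuisance functions, the sketch below uses the identification results derived in the supplement: covariates are weighted by λ̂(X,1) − λ̂(X,0) for compliers, by λ̂(X,0) for always-takers, and by 1 − λ̂(X,1) for never-takers. The inputs are assumed to be cross-fitted predictions of E[A | X, Z = z]; the function name is hypothetical.

```python
import numpy as np

def strata_profiles(v, lam1, lam0):
    """Weighted means of a covariate v for compliers, always-takers, and never-takers.

    lam1, lam0 : cross-fitted estimates of E[A | X, Z = 1] and E[A | X, Z = 0].
    """
    weights = {
        "complier": lam1 - lam0,    # P(A(1) > A(0) | X)
        "always-taker": lam0,       # P(A(1) = A(0) = 1 | X)
        "never-taker": 1.0 - lam1,  # P(A(1) = A(0) = 0 | X)
    }
    return {name: np.sum(w * v) / np.sum(w) for name, w in weights.items()}
```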
§.§ Heterogeneous Treatment Effects
Next, we consider the possibility that the effect of surgery varies by key patient subgroups. In our analysis, we focus on three key baseline covariates: age, sepsis, and the number of comorbidities. We identified these three covariates as important effect modifiers, since the effectiveness of surgery is often a function of baseline health. Patients that are generally healthy before they undergo surgery tend to respond better to surgical treatment. As such, older patients, patients with multiple comorbidities, and those with sepsis may be at higher risk for an adverse event even if surgery is generally effective on average.
As we outlined above, the DR-Learner approach to heterogeneous treatment effect estimation provides estimates of individual-level treatment effects (ITEs) that vary with effect modifiers. The ITEs are then aggregated by the key effect modifiers to allow us to observe whether the treatment effect of surgery varies by that covariate. We begin with Figure <ref>, which displays the distribution of ITEs for the patient population in the study. The distribution of the ITEs provides some insight into the overall level of effect heterogeneity in our analysis. In Figure <ref>, the dashed line marks the sample average, which corresponds to the DRML estimate reported in Figure <ref>. We observe that a large part of the distribution is shifted below zero, which represents a beneficial effect of surgery. However, the average effect also hides considerable variation. That is, we also observe that a substantial mass of the distribution of individual treatment effects is concentrated above zero, which implies that for these patients surgery increased the risk of an adverse outcome.
Next, we seek to investigate whether specific baseline covariates explain variation in the individual-level treatment effects. To that end, we aggregate the individual-level treatment effects to estimate conditional average treatment effects for specific patient subgroups. Critically, our method does not impose any linearity assumptions on how the estimated effects vary as a function of a multi-valued effect modifier such as age or the number of comorbidities. First, we use the number of comorbidities to estimate CLATEs, since it is often a strong predictor of adverse outcomes. Figure <ref> contains the average treatment effect conditional on the number of comorbidities. Based on the results in this figure, it does not appear that the number of comorbidities explains the variation in treatment effects. That is, the average effect within each subgroup is very close to, if not identical to, the estimated LATE.
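A simple way to form such subgroup estimates, assuming the doubly robust scores gamma and delta from the LATE step are available, is to take the ratio of their within-subgroup means; this is an illustrative simplification rather than the exact second-stage DR-Learner regression used in the paper.

```python
import pandas as pd

def clate_by_level(gamma, delta, modifier):
    """Conditional LATE estimates by level of a discrete effect modifier."""
    df = pd.DataFrame({"gamma": gamma, "delta": delta, "v": modifier})
    means = df.groupby("v").mean()
    return means["gamma"] / means["delta"]

# e.g., CLATE by number of comorbidities:
# clate_by_level(gamma, delta, comorbidity_count)
```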
Another key advantage of using DRML methods for treatment effect heterogeneity is that we can investigate whether effects vary by more complex patient subgroups. Next, we estimate how the treatment effects vary by sepsis, age, and the number of comorbidities. That is, we might expect older, septic patients with a higher number of comorbidities to be at the highest risk for an adverse effect. Figure <ref> contains two heatmaps that each display the CLATE by age, the number of comorbidities, and sepsis status. First, we find that septic patients are more likely to have an adverse outcome. That is, for non-septic patients, we observe that the effect of surgery is minimal to protective, with the benefit increasing with age. For septic patients, the probability of an adverse event increases with the number of comorbidities. Interestingly, older patients with a small number of comorbidities also benefited from surgery. Thus, sepsis and comorbidities appear to be important effect modifiers. We stress that these results should be considered exploratory. Validation of these results would require hypothesis testing with a second data source.
§.§ Sensitivity Analysis
Lastly, we investigate whether our study results are robust to violations of the monotonicity assumption. As we outlined above, instruments like TTO may be prone to violations of the monotonicity assumption. This can occur when a physician treats a patient contrary to his or her general preference for surgery. In our proposed sensitivity analysis, we plot how the upper bound for the estimated LATE varies as a function of the two sensitivity parameters: δ_1, and δ_2. We plot the results in Figure <ref>. In Figure <ref>, the x-axis corresponds to δ_1 and y-axis corresponds to δ_2. The dashed line represents the frontier where the upper bound on the LATE equals zero.
In our context, this dashed line divides the plot into two regions. In the lower-left region, the upper bound on the LATE indicates that surgery is beneficial, since it reduces the risk of an adverse event. In the upper-right region, the upper bound on the LATE indicates that surgery is potentially harmful, since a positive LATE corresponds to an increased risk of an adverse event with surgery. How do we use this plot to assess sensitivity? We think it is helpful to focus on a few values of δ_1 and assess the corresponding ranges of δ_2 values that are consistent with the sign of the original LATE estimate. For example, we note that if 25% of the population were defiers, the sign of the LATE would be unchanged if the average difference in defier and complier risks is less than 10%. If the proportion of defiers is less than 5%, the sign of the estimated LATE also remains unchanged if the average difference in defier and complier risks is 50% or below.
We observe that our primary estimate of the LATE (assuming monotonicity) is robust, with respect to its sign, over most reasonable values of the sensitivity parameters. For instance, we can allow up to 10% of defiers if surgical treatment for them is up to 20% more likely to induce adverse events compared to compliers. Based on these results, we may conclude that the sign of treatment effect is robust to reasonable violations of monotonicity.
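The figure can be reproduced in outline by evaluating the bound on a grid of sensitivity parameters and tracing its zero contour. Because the explicit bound formula is not restated in this section, it is abstracted below as a user-supplied function; only the plotting logic is shown.

```python
import numpy as np
import matplotlib.pyplot as plt

def sensitivity_frontier(upper_bound, d1_max=0.3, d2_max=0.6, n_grid=200):
    """Plot the upper bound on the LATE over (delta_1, delta_2) and its sign-change frontier.

    upper_bound : vectorized callable (delta_1, delta_2) -> upper bound on the LATE.
    """
    d1, d2 = np.meshgrid(np.linspace(0, d1_max, n_grid), np.linspace(0, d2_max, n_grid))
    ub = upper_bound(d1, d2)
    plt.contourf(d1, d2, ub, levels=20)
    plt.colorbar(label="Upper bound on the LATE")
    plt.contour(d1, d2, ub, levels=[0.0], linestyles="dashed")  # frontier where the bound is zero
    plt.xlabel(r"$\delta_1$ (proportion of defiers)")
    plt.ylabel(r"$\delta_2$ (defier-complier risk difference)")
    plt.show()
```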
§ CONCLUSION
Methodologically, we offer a unified estimation framework for IV methods that incorporates ML methods to avoid bias from model misspecification, while maintaining established inferential properties. Our simulation study demonstrates the effectiveness of the DRML IV estimator in reducing bias and achieving parametric-like estimation error rates and inferential properties. Furthermore, we leveraged the DRML framework to develop novel methods for other aspects of IV designs. Specifically, we used DRML methods to profile compliance classes under principal stratification, and we extended the DR-Learner for heterogeneous effects to the IV framework. This enables us to examine the potential variation in the effect of surgery across different patients by assessing key effect modifiers. Finally, we developed a new method of sensitivity analysis targeted at the monotonicity assumption, which can be questionable in the case of IVs such as TTOs. Importantly, our findings display the robustness of our estimates when faced with fairly large deviations from this assumption.
Our study also contributes to the developing literature on optimal treatment strategies for EGS conditions. Given the ethical issues with randomization, the literature on this topic has been generally based on observational studies using IV methods. EGS conditions are heterogeneous, and, as a result, granular details are required about specific conditions to provide effective evidence-based guidance to clinicians. As such, various studies have focused on developing specific guidance for given EGS conditions and subgroups <cit.>. Our study contributes to this body of evidence by focusing on cholecystitis — inflammation of the gallbladder, which is one of the most common EGS conditions. Our finding indicates that, on average, operative management for cholecystitis led to lower rates of adverse events. Moreover, we found that the number of comorbidities was not predictive of variation in the effect of surgery. However, age emerged as a strong effect modifier, as patients over the age of 70 seemed to be more susceptible to adverse events following surgery.
There are several avenues for future developments. Firstly, a broader set of simulations and applied analyses would be valuable in providing guidance on the most robust ensemble of learners across diverse settings. Secondly, more investigation is needed to elucidate the formal properties of the proposed DR-Learner of the CLATE under various methods for second-stage regression. Thirdly, it will also be of theoretical interest to determine minimax rates and corresponding optimal estimators of the CLATE, as was explored for the conditional average treatment effect in <cit.>. Finally, it will be important to extend the ideas proposed in this paper to IV settings that do not assert monotonicity, moving beyond the CLATE and LATE, e.g., under certain structural models for the outcome, or alternatively constructing tight bounds on the (conditional) average treatment effect.
Supplement to “Doubly robust machine learning for an instrumental variable study of surgical care for cholecystitis”
§ CALCULUS OF INFLUENCE FUNCTIONS
In this section, we use the notation P to represent a generic distribution that belongs to a statistical model denoted as 𝒫. Let P_0 ∈𝒫 represent the unknown data-generating distribution of observations O, and let ℙ_n denote its empirical distribution function associated with {O_1, …, O_n}. For any P-integrable function f, we define Pf = ∫ f dP. Specifically, we have ℙ_n f=1/n∑_i=1^n f(O_i). We also adopt the notation ‖f‖ = (P_0 f^2)^1/2. For a, b ∈ℝ, we define a ∧ b = min(a, b).
We consider ψ_1, ψ_2 as real-valued functionals 𝒫↦ℝ where 𝒫 is a collection of distributions or a nonparametric model. We assume that these functionals are differentiable in a suitable sense <cit.>, allowing for the existence of corresponding influence functions. We denote the influence functions as φ̇_1^* and φ̇_2^*, and the uncentered influence functions as φ̇_1 and φ̇_2, meaning that P_0φ̇_j=ψ_j(P_0) for j = 1, 2. Define an estimator of ψ_j(P_0) = P_0 φ̇_j given by ψ_j = ℙ_n φ_j, the sample mean of estimated uncentered influence functions. Then ψ_j satisfies the following error decomposition:
ψ_j - ψ_j(P_0) = ℙ_n φ_j +ℙ_n φ̇_j-ℙ_n φ̇_j- P_0 φ̇_j=ℙ_n φ̇^*_j + R_1+ R_2
where
R_1 := (ℙ_n-P_0)(φ_j-φ̇_j) and R_2:=P_0(φ_j-φ̇_j).
Suppose for now that both remainder terms R_1 and R_2 are o_P(n^-1/2), then the estimator is asymptotically linear:
ℙ_n φ_j - P_0 φ̇_j =ℙ_n φ̇^*_j + o_P(n^-1/2).
This representation immediately establishes the consistency of the estimator by the weak law of large numbers. Moreover, if we assume that the variance of φ̇^*_j is finite, we can establish the asymptotic normality of the estimator in view of the central limit theorem. Consequently, this informs the construction of confidence intervals that are asymptotically exact.
The first remainder term R_1 is commonly referred to as an empirical process term. When the estimator φ_j is constructed using independent data from ℙ_n, for instance by sample-splitting or cross-fitting, Lemma 2 of <cit.> states that:
R_1 = O_P(n^-1/2‖φ_j-φ̇_j‖).
This term becomes o_P(n^-1/2) if ‖φ_j-φ̇_j‖=o_P(1), or in other words, φ_j converges in quadratic mean to φ̇_j. The term R_2 is often known as an “asymptotic bias” or “asymptotic drift” term, and is typically second-order, i.e., can be bounded by a product or square of nuisance function errors.
Therefore, R_2 = o_P(n^-1/2) is implied when, for example,
relevant nuisance errors converge in L_2-norm to zero at rate o_P(n^-1/4). This dependence on the product of nuisance error biases has been referred to as rate double-robustness <cit.>.
In sum, asymptotic linearity of the estimator ψ_j is guaranteed when ‖φ_j-φ̇_j‖=o_P(1), R_2 = o_P(n^-1/2), and when employing sample-splitting.
The next result shows that the fraction of two asymptotically linear estimators is also asymptotically linear.
Suppose for j=1,2, ℙ_nφ_j - P_0φ̇_j = ℙ_nφ̇^*_j + o_P(n^-1/2), where φ̇^*_j is a mean-zero function. Assuming there exists ε >0 such that |ℙ_nφ_2| ∧ |P_0φ̇_2| > ε,
ℙ_nφ_1/ℙ_nφ_2 - P_0φ̇_1/P_0φ̇_2 = ℙ_n{(P_0φ̇_2)^-1(φ̇^*_1 - φ̇^*_2 P_0φ̇_1/P_0φ̇_2)} + o_P(n^-1/2).
By adding and subtracting relevant terms, we obtain
ℙ_nφ_1/ℙ_nφ_2 - P_0φ̇_1/P_0φ̇_2 = 1/ℙ_nφ_2(ℙ_nφ_1 - ℙ_nφ_2 P_0φ̇_1/P_0φ̇_2)
= 1/P_0φ̇_2(ℙ_nφ_1 - ℙ_nφ_2 P_0φ̇_1/P_0φ̇_2) + (1/ℙ_nφ_2 - 1/P_0φ̇_2)(ℙ_nφ_1 - ℙ_nφ_2 P_0φ̇_1/P_0φ̇_2).
For the first term of the above display, it follows that
ℙ_nφ_1 - ℙ_nφ_2 P_0φ̇_1/P_0φ̇_2 = (ℙ_nφ̇^*_1 + P_0φ̇_1) - (ℙ_nφ̇^*_2 + P_0φ̇_2) P_0φ̇_1/P_0φ̇_2 + o_P(n^-1/2)
= ℙ_n(φ̇^*_1 - φ̇^*_2 P_0φ̇_1/P_0φ̇_2) + o_P(n^-1/2).
For the second term, we obtain that
(1/ℙ_nφ_2 - 1/P_0φ̇_2)(ℙ_nφ_1 - ℙ_nφ_2 P_0φ̇_1/P_0φ̇_2)
= (P_0φ̇_2 - ℙ_nφ_2/ℙ_nφ_2 P_0φ̇_2)((ℙ_nφ_1 - P_0φ̇_1) + P_0φ̇_1(1 - ℙ_nφ_2/P_0φ̇_2)).
By the assumption that both ℙ_nφ_2 and P_0φ̇_2 are bounded away from zero, together with the asymptotic linearity of both estimators, we conclude that
(1/ℙ_nφ_2 - 1/P_0φ̇_2)(ℙ_nφ_1 - ℙ_nφ_2 P_0φ̇_1/P_0φ̇_2) = O_P(n^-1) + O_P(n^-1) = o_P(n^-1/2).
This concludes the proof of the claim.
The proof of Lemma 3.2 follows as a corollary. We introduced the following conditions in the main text:
* ‖π-π_0‖ max(‖λ-λ_0‖, ‖μ-μ_0‖) = o_P(n^-1/2)
* ‖π-π_0‖ = o_P(1), ‖μ-μ_0‖ = o_P(1) and ‖λ-λ_0‖ = o_P(1).
* |Y| is bounded almost surely
First, assuming that sample-splitting is used to estimate μ_0, λ_0 and π_0, Lemma 2 of <cit.> implies that the empirical process term R_1 is controlled so long as ‖φ_j - φ̇_j‖ = o_P(1), for j = 1, 2, which is implied by <ref>, <ref> as well as the assumption that π and π_0 are bounded away from zero and one almost surely (see Example 2 in Section 4.2 of <cit.>).
We will now establish R_2 = o_P(n^-1/2). By the well-known product bias for the influence function of the average treatment effect functional <cit.>, and a standard application of the Cauchy-Schwarz inequality, we have
|P_0(Γ̇_P-Γ̇_0)| ≲ ‖μ-μ_0‖‖π-π_0‖ and |P_0(Δ̇_P-Δ̇_0)| ≲ ‖λ-λ_0‖‖π-π_0‖,
where we write “‖μ-μ_0‖” and “‖λ-λ_0‖” in place of ‖μ(· , 0) - μ_0(·, 0)‖ + ‖μ(· , 1) - μ_0(·, 1)‖ and ‖λ(· , 0) - λ_0(·, 0)‖ + ‖λ(· , 1) - λ_0(·, 1)‖, respectively (see Example 2 in Section 4.3 of <cit.>).
Therefore, R_2 is controlled if ‖μ-μ_0‖‖π-π_0‖=o_P(n^-1/2) and ‖λ-λ_0‖‖π-π_0‖=o_P(n^-1/2), which is implied by <ref>. Next, we claim |ℙ_nφ_2| ∧ |P_0φ̇_2| > ε for some ε > 0. This is implied assuming Δ_0 > ε_1 and ε_3 < π(X)< 1-ε_3.
We now invoke Lemma <ref> where φ_1 and φ̇_1 correspond to Γ̇_P and Γ̇_0, and φ_2 and φ̇_2 correspond to Δ̇_P and Δ̇_0. Then Lemma <ref> states
χ - χ_0 = ℙ_nΓ̇_P/ℙ_nΔ̇_P - P_0Γ̇_0/P_0Δ̇_0 + o_P(n^-1/2)
= ℙ_n{(1/Δ_0)(Γ̇_0^* - Δ̇_0^*χ_0)} + o_P(n^-1/2)
= ℙ_n{(1/Δ_0)(((2Z - 1)/π_0)[Y - μ_0 - χ_0{A - λ_0}] + γ_0 - χ_0δ_0)} + o_P(n^-1/2).
The first term of the above display corresponds to the sample mean of the efficient influence function of χ_0 as provided by Lemma 3.1 of the main text. Thus,
multiplying both sides by n^1/2 yields
n^1/2(χ- χ_0) =n^1/2ℙ_nχ̇^*_0 + o_P(1).
Additionally, it follows that
Var(n^1/2ℙ_nχ̇^*_0) = P_0χ̇^*2_0 < ∞
assuming Var[Y] < ∞, Δ_0 > ε_1 and ε_2 < P_0(Z = 1 | X) < 1-ε_2. Therefore, we conclude
n^1/2(χ- χ_0) d⟶ N(0, P_0χ̇^*2_0)
in view of the central limit theorem and Slutsky's theorem.
§ PROFILING
For the derivation of the identification results, we frequently refer to the following assumptions from the main text:
* Relevance: P_0(A(1) = A(0)) ≠ 1,
* Effective random assignment: For all z,a ∈{0,1}, Z ⊥⊥ (A(z), Y(z, a)) | X,
* Exclusion restriction: Y(z,a) = Y(z',a) for all z,z',a ∈{0,1}, and
* Monotonicity: P_0(A(1) < A(0)) = 0.
The following (well-known) results also become useful.
Assuming
SUTVA, <ref>, <ref>, and 0 < P_0(Z = 1 | X) < 1 almost surely, then, for V⊆ X, it follows that
P_0(A(1) > A(0) | V=v) = 𝔼_0(A | Z=1, V=v) - 𝔼_0(A | Z=0, V=v),
P_0(A(1) = A(0) = 1 | V=v) = 𝔼_0(A | Z=0, V=v), and
P_0(A(1) = A(0) = 0 | V=v) = 𝔼_0(1-A | Z=1, V=v).
Let V' be X ∖ V such that X=(V, V'). Furthermore, we assume that V' follows the marginal density dQ. It then follows that
P_0(A(1) > A(0) | V=v) = 𝔼_0(A(1) - A(0) | V=v)
= ∫ 𝔼_0(A(1) - A(0) | X=x) dQ(v')
= ∫{𝔼_0(A | Z=1, X) - 𝔼_0(A | Z=0, X)} dQ(v')
= 𝔼_0(A | Z=1, V=v) - 𝔼_0(A | Z=0, V=v)
where the first equality follows by <ref>, the second and the last equalities follow by the tower property of expectations, and the third equality follows by SUTVA, <ref>, and 0 < P_0(Z = 1 | X) < 1 almost surely. Similarly, it follows that
P_0(A(0) = A(1) = 1 | V=v)
= ∫ 𝔼_0(A(0)A(1) | X=x) dQ(v')
= ∫ 𝔼_0(A(0) | X=x) dQ(v') By <ref>
= ∫ 𝔼_0(A(0) | X=x, Z=0) dQ(v') By <ref> and P_0(Z=0| X)>0 a.s.
= ∫ 𝔼_0(A | X=x, Z=0) dQ(v') By SUTVA
= 𝔼_0(A | V=v, Z=0)
and also that
P_0(A(0) = A(1) = 0 | V=v)
= ∫ 𝔼_0(1-A(0)A(1) | X=x) dQ(v')
= ∫ 𝔼_0(1-A(1) | X=x) dQ(v') By <ref>
= ∫ 𝔼_0(1-A(1) | X=x, Z=1) dQ(v') By <ref> and P_0(Z=1| X)>0 a.s.
= ∫ 𝔼_0(1-A | X=x, Z=1) dQ(v') By SUTVA
= 𝔼_0(1-A | V=v, Z=1).
This concludes the claim.
Given the results above, the identification associated with the profiling parameters proceed as follows:
Assuming
SUTVA, <ref>, <ref>, and 0 < P_0(Z = 1 | X) < 1 almost surely, then, for V⊆ X, it follows that
P_0(V=v_0 | A(1) > A(0)) = 𝔼_0[1(V=v_0){𝔼_0(A | X, Z=1) - 𝔼_0(A | X, Z=0)}]/𝔼_0{𝔼_0(A | X, Z=1) - 𝔼_0(A | X, Z=0)},
P_0(V=v_0 | A(1) = A(0) = 1) = 𝔼_0[1(V=v_0) 𝔼_0(A | X, Z=0)]/𝔼_0{𝔼_0(A | X, Z=0)}, and
P_0(V=v_0 | A(1) = A(0) = 0) = 𝔼_0[1(V=v_0) 𝔼_0(1-A | X, Z=1)]/𝔼_0{𝔼_0(1-A | X, Z=1)}.
We denote by Q the marginal distribution of V. Then, it follows that
P_0(V=v_0 | A(1)>A(0)) = ∫ 1(v=v_0) P_0(V=v | A(1)>A(0)) dQ(v)
= ∫ 1(v=v_0) P_0(A(1)>A(0) | V=v) dQ(v) / P_0(A(1)>A(0))
= 𝔼_0[1(V=v_0){𝔼_0(A | X, Z=1) - 𝔼_0(A | X, Z=0)}]/𝔼_0{𝔼_0(A | X, Z=1) - 𝔼_0(A | X, Z=0)}
where the second equality follows by Bayes' rule and the last equality invokes the identification results provided by Lemma <ref> under all required assumptions. Following analogous steps, we also obtain
P_0(V=v_0 | A(1)=A(0) = 1) = 𝔼_0[1(V=v_0) 𝔼_0(A | X, Z=0)]/𝔼_0{𝔼_0(A | X, Z=0)}
and
P_0(V=v_0 | A(1)=A(0) = 0) = 𝔼_0[1(V=v_0) 𝔼_0(1-A | X, Z=1)]/𝔼_0{𝔼_0(1-A | X, Z=1)}
in view of Lemma <ref>.
§.§ Estimation and inference
In this section, we discuss the estimation and inference of the parameter related to the profiling. Following the identification results given by Lemma <ref>, we define the complier profile ψ_co
ψ_co(P_0) := 𝔼_0[1(V=v_0){𝔼_0(A | X, Z=1) - 𝔼_0(A | X, Z=0)}]/𝔼_0{𝔼_0(A | X, Z=1) - 𝔼_0(A | X, Z=0)} = 𝔼_0[1(V=v_0) δ_0(X)]/𝔼_0[δ_0(X)]
As discussed in the main text, the denominator of the above display can be estimated by ℙ_nΔ̇_P. We thus focus on the estimation of the numerator. We first derive the influence function of the functional 𝔼_0[1(V=v_0) δ_0(X)]. Following <cit.>, we denote by 𝕀𝔽 an operator that maps real-valued functionals to their (efficient) influence functions in a nonparametric model. The influence function of the numerator can be obtained as follows:
𝕀𝔽(𝔼_P[δ_P(X)1(V=v_0)])
= ∑_x∑_v𝕀𝔽{δ_P(x)p(x)}1(v=v_0)+δ_P(x)p(x)𝕀𝔽{1(v=v_0)}
= ∑_x∑_v𝕀𝔽{δ_P(x)p(x)}1(V=v_0)+δ_P(x)p(x){1(v=v_0)-P(v=v_0)}
= ∑_x∑_v(𝕀𝔽{δ_P(x)p(x)}+δ_P(x)p(x))1(v=v_0)-δ_P(x)p(x)P(v=v_0)
= Δ̇_P(a, z, x)1(v=v_0) - 𝔼_P[δ_P(X)1(V=v_0)]
where the first line is the application of the product rule of the influence function <cit.> and the last line is derived by <cit.> as Example 2. This result relies on the assumption that V is a discrete random variable. When V is continuous, the influence function of v ↦1(v=v_0) does not exist since it fails to satisfy the required differentiability assumption. We now apply Lemma <ref> in this context and obtain the following result:
ℙ_n[Δ̇_P1(V=v_0)]/ℙ_nΔ̇_P - ψ_co
= ℙ_n{(1/Δ_0)({Δ̇_01(V=v_0) - P_0[δ_01(V=v_0)]} - Δ̇^*_0ψ_co)} + o_P(n^-1/2),
assuming (i) ‖π-π_0‖‖λ-λ_0‖ = o_P(n^-1/2) and (ii) there exist ε_1, ε_2, ε_3 > 0 such that Δ_0 > ε_1, ε_2 < P_0(Z = 1 | X) < 1-ε_2 and ε_3 < π(X)< 1-ε_3 almost surely. Furthermore, we denote the influence function of ψ_co at P as follows:
φ̇^*_P, co := (a, z, x) ↦ (1/Δ_P)({Δ̇_P1(V=v_0) - P[δ_P1(V=v_0)]} - Δ̇^*_Pψ_co(P))
A (1-α)-level confidence interval can then be obtained as
ℙ_nΔ̇_P1(V=v_0)/ℙ_nΔ̇_P± n^-1/2q_1-α/2(ℙ_nφ̇^*2_P, co)^1/2
where ℙ_n φ̇^*2_P, co is an estimated asymptotic variance of the estimator.
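Schematically, the estimator and interval above can be computed directly from the evaluated uncentered scores Δ̇_P(A_i, Z_i, X_i) (denoted delta below), obtained with cross-fit nuisance estimates; this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def complier_profile(delta, V, v0, alpha=0.05):
    """Estimate psi_co = P(V = v0 | complier) with a plug-in Wald confidence interval."""
    ind = (V == v0).astype(float)
    num, den = delta * ind, delta
    psi = num.mean() / den.mean()
    eif = (num - psi * den) / den.mean()          # influence-function-based variance estimate
    se = eif.std(ddof=1) / np.sqrt(len(delta))
    z = norm.ppf(1 - alpha / 2)
    return psi, (psi - z * se, psi + z * se)
```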
Following the identification results given by Lemma <ref>, we define the always-taker profile ψ_at as
ψ_at(P_0) := 𝔼_0[1(V=v_0) 𝔼_0(A | X, Z=0)]/𝔼_0{𝔼_0(A | X, Z=0)} = 𝔼_0[1(V=v_0) λ_0(X, 0)]/𝔼_0[λ_0(X, 0)]
Example 2 of <cit.> provides the influence function of the denominator above and its uncentered analogue, which are given by
Λ̇^*_P,0 := (a, z, x) ↦ ((1-z)/(1-π_P(x))){a-λ_P(x, 0)} + λ_P(x, 0) - 𝔼_P[λ_P(X, 0)] and
Λ̇_P,0 := (a, z, x) ↦ ((1-z)/(1-π_P(x))){a-λ_P(x, 0)} + λ_P(x, 0).
Then, by a derivation analogous to that for the complier profile, the influence function of ψ_at can be obtained as
φ̇^*_P, at(a, z, x) := (1/𝔼_P[λ_P(X, 0)])[{Λ̇_P,0(a, z, x)1(v=v_0) - 𝔼_P[λ_P(X, 0)1(v=v_0)]}
- Λ̇^*_P,0(a, z, x)ψ_at(P)],
where Λ̇_P,0 is the uncentered influence function of the denominator.
A (1-α)-level confidence interval can then be obtained as
ℙ_nΛ̇_P, 01(V=v_0)/ℙ_nΛ̇_P, 0± n^-1/2q_1-α/2(ℙ_nφ̇^*2_P, at)^1/2
Following the identification results given by Lemma <ref>, we define the never-taker profile ψ_nt as
ψ_nt(P_0) := 𝔼_0[1(V=v_0) 𝔼_0(1-A | X, Z=1)]/𝔼_0{𝔼_0(1-A | X, Z=1)} = 𝔼_0[1(V=v_0) {1-λ_0(X, 1)}]/𝔼_0[1-λ_0(X, 1)]
Example 2 of <cit.> provides the influence function of the denominator above and its uncentered analogue, which are given by
Λ̇^*_P,1 := (a, z, x) ↦ (z/π_P(x)){λ_P(x, 1)-a} + 1-λ_P(x, 1) - 𝔼_P[1- λ_P(X, 1)] and
Λ̇_P,1 := (a, z, x) ↦ (z/π_P(x)){λ_P(x, 1)-a} + 1-λ_P(x, 1).
Then, by a derivation analogous to that for the complier profile, the influence function of ψ_nt can be obtained as
φ̇^*_P, nt(a, z, x) := (1/𝔼_P[1-λ_P(X, 1)])[{Λ̇_P,1(a, z, x)1(v=v_0) - 𝔼_P[(1-λ_P(X, 1))1(v=v_0)]}
- Λ̇^*_P,1(a, z, x)ψ_nt(P)],
where Λ̇_P,1 is the uncentered influence function of the denominator.
A (1-α)-level confidence interval can then be obtained as
ℙ_nΛ̇_P, 11(V=v_0)/ℙ_nΛ̇_P, 1± n^-1/2q_1-α/2(ℙ_nφ̇^*2_P, nt)^1/2
§ ADDITIONAL SIMULATION STUDIES
This section provides two additional numerical studies.
§.§ Scenario 2
Under the second scenario, we introduce a relatively minor form of nonlinearity. Specifically, we now use the following specification for the key models:
* π_0(X, 1) = expit(0.4 X_1 - 0.8X_2)
* λ_0^†(X, Z, U) = expit(-0.3 - 0.4X_1 - 0.14 X_2 + 1.1Z + 0.7 U)
* r(X) = - 4 X_1 + 6 {X_2 - 0.3}
* s(X) = 40 - 7X_1 - 8X_2
Under this DGP, all the main models are based on linear functions. The only form of nonlinearity is due to the logistic links in π_0 and λ_0^†. As such, we might expect TSLS and the parametric DRML IV estimator to display considerably less bias due to model misspecification compared to scenario 1. Normally, we might expect parametric methods to outperform nonparametric methods due to the bias-variance tradeoff. Here, we investigate whether the nonparametric IV estimator loses efficiency compared to parametric methods.
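For reference, one way to simulate from this design is sketched below. The distributions of X_1, X_2 and U, and the exact way r(X) and s(X) enter the outcome model, are specified in the main text and not reproduced here, so the choices in this sketch (standard normal covariates and confounder, and Y = s(X) + r(X)·A + 6U + noise) are assumptions made purely for illustration.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_scenario2(n, seed=0):
    """Draw one illustrative data set from the scenario 2 models (outcome construction assumed)."""
    rng = np.random.default_rng(seed)
    X1, X2, U = rng.normal(size=(3, n))                       # assumed covariate/confounder laws
    Z = rng.binomial(1, expit(0.4 * X1 - 0.8 * X2))           # instrument model pi_0
    A = rng.binomial(1, expit(-0.3 - 0.4 * X1 - 0.14 * X2 + 1.1 * Z + 0.7 * U))  # lambda_0^dagger
    r = -4 * X1 + 6 * (X2 - 0.3)                              # r(X)
    s = 40 - 7 * X1 - 8 * X2                                  # s(X)
    Y = s + r * A + 6 * U + rng.normal(size=n)                # assumed outcome equation
    return np.column_stack([X1, X2]), Z, A, Y
```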
Figure <ref> contains the results from the simulation under scenario 2. Under this scenario, we find that both DRML methods provide nearly identical results. There is a small amount of bias for the two smallest sample sizes, and bias shrinks to essentially zero for larger sample sizes. Notably, TSLS is biased, and that bias does not shrink with larger sample sizes. When we review the results for RMSE, we find that there is no difference between the two DRML methods, and only slight differences between the DRML methods and TSLS. These results demonstrate the strength of DRML methods. More typically, we might expect slow rates of convergence for a flexible estimator, but we observe that DRML estimation methods allow for flexible fits with parametric-like efficiency.
§.§ Scenario 3
Under the third scenario, we use functional forms for the models such that the model misspecification falls somewhere between scenarios 1 and 2. Now we include nonlinearity only in the models π_0 and λ_0^†. Note that the form of nonlinearity in these two models is identical to that in scenario 1. As such, the DGP for scenario 3 is as follows:
* π_0(X, 1) = expit(0.4 X_1 - 0.8X_2 + 0.41(X_1 > 0))
* λ_0^†(X, Z, U) = expit(-0.3 - 0.4X_1 - 0.14 X_2
+ 1.1 Z - 0.55 X_1 Z- 0.7 1(X_1 > 0) + 0.7 U)
* r(X) = - 4 X_1 + 6 {X_2 - 0.3}
* s(X) = 40 - 7X_1 - 8X_2
Figure <ref> contains the results for the simulation from scenario 3. The general pattern here mirrors that of scenario 2. That is, both DRML IV estimators display less bias compared to TSLS. In terms of RMSE, all three methods have nearly identical behavior. In general, our simulation study makes a strong case for DRML IV estimation methods. That is, under various forms of model misspecification TSLS is biased by differing amounts. DRML IV methods produce estimates with smaller bias than TSLS in all three scenarios. However, under more extreme forms of model misspecification, the nonparametric DRML estimator performed the best. Critically, despite being based on flexible nonparametric estimators, its performance is never worse in terms of RMSE. As such, this allows analysts to employ flexible estimation methods without sacrificing efficiency even when parametric assumptions hold.
|
http://arxiv.org/abs/2307.04297v1 | 20230710012706 | AT 2023clx: the Faintest and Closest Optical Tidal Disruption Event Discovered in Nearby Star-forming Galaxy NGC 3799 | [
"Jiazheng Zhu",
"Ning Jiang",
"Tinggui Wang",
"Shifeng Huang",
"Zheyu Lin",
"Yibo Wang",
"Jian-Guo Wang"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Jiazheng Zhu (ORCID 0000-0003-3824-9496), Ning Jiang (ORCID 0000-0002-7152-3621), Tinggui Wang (ORCID 0000-0002-1517-6792), Shifeng Huang (ORCID 0000-0001-7689-6382), Zheyu Lin (ORCID 0000-0003-4959-1625), and Yibo Wang (ORCID 0000-0003-4225-5442):
CAS Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, 230026, China; [email protected], [email protected]
School of Astronomy and Space Sciences, University of Science and Technology of China, Hefei, 230026, China
Jian-Guo Wang (ORCID 0000-0003-4156-3793):
Yunnan Observatories, Chinese Academy of Sciences, Kunming 650011, PR China
Key Laboratory for the Structure and Evolution of Celestial Objects, Yunnan Observatories, Kunming 650011, China
We report the discovery of a faint optical tidal disruption event (TDE) in the nearby star-forming galaxy NGC 3799. Identification of the TDE is based on its position at the galaxy nucleus, a light curve declining as t^-5/3, a blue continuum with an almost constant blackbody temperature of ∼12,000 K, and broad (≈15,000 km s^-1) Balmer lines together with the characteristic He II 4686 Å emission. The light curve of AT 2023clx peaked at an absolute magnitude of -17.16 mag in the g band and a maximum blackbody bolometric luminosity of 4.56×10^42 erg s^-1, making it the faintest TDE discovered to date. With a redshift of 0.01107 and a corresponding luminosity distance of 47.8 Mpc, it is also the closest optical TDE ever discovered, to the best of our knowledge. Furthermore, our analysis of Swift/XRT observations of AT 2023clx yields a very tight 3σ upper limit of 9.53×10^39 erg s^-1 in the range 0.3–10 keV. AT 2023clx, together with very few other faint TDEs such as AT 2020wey, proves that there are probably a large number of faint TDEs yet to be discovered at higher redshifts, which is consistent with the prediction of luminosity functions (LFs). The upcoming deeper optical time-domain surveys, such as the Legacy Survey of Space and Time (LSST) and the Wide-Field Survey Telescope (WFST), will discover more TDEs at even lower luminosities, allowing for a more precise constraint on the low end of the LF.
§ INTRODUCTION
A tidal disruption event (TDE) is the phenomenon observed
when a star comes too close to a supermassive black hole (SMBH). The star is tidally disrupted and produces a radiation flare that peaks in the ultraviolet (UV) to soft X-ray band and usually occurs in the core of the galaxy (). Although the first TDE was detected in the X-rays (), optical surveys have gradually come to dominate the discovery of TDEs, especially since the operation of the Zwicky Transient Facility (). Moreover, a growing number of TDEs discovered in the infrared bands suggest that a considerable fraction of TDEs could be obscured by dust; these are missed by optical and X-ray surveys but can be revealed by their dust-reprocessed emission ().
Recently, <cit.> conducted a systematic analysis of demographics of TDEs using a sample of 33 optically selected TDEs from the ZTF survey over three years. They found an average peak of <M_g,peak>=-19.91 mag for the g-band. However, there are a few nearby TDEs that are significantly fainter, such as iPTF16fnl at 70.8 Mpc () and AT 2019qiz at 65.6 Mpc (), which belong to the spectroscopic class of H+He TDEs with Bowen fluorescence lines. <cit.> subsequently reported the faintest TDE in the ZTF sample at 119.7 Mpc, AT 2020wey, and found that these three fast-decaying H+He TDEs lacked any other common properties. The diversity indicates that a large sample is needed to pin down the nature of these faint TDEs.
In reality, faint TDEs could constitute the largest population of all TDEs, and we may be biased toward finding bright nuclear flares due to flux-limited wide-field surveys ().
In this letter, we present the discovery of the faintest and closest optical TDE so far, located in the nearby galaxy NGC 3799 at a redshift of 0.01107, which corresponds to a distance of 47.8 Mpc. This event was initially detected by the All Sky Automated Survey for SuperNovae (ASAS-SN; ), and on February 26, 2023 it was suspected by <cit.> to be a TDE that could still be rising. We describe our follow-up observations and data reduction in Section 2, followed by the analysis of the photometric properties of AT 2023clx and its identification as a robust TDE candidate, combined with its spectral characteristics, in Section 3. Finally, we briefly discuss our results and draw conclusions in Sections 4 and 5. We assume a cosmology with H_0 =70 km s^-1 Mpc^-1, Ω_m = 0.3, and Ω_Λ = 0.7.
§ OBSERVATIONS AND DATA
§.§ Ground-based Optical Photometry
We initiated optical ugri band follow-up observations of AT 2023clx with the 1.0m Las Cumbres Observatory Global Telescope network (LCOGT; ) immediately after the event was reported on the Transient Name Server on 2023-02-26 UT (). We used PanSTARRS () gri band stack images as reference images and employed HOTPANTS <cit.> for image subtraction. Prior to subtraction, we removed cosmic rays and aligned the images using Astrometry.net. After image subtraction, we performed point spread function (PSF) photometry on the difference image using the Photutils package of Astropy <cit.> for the gri data. For the u band, we performed aperture photometry with a 5″ aperture and then subtracted the host galaxy magnitude (16.69±0.02 mag) with the same 5″ aperture measured in the rescaled Sloan Digital Sky Survey (SDSS; ) u-band image.
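For the u band, the host-subtracted aperture photometry amounts to summing counts within a fixed circular aperture on the background-subtracted frame; a minimal illustration with photutils is shown below, where the source position, the conversion of the 5″ radius to pixels, and the zero point are placeholders rather than the values used in our reduction.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def aperture_mag(image, x, y, radius_pix, zeropoint):
    """AB magnitude from a circular-aperture sum on a background-subtracted image."""
    aperture = CircularAperture([(x, y)], r=radius_pix)
    flux = aperture_photometry(image, aperture)["aperture_sum"][0]
    return zeropoint - 2.5 * np.log10(flux)

# e.g., a 5" radius at a ~0.39"/pixel plate scale corresponds to radius_pix ~ 13 (illustrative)
```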
We measured the position of AT 2023clx in the difference image of the LCO r-band observation obtained on 2023-02-26 UT, and the centroid of its host galaxy in the PanSTARRS r-band reference image, by measuring the barycenters with SExtractor. The offset between them was measured to be 0.21±0.17 arcsec, taking into account the uncertainty in the image alignment. This corresponds to a physical offset of 49±40 pc at the distance of NGC 3799. Therefore, we conclude that AT 2023clx is consistent with the center of the galaxy at the resolution of the LCO images, making it a potential TDE candidate.
We were intrigued by this event because we believed that it was still in the rising stage, given that its luminosity was 2 magnitudes fainter than the average peak luminosity of optical TDEs <cit.>. However, our initial two observations indicated that it was already in decline, suggesting that it might be a rarely-discovered faint TDE. In fact, its peak g-band absolute magnitude (M_g= -17.16) is among the faintest of the TDEs discovered thus far, comparable to that of iPTF16fnl (; M_g= -17.20).
§.§ Swift/UVOT photometry
UV images were obtained with the Neil Gehrels Swift Observatory (hereafter Swift) with the Ultra-Violet/Optical Telescope (UVOT). The Swift photometry (PIs: Gomez, Huang, Leloudas, and Wevers) was measured with the UVOTSOURCE task in the HEASoft package using 5″ apertures, after subtracting the galaxy background measured from Swift/UVOT images taken on July 18, 2010.
It was placed in the AB magnitude system <cit.>, adopting the revised zero points and the sensitivity of <cit.>.
§.§ Swift/XRT photometry
The X-ray Telescope (XRT) photometry was performed using XRTPIPELINE and XRTPRODUCTS. We used a circle with a radius of 47.1″ as the source region and an annulus with an inner radius of 100″ and an outer radius of 200″ as the background. The source was X-ray faint in the observations, and we assumed an absorbed power-law spectrum with an index of Γ=1.75 <cit.> and a Galactic hydrogen column density of 2.51× 10^20 cm^-2 <cit.>. We then derived the 3σ upper limit for the flux in the 0.3–10.0 keV range using WebPIMMS[<https://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3pimms/w3pimms.pl>]. In addition, we stacked the 26 event files ranging from MJD 60002.74 to 60089.23, with a total exposure time of 35.79 ks, and derived a 3σ upper limit on the mean luminosity of 9.53× 10^39 erg s^-1.
§.§ Archival Photometry Data
We also collected host-subtracted light curves of AT 2023clx from public time domain surveys, including data from the Asteroid Terrestrial Impact Last Alert System (ATLAS; ), the Zwicky Transient Facility (ZTF; ) and the All Sky Automated Survey for SuperNovae (ASAS-SN, ).
The ATLAS c- and o-band light curves were obtained using the ATLAS Forced Photometry Service, which produces PSF photometry on the difference images. ATLAS has 3-4 single exposures within each epoch (typically within one day), so we binned the light curve by epoch to improve the signal-to-noise ratio (SNR). The ZTF light curves were obtained using the Lasair alert broker[Website link: https://lasair-ztf.lsst.ac.uk/] (). The ASAS-SN host-subtracted g-band light curves were obtained using the ASAS-SN Sky Patrol photometry pipeline and were likewise binned. Considering that the g-band depth of ASAS-SN is roughly 18.5 mag (), we only use photometry brighter than 18 mag, because the fainter data have larger errors than our LCO and ZTF photometry.
All light curves, after correction for Galactic extinction, are shown in Figure <ref>. We assumed a <cit.> extinction law with R_V=3.1 and a Galactic extinction of E(B-V)=0.0268±0.0003 mag ().
For a better comparison of the peak g-band luminosity between AT 2023clx and the ZTF TDE sample, we fit the light-curve profile of AT 2023clx following the method of <cit.> (see Section 3.2 for details). The rise was poorly constrained with either a Gaussian or a power-law function because only three points were detected by ASAS-SN; hence, we also included the upper limits to constrain the rise. As shown in Figure <ref>, AT 2023clx can roughly be described by a power-law rise (index n = 0.64) and a (t-t_0)^-5/3 power-law decline (t_0 = MJD 59977±2 d) with a peak g-band luminosity of log L_g = 42.31, which is the faintest TDE discovered by the ZTF survey to date compared to <cit.>.
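Schematically, the adopted light-curve model is a power-law rise to peak joined to a (t − t_0)^-5/3 decline; a minimal fitting sketch is given below, with the handling of upper limits and the smoothing around peak (which follow <cit.>) omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def tde_lightcurve(t, L_peak, t_peak, t0, n):
    """Power-law rise before peak, (t - t0)^(-5/3) decline after peak."""
    x = np.clip(t - t0, 1e-3, None) / (t_peak - t0)
    return np.where(t < t_peak, L_peak * x**n, L_peak * x**(-5.0 / 3.0))

# popt, pcov = curve_fit(tde_lightcurve, mjd, L_g, sigma=L_g_err,
#                        p0=[2e42, 59995.0, 59977.0, 0.6])   # initial guesses are illustrative
```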
§.§ Optical Spectra Observation and Data Reduction
Five spectra were acquired for AT 2023clx, including one from SDSS DR7, one observed by <cit.> with the SEIMEI telescope[
We collected this spectrum from TNS website: https://www.wis-tns.org/object/2023clx] on 2023-02-26 UT, one observed by the ZTF group with the Keck telescope[
The ZTF group uploaded this spectrum to another transient by mistake: https://www.wis-tns.org/object/2018meh] on 2023-03-20 UT, and two obtained by ourselves with YFOSC onboard the Lijiang 2.4m telescope () on 2023-03-03 UT and 2023-03-13 UT.
We reduced the LJT spectra according to the standard procedure for long-slit spectra with PyRAF, a Python-based package of IRAF(). All spectra are shown in Figure <ref>.
§ ANALYSIS AND RESULTS
§.§ Photometric and Spectral Analysis
First, we use the package SUPERBOL ()
to fit the spectral energy distribution (SED) of AT 2023clx with a blackbody model. The observing cadence after MJD 60040 in the r, i, and u bands is quite sparse, so we use third-order polynomials to interpolate only the g, c, and o band light curves in the late phase to match the Swift epochs. The evolution of the blackbody luminosity, temperature, and effective radius is shown in Figure <ref>, compared with other faint TDEs.
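In essence, the blackbody fit at each epoch matches a Planck spectrum, scaled by the photospheric radius and the luminosity distance, to the dereddened UV/optical flux densities; a stripped-down stand-in for what SUPERBOL does internally might look like the following (band effective wavelengths, fluxes, and the distance are inputs).

```python
import numpy as np
from scipy.optimize import curve_fit

H, K_B, C, SIGMA_SB = 6.626e-27, 1.381e-16, 2.998e10, 5.670e-5   # cgs constants

def blackbody_fnu(wave_aa, T, R, d_cm):
    """Observed F_nu (erg/s/cm^2/Hz) of a blackbody of temperature T (K) and radius R (cm)."""
    nu = C / (wave_aa * 1e-8)
    bnu = (2 * H * nu**3 / C**2) / (np.exp(H * nu / (K_B * T)) - 1.0)
    return np.pi * bnu * (R / d_cm) ** 2

def fit_blackbody(waves_aa, fnu, fnu_err, d_cm):
    popt, _ = curve_fit(lambda w, T, R: blackbody_fnu(w, T, R, d_cm),
                        waves_aa, fnu, sigma=fnu_err, p0=[1.2e4, 5e14])
    T, R = popt
    return T, R, 4 * np.pi * R**2 * SIGMA_SB * T**4   # T_BB, R_BB, L_BB
```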
The optical/UV light curves of TDEs usually show a monthly rise followed by a decay with a timescale of months to years, sometimes following a t^-5/3 power-law decline (). AT 2023clx exhibited a similar behavior, consistent with a t^-5/3 power-law decline in both the g-band and the bolometric light curves (see Figures <ref> and <ref>). The blackbody temperature was approximately 11,000–13,000 K, decreasing slowly around the peak before slowly returning to ∼13,000 K. The blackbody radius R_BB was approximately 4.61 × 10^14 cm at the peak, followed by a decline. It is worth noting that AT 2023clx has the lowest temperature among these faint TDEs, albeit with a medium blackbody radius. As a result, the maximum blackbody luminosity of AT 2023clx is only (4.56±0.53) × 10^42 erg s^-1, making it even fainter than previous low-luminosity TDEs (e.g., iPTF16fnl; , AT 2019qiz; , AT 2020wey; ). Furthermore, these faint TDEs (e.g., AT 2020wey; ) all exhibit fast-declining light curves that are much steeper than the canonical t^-5/3. In the case of AT 2023clx, the decline appears to be less extreme than that of this subclass of TDEs. However, the limited sample at present cannot reveal their true statistical characteristics.
We then compared the optical spectra of these low-luminosity TDEs in Figure <ref>. Clearly, all these spectra near the peak display a blue continuum with broad H_α and H_β emission, as well as the typical TDE emission line He II 4686 Å. The broad H_α is blended with He I 6678 Å, and a broad He I 5876 Å line is also evident in the +26 d Keck spectrum. The full width at half maximum (FWHM) of the broad Balmer lines is ≈15,000 km s^-1. The H_α profile is asymmetric, with a blueshifted peak at +4.3 d and a redshifted peak at +26 d (see Figure <ref>). A sharp peak on the blue side coincides in wavelength with [O I] 6300 Å, which would indicate the presence of low-ionization, low-density, slow-moving gas.
The red side may include some contribution from He I 6678 Å, but this is likely not a major contributor, as we see only a very weak He I 5876 Å, similar to AT 2019qiz (). <cit.> eventually attributed the evolution of such H_α profiles of AT 2019qiz to an outflow.
In addition, given the asymmetric broad H_α profile, N III 4640 Å could also be blended with He II and H_β. Furthermore, most of the spectra in Figure <ref> do not cover the range of N III 4100 Å, and it is hard to confirm whether the weak feature at +26 d is real. Thus, N III 4100 Å cannot be ruled out either.
However, the lack of low-ionization metal features (e.g., oxygen and calcium) in the spectra near the peak almost excludes the typical Type-II supernova scenario (). Type-IIn supernovae also show strong broad Balmer lines, sometimes with high-ionization lines of intermediate width that can be regarded as a result of CSM interaction (e.g., SN 2005ip, SN 2006jd and SN 2010jl; ). If the CSM interaction of a Type-IIn is relatively weak compared to the above three, Fe II and Ca II P-Cygni lines appear again in its spectra (). Furthermore, the optical light curves of Type-IIn supernovae are relatively long-lived compared to AT 2023clx, and normal supernovae do not stay at a temperature as high as 13,000 K for months.
Therefore, AT 2023clx is consistent with a faint TDE scenario in terms of both its light curves and its spectroscopic properties. Based on this, we classify AT 2023clx as a robust faint hydrogen and helium (H+He) optical TDE.
§.§ Host-galaxy Properties
NGC 3799, the host of AT 2023clx, is a well-resolved face-on spiral galaxy (see top panels of Figure <ref>). The pre-outburst SDSS spectra centered on the galaxy nucleus place it in the locus of low-ionization nuclear emission-line regions (LINERs) in the Baldwin-Phillips-Terlevich (BPT) diagram () based on its narrow-line ratios, indicating weak active galactic nucleus (AGN) activity, albeit without broad emission lines.
<cit.> performed a spectral energy distribution (SED) analysis of a sample of 189 nearby galaxies, including NGC 3799. Their fitting considered the AGN emission and obtained a stellar mass of log(M_*/M_⊙) = 9.87±0.02 and a star formation rate (SFR) of logSFR = -0.09±0.02 for NGC 3799. Consequently, it is located on the main sequence of star-forming (SF) galaxies (; see Figure <ref>). AT 2023clx is thus a TDE in a typical SF galaxy, whereas known optical TDEs show a preference for post-starburst (or green valley) galaxies (). Additionally, the fractional AGN contribution of NGC 3799 is indeed small (f_AGN ≲ 0.2) according to their fitting. We emphasize that the overall star formation activity does not conflict with its LINER classification in the BPT diagram since the central regions of SF galaxies are commonly found to be quenched (). Using the empirical relation between M_BH and the total galaxy stellar mass in the local universe ():
log(M_BH/M_⊙)=α + β log(M_stellar/10^11M_⊙)
The central black hole mass in NGC 3799 is estimated to be 10^6.26±0.28 M_⊙, using α=7.45±0.08 and β=1.05±0.11.
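As a quick arithmetic check, plugging the fitted stellar mass into the quoted relation reproduces this number (uncertainties are not propagated in this snippet).

```python
alpha, beta = 7.45, 1.05
log_mstar = 9.87                           # log10(M_*/M_sun) for NGC 3799
log_mbh = alpha + beta * (log_mstar - 11.0)
print(round(log_mbh, 2))                   # -> 6.26, i.e. M_BH ~ 10^6.26 M_sun
```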
§ DISCUSSION
Although only a handful of TDEs have been discovered to be faint (M_g>-18), the luminosity functions (LFs) of optical TDEs suggest that their real number should be large, as all of these LFs show a rising trend toward the low end (). However, an accurate LF profile requires more precise constraints, particularly when discovering more faint TDEs.
As one of the faintest TDEs discovered by the ZTF survey, AT 2023clx could provide the first data point at L_g ≲ 10^42.4 erg s^-1. We also used the “1/𝒱_max” method described in <cit.> to estimate the volumetric rate and considered the latest TDE selection criteria from the ZTF survey (). 𝒱_max is defined as:
𝒱_max ≡ V(z_max) A_survey × τ_survey
For the survey duration (τ_survey), we set a starting date of 2018-10-01 UT and an end date of 2023-05-01 UT. Consistent with <cit.>, the effective survey area is set to A_survey ≈ 15000 deg^2. We binned the two faintest ZTF TDEs (AT 2020wey and AT 2023clx) to estimate the volumetric rate at the faint end (L_g ≲ 10^42.5 erg s^-1) and compared it with known optical TDE LFs (see Figure <ref>). We found that our result is about a factor of two higher than that measured by <cit.>, which is easily understood since they had only AT 2020wey when they constructed the LF. The data, although with a large uncertainty, seem to be well consistent with both the double power-law fit measured by <cit.> and the single power-law fit measured by <cit.>. However, the volumetric rate at the low end is much lower than that of <cit.>. The general overestimation in <cit.> is probably due to sources that only have post-peak light curves in their sample ().
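The rate computation itself reduces to summing 1/𝒱_max over the events in the luminosity bin, with the comoving volume evaluated at the maximum redshift out to which each event would still pass the survey selection; a schematic version, with the z_max values treated as inputs and the survey duration taken from the dates above, is shown below.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
FULL_SKY_DEG2 = 41252.96

def volumetric_rate(z_max_list, area_deg2=15000.0, tau_yr=4.58):
    """Sum of 1/V_max (Mpc^-3 yr^-1) for the events in one luminosity bin."""
    frac = area_deg2 / FULL_SKY_DEG2                    # effective sky fraction
    v = np.array([cosmo.comoving_volume(z).value for z in z_max_list])  # Mpc^3, all sky
    return np.sum(1.0 / (v * frac * tau_yr))
```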
Furthermore, we used the double power-law TDE luminosity function measured by <cit.>, which is consistent with our result, to estimate the proportion of faint TDEs. Events that are fainter than or as faint as AT 2019qiz constitute ∼74% of the currently observed g-band peak luminosity range (log L_g ∼ 42.3–44.7). This result is greater than the estimate obtained by <cit.> (roughly 50–60%). Again, we conclude that faint TDEs are not rare by nature, but constitute a large portion of the entire population, as mentioned by <cit.>.
AT 2023clx suggests that the contribution of the faint end may be even greater than previously thought.
It is worth noting that the distance used in this work is a simple cosmological inference from the redshift. Precise distance measurements for nearby galaxies are challenging, and thus previous works on faint nearby TDEs all adopted the same approach as we do (e.g., 66.6 Mpc for iPTF16fnl; , 65.6 Mpc for AT 2019qiz; ).
However, at these short distances, peculiar velocities and redshift-independent distance estimates are not negligible, which might significantly affect the luminosity estimate of AT 2023clx. <cit.> performed a flow-field correction on the nearby secondary distance indicators, including three attractors: the Local Supercluster, the Great Attractor, and the Shapley Supercluster. The recessional velocity of NGC 3799 corrected in this way is 3823±31 km/s and the corrected distance is 54.6±4.6 Mpc, resulting in a peak g-band magnitude of -17.45±0.12 mag and a peak blackbody luminosity of L_bb=(5.95±1.23) × 10^42 erg s^-1 for AT 2023clx. Although the luminosity has increased by ∼30%, it remains lower than those of iPTF16fnl (1.0±0.15× 10^43 erg s^-1; ) and AT 2020wey (8.74±0.69× 10^42 erg s^-1; ).
We carefully examined other distance estimates for NGC 3799 from the NASA/IPAC Extragalactic Database (NED), such as a 3K cosmic microwave background (CMB) corrected distance of 52.3±3.8 Mpc and a Galactocentric (GSR) distance of 46.4±3.4 Mpc. They are all smaller than the above value of 54.6±4.6 Mpc and thus lead to a smaller luminosity correction. Based on this, we confidently conclude that AT 2023clx is the faintest optical TDE observed to date.
§ CONCLUSION
In this work, we report the discovery of a new faint TDE in NGC 3799, a main-sequence star-forming galaxy located at a distance of only ∼50 Mpc. It holds the lowest peak blackbody luminosity and the closest distance among all optical TDEs discovered up to now. The main properties of AT 2023clx discovered by us are summarized below:
∙ The peak blackbody luminosity L_bb=(4.56±0.53) × 10^42 erg s^-1 is lower than that of all other low-luminosity TDEs, although its absolute magnitude in the g band (M_g=-17.16) is comparable to that of iPTF16fnl. It is also the closest optical TDE, with a redshift of 0.01107 or a luminosity distance of 47.8 Mpc.
∙ Both the optical/UV light curves and the bolometric light curve show a t^-5/3 power-law decay after the peak, which is not as fast as that of other faint TDEs discovered before.
∙ AT 2023clx was not detected in X-rays by Swift/XRT in any single observation or even in the stacked image. This yields a very tight 3σ upper limit of 9.53×10^39 erg s^-1 in the range of 0.3–10 keV.
∙ The spectra taken around the optical peak show a strong blue continuum and broad Balmer lines blended with helium features (FWHM ≈15,000 km s^-1), which is reminiscent of other faint TDEs.
AT 2023clx is the second optical TDE with a peak g-band luminosity of L_g ≲ 10^42.5 erg s^-1 in the ZTF survey. The addition of AT 2023clx increases the volumetric rate at the extreme faint end by a factor of two compared to that given by <cit.>. This finding further demonstrates that the luminosity function (LF) is continuously rising towards the low end, and there are likely many more faint TDEs waiting to be discovered. The rarity of reported faint TDEs is simply due to selection bias, as we are biased towards finding bright events in flux-limited surveys, as mentioned by <cit.>. The upcoming deeper surveys, such as the Legacy Survey of Space and Time (LSST; ) and the Wide-Field Survey Telescope (WFST; ), will undoubtedly find more faint TDEs, which will help us measure their rate and constrain the lower end of the TDE LF more precisely.
We thank the anonymous referee for an extremely quick response and for providing valuable comments, which help to improve the manuscript. This work is supported by the SKA Fast Radio Burst and High-Energy Transients Project (2022SKA0130102), the National Natural Science Foundation of China (grants 11833007, 12073025, 12192221), and the 111 Project for "Observational and Theoretical Research on Dark Matter and Dark Energy" (B23042). We acknowledge the support of the Cyrus Chun Ying Tang Foundations. This research uses data obtained
through the Telescope Access Program (TAP), which has been
funded by the TAP member institutes. The authors acknowledge the support of the Lijiang 2.4m telescope staff. Funding for the telescope has been provided by the Chinese Academy of Sciences and the People's Government of Yunnan Province. The ZTF forced photometry service was funded under the Heising-Simons Foundation grant #12540303 (PI: Graham).
[Arcavi et al.(2014)]Arcavi2014 Arcavi, I., Gal-Yam, A., Sullivan, M., et al. 2014, , 793, 38. doi:10.1088/0004-637X/793/1/38
[Astropy Collaboration et al.(2022)]Astropy Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, , 935, 167
[Bade et al.(1996)]Bade1996 Bade, N., Komossa, S., & Dahlem, M. 1996, , 309, L35
[Baldwin et al.(1981)]Baldwin1981 Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, , 93, 5
[Bellm et al.(2019)]Bellm2019 Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, , 131, 018002
[Blagorodnova et al.(2017)]Blagorodnova2017 Blagorodnova, N., Gezari, S., Hung, T., et al. 2017, , 844, 46
[Becker(2015)]Becker2015 Becker, A. 2015, Astrophysics Source Code Library. ascl:1504.004
[Breeveld et al.(2011)]Breeveld2011 Breeveld, A. A., Landsman, W., Holland, S. T., et al. 2011, Gamma Ray Bursts 2010, 1358, 373
[Brown et al.(2013)]LCOGT Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, , 125, 1031
[Brown et al.(2018)]Brown2018 Brown, J. S., Kochanek, C. S., Holoien, T. W.-S., et al. 2018, , 473, 1130
[Cardelli et al.(1989)]Cardelli1989 Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245
[Chang et al.(2015)]Chang2015 Chang, Y.-Y., van der Wel, A., da Cunha, E., et al. 2015, , 219, 8
[Charalampopoulos et al.(2023)]Charalampopoulos2023 Charalampopoulos, P., Pursiainen, M., Leloudas, G., et al. 2023, , 673, A95
[de Jaeger et al.(2022)]deJ2022 de Jaeger, T., Shappee, B. J., Kochanek, C. S., et al. 2022, , 509, 3427. doi:10.1093/mnras/stab3141
[Ellison et al.(2018)]Ellison2018 Ellison, S. L., Sánchez, S. F., Ibarra-Medel, H., et al. 2018, , 474, 2039
[Fan et al.(2015)]Fan2015-2m4 Fan, Y.-F., Bai, J.-M., Zhang, J.-J., et al. 2015, Research in Astronomy and Astrophysics, 15, 918
[Filippenko (1997)]Filippenko97 Filippenko, A. V. 1997, ARA&A, 35, 309
[Flewelling et al.(2020)]PS1 Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2020, , 251, 7
[Fransson et al.(2014)]SN2010jl Fransson, C., Ergon, M., Challis, P. J., et al. 2014, , 797, 118
[French, Arcavi & Zabludoff (2016)]French2016 French, K. D., Arcavi, I., & Zabludoff, A. 2016, , 818, L21. doi:10.3847/2041-8205/818/1/L21
[French et al.(2020)]French2020 French, K. D., Wevers, T., Law-Smith, J., et al. 2020, , 216, 32. doi:10.1007/s11214-020-00657-y
[Gezari(2021)]Gezari2021 Gezari, S. 2021, , 59, 21
[Gunn et al.(2006)]Gunn2006 Gunn, J. E., Siegmund, W. A., Mannery, E. J., et al. 2006, , 131, 2332
[Hammerstein et al.(2021)]Hammerstein2021 Hammerstein, E., Gezari, S., van Velzen, S., et al. 2021, , 908, L20
[HI4PI Collaboration et al.(2016)]HI4PI2016 HI4PI Collaboration, Ben Bekhti, N., Flöer, L., et al. 2016, , 594, A116
[Ivezić et al.(2019)]Ivezic2019 Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2019, , 873, 111
[Jiang et al.(2021)]Jiang2021 Jiang, N., Wang, T., Dou, L., et al. 2021, , 252, 32
[Lin et al.(2022a)]Lin2022a Lin, Z., Jiang, N., & Kong, X. 2022a, , 513, 2422
[Lin et al.(2022b)]Lin2022b Lin, Z., Jiang, N., Kong, X., et al. 2022b, , 939, L33
[Kochanek et al.(2017)]Kochanek2017 Kochanek, C. S., Shappee, B. J., Stanek, K. Z., et al. 2017, , 129, 104502
[Masci et al.(2019)]Masci2019 Masci, F. J., Laher, R. R., Rusholme, B., et al. 2019, , 131, 018003
[Mattila et al.(2018)]Mattila2018 Mattila, S., Pérez-Torres, M., Efstathiou, A., et al. 2018, Science, 361, 482
[Mould et al.(2000)]Mould2000 Mould, J. R., Huchra, J. P., Freedman, W. L., et al. 2000, , 529, 786. doi:10.1086/308304
[Nicholl(2018)]Nicholl2018 Nicholl, M. 2018, Research Notes of the American Astronomical Society, 2, 230
[Nicholl et al.(2020)]Nicholl2020 Nicholl, M., Wevers, T., Oates, S. R., et al. 2020, , 499, 482
[Oke & Gunn (1983)]OkeGunn83 Oke, J. B., & Gunn, J. E. 1983, ApJ, 266, 713
[Onori et al.(2019)]Onori2019 Onori, F., Cannizzaro, G., Jonker, P. G., et al. 2019, , 489, 1463
[Ramos Padilla et al.(2020)]Ramos2020 Ramos Padilla, A. F., Ashby, M. L. N., Smith, H. A., et al. 2020, , 499, 4325
[Rees(1988)]Rees1988 Rees, M. J. 1988, , 333, 523
[Reines & Volonteri(2015)]Reines2015 Reines, A. E. & Volonteri, M. 2015, , 813, 82
[Ricci et al.(2017)]ricci2017 Ricci, C., Trakhtenbrot, B., Koss, M. J., et al. 2017, , 233, 17
[Schlafly & Finkbeiner(2011)]Schlafly2011 Schlafly, E. F. & Finkbeiner, D. P. 2011, , 737, 103
[Shappee et al.(2014)]Shappee2014 Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, , 788, 48
[Smith et al.(2019)]Smith2019 Smith, K. W., Williams, R. D., Young, D. R., et al. 2019, Research Notes of the American Astronomical Society, 3, 26
[Smith et al.(2020)]Smith2020 Smith, K. W., Smartt, S. J., Young, D. R., et al. 2020, , 132, 085002
[Stritzinger et al.(2012)]SN2005ip SN2006jd Stritzinger, M., Taddia, F., Fransson, C., et al. 2012, , 756, 173
[Tacchella et al.(2015)]Tacchella2015 Tacchella, S., Carollo, C. M., Renzini, A., et al. 2015, Science, 348, 314
[Taguchi et al.(2023)]Taguchi2023 Taguchi, K., Uno, K., Nagao, T., et al. 2023, Transient Name Server Classification Report, 2023-438
[Taddia et al.(2013)]Taddia2013 Taddia, F., Stritzinger, M. D., Sollerman, J., et al. 2013, , 555, A10
[Tody(1986)]Tody1986 Tody, D. 1986, , 627, 733
[Tody(1993)]Tody1993 Tody, D. 1993, Astronomical Data Analysis Software and Systems II, 52, 173
[Tonry et al.(2018)]Tonry2018 Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, , 130, 064505
[Veilleux & Osterbrock(1987)]Veilleux1987 Veilleux, S. & Osterbrock, D. E. 1987, , 63, 295
[van Velzen(2018)]Velzen2018 van Velzen, S. 2018, , 852, 72
[van Velzen et al.(2021)]Velzen2021 van Velzen, S., Gezari, S., Hammerstein, E., et al. 2021, , 908, 4
[Wang et al.(2019)]Wang2019-2m4 Wang, C.-J., Bai, J.-M., Fan, Y.-F., et al. 2019, Research in Astronomy and Astrophysics, 19, 149
[WFST Collaboration et al.(2023)]wfst2023 WFST Collaboration, Wang, T., Liu, G., et al. 2023, arXiv:2306.07590
[Yao et al.(2023)]Yao2023 Yao, Y., Ravi, V., Gezari, S., et al. 2023, arXiv:2303.06523
|
http://arxiv.org/abs/2307.07391v1 | 20230714150007 | On the irrationality of moduli spaces of projective hyperkähler manifolds | [
"Daniele Agostini",
"Ignacio Barros",
"Kuan-Wen Lai"
] | math.AG | [
"math.AG"
] |
On the irrationality of moduli spaces of projective hyperkähler manifolds
Daniele AgostiniIgnacio BarrosKuan-Wen Lai
=========================================================================
The aim of this paper is to estimate the irrationality of moduli spaces of hyperkähler manifolds of types K3^[n], Kum_n, OG6, and OG10. We prove that the degrees of irrationality of these moduli spaces are bounded from above by a universal polynomial in the dimension and degree of the manifolds they parametrize. We also give a polynomial bound for the degrees of irrationality of moduli spaces of (1,d)-polarized abelian surfaces.
§ INTRODUCTION
Perhaps the coarsest invariant measuring birational complexity is the Kodaira dimension, and the computation of this invariant in the moduli context has been a guiding question in the past decades. A much finer, but also harder to compute, collection of invariants measuring birational complexity goes by the name of measures of irrationality. One of them, called the degree of irrationality, is defined for a variety X as the minimal possible degree of a dominant rational map
X ⇢ ℙ^dim(X).
This invariant, denoted by irr(X), was first introduced in <cit.> and received revived attention after <cit.>. Notice that irr(X)=1 if and only if X is rational. In this sense, irr(X) measures how far X is from being rational. Deciding whether a variety X is rational is a famously hard problem in algebraic geometry, which suggests that a first approach to studying irr(X) is to find bounds.
In the moduli context, Donagi proposed to find bounds on measures of irrationality for classical moduli spaces such as those of curves ℳ_g and principally polarized abelian varieties 𝒜_g; see <cit.>*Problem 4.4. These spaces are of general type when g is large enough Tai82,HM82,Mum83, so their degrees of irrationality are at least 2 for large g. To the best of our knowledge, there is no known upper bound on their degrees of irrationality.
In this paper, we continue our study on the irrationality of various modular varieties initiated in <cit.>. Our main objects of study are moduli spaces of (1,d)-polarized abelian surfaces 𝒜_(1,d) and moduli spaces of projective hyperkähler manifolds ℳ_Λ,2d^γ of known deformation types. Similar to moduli spaces of curves and principally polarized abelian varieties, components of 𝒜_(1,d) and ℳ_Λ,2d^γ become of general type when certain invariants grow; see OGr89,GH96,San97,Erd04 for the former and GHS07,GHS10,GHS11,Ma18,BBBF23 for the latter. Our first main result gives bounds for degrees of irrationality of 𝒜_(1,d).
For any ε>0 there exists a constant C_ε>0 independent of d such that
irr(𝒜_(1,d))≤ C_ε· d^4+ε.
Fix a,b,c∈ℤ which satisfy 4ac-b^2 > 0. Suppose that d is squarefree and of the form d=aX^2-bXY+cY^2 with (X,Y) = 1. Then for any ε>0, there exists a constant C_ε = C_ε(a,b,c) > 0 independent of d such that
irr(𝒜_(1,d))
≤ C_ε· d^2+ε.
Our second main result concerns moduli spaces of projective hyperkähler manifolds. The first examples of such manifolds, introduced in <cit.>, are Hilbert schemes of points on K3 surfaces and generalized Kummer varieties. A hyperkähler manifold X is said to be of K3^[n]-type or Kum_n-type if it is deformation equivalent respectively to the former or the latter. In addition, there are two sporadic examples in dimensions 10 and 6 constructed in OGr99, OGr03. In this case, X is of OG10 and OG6-type respectively. Up to now, every known projective hyperkähler manifold is one of these four types.
For a projective hyperkähler manifold X, its second cohomology group H^2(X,ℤ) carries a bilinear form (·,·) which turns it into a lattice known as the Beauville–Bogomolov–Fujiki lattice. If the manifold X is of one of the known deformation types, then this lattice is isomorphic to one of the lattices Λ_K3^[n], Λ_Kum_n, Λ_OG10, Λ_OG6. If Λ is one of these lattices, we denote by ℳ_Λ,2d^γ the moduli space of pairs (X,H), where X is a projective hyperkähler manifold of dimension 2n with H^2(X,)≅Λ, and H is a primitive polarization on X of degree (c_1(H), c_1(H)) = 2d and divisibility γ (recall that the divisibility of x∈Λ is the positive generator of the ideal (x,Λ)⊂ℤ). The dimension 2n can be read off Λ. The existence of such moduli spaces follows from <cit.> and, due to the Torelli theorem <cit.> (see also <cit.>), their irreducible components are birational to orthogonal modular varieties. This lays the foundation for our second main result about their degrees of irrationality.
There exists a constant C>0 such that, for every irreducible component Y⊂ℳ_Λ,2d^γ, it holds that
irr(Y)≤ C· (n· d)^19.
If we consider only Kum_n-type hyperkähler manifolds, then this bound can be refined as
irr(Y)≤ C·(n· d)^11.
Furthermore, for any ε>0, there exists a constant C_ε such that for every irreducible component Y⊂ℳ_Λ,2d^γ, it holds that
* irr(Y) ≤ C_ε· d^14+ε if we consider only OG10-type hyperkähler manifolds,
* irr(Y) ≤ C_ε· d^6+ε if we consider only OG6-type hyperkähler manifolds.
For hyperkähler manifolds of types K3^[n] or Kum_n, there are special series of n and d for which one can considerably improve the bound; see Theorems <ref> and <ref>.
Our approach is the same as in <cit.> but with several additional challenges. First of all, the Torelli theorem states that every irreducible component Y⊂ℳ_Λ,2d^γ admits an open embedding into an orthogonal modular variety associated with an even lattice M of signature (2,m). Hence the degree of irrationality of Y coincides with that of Ω(M)/Γ, where Ω(M) is the period domain of M and Γ⊂^+(M) is an arithmetic group.
Here M and Γ depend on the deformation type, the degree, the divisibility, as well as on the irreducible component Y. In each setting, this gives a collection of lattices Λ_n,d of signature (2,m) and arithmetic groups Γ_n,d⊂^+(Λ_n,d).
Assume that there exists an even lattice Λ_# of signature (2,m') independent of n,d and embeddings Λ_n,d↪Λ_# for all n,d. Assume further that each Γ_n,d is extendable, that is, each g∈Γ_n,d can be extended to an isometry g_#∈ O^+(Λ_#) preserving Λ_n,d whose restriction to Λ_n,d recovers g. These assumptions induce morphisms of quasiprojective varieties
f_n,d: 𝒫_n,d := Ω(Λ_n,d)/Γ_n,d ⟶ Ω(Λ_#)/O^+(Λ_#) =: 𝒫^+_Λ_#
of finite degree onto their images Z_n,d⊆𝒫^+_Λ_#.
In particular, there is the immediate inequality
irr(𝒫_n,d) ≤ deg(f_n,d)·irr(Z_n,d).
The cycles [Z_n,d]
are examples of special cycles (also referred to as Kudla cycles). They can be arranged in a generating series which turns out to be the Fourier expansion of a modular form. More precisely, if we fix an embedding
𝒫^+_Λ_# ↪ ℙ^N
and let Z̄_n,d be the closure of Z_n,d in ℙ^N, then Kudla's modularity conjecture Kud97, Kud04, proved in Kud04,Zha09,BW15, implies that the integers deg(Z̄_n,d) are coefficients of the Fourier expansion of a Siegel modular form of weight rk(Λ_#)/2. By projecting onto a general linear subspace ℙ^dim(Z_n,d)⊂ℙ^N, one can conclude that irr(Z_n,d) ≤ deg(Z̄_n,d), and this can be estimated via
standard bounds on the growth of coefficients of Siegel modular forms.
There are two main challenges: the first one is to find an appropriate Λ_# together with embeddings Λ_n,d↪Λ_# such that the Γ_n,d are extendable. The second one is to bound (f_n,d). The general strategy was developed in <cit.> and the bulk of the paper is devoted to overcoming these two challenges.
§.§ Outline
This paper is organized as follows. In Section <ref>, we set up notations and give bounds on degrees of irrationality for special cycles. Then we establish bounds on degrees of maps onto their images between orthogonal modular varieties induced by lattice embeddings. In Section <ref>, we apply the results in the previous section to study the irrationality of moduli spaces of projective hyperkähler varieties of known deformation types. Finally in Section <ref>, we study the irrationality of the moduli space of (1,d)-polarized abelian surfaces and also revisit the case of K3 surfaces.
§.§ Acknowledgements
The authors would like to thank Pietro Beri, Emma Brakkee, Simon Brandhorst, Nathan Chen, Laure Flapan, Sam Grushevsky, and Klaus Hulek for helpful comments and stimulating conversations. We would like to thank Giacomo Mezzedimi for his idea in the case of hyperkähler manifolds of K3^[n]-type with polarization of divisibility two. I.B. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – SFB-TRR 358/1 2023 – 491392403. K.-W. L. is supported by the ERC Synergy Grant HyperK (ID: 854361).
§ PERIOD SPACES, SPECIAL CYCLES, AND IRRATIONALITY
The aim of this section is to introduce some upper bounds on the degrees of irrationality of period spaces to be needed in later sections. The lemmas and notations established along the way will also be used later.
§.§ Degrees of irrationality of special cycles
Let us start by reviewing the main results in <cit.> about degrees of irrationality of special cycles on orthogonal Shimura varieties. Let M be an even lattice of signature (2,n) for some n>0 and identify the dual lattice M^∨ := Hom(M,ℤ) with the space of vectors v∈ M⊗ℚ satisfying (v, M)⊂ℤ. The discriminant group of M will be denoted as D(M) := M^∨/M; recall that its order is equal to the absolute value of the discriminant disc(M). Let us further define O(M) to be the group of isometries of M. Then the isometries which preserve the orientation of one (and thus all) positive 2-plane in M⊗ℝ form an index two subgroup O^+(M)⊂ O(M).
The period domain Ω(M) is defined as one of the two components of
{
[w]∈ℙ(M⊗ℂ)
|
(w,w)=0, (w,w̄)>0
}.
We consider this together with the natural action of a finite index subgroup Γ⊂ O^+(M). Then, the period space
𝒫_M(Γ)
:= Ω(M)/Γ
is an orthogonal Shimura variety that is quasi-projective, cf. <cit.>. More precisely, the restriction of O_ℙ(M⊗ℂ)(-1) to Ω(M) descends to a line bundle ℒ on 𝒫_M(Γ) such that the space H^0(ℒ^⊗ k) can be identified with the space Mod_k(Γ,1) of modular forms for Γ of weight k and trivial character. The projective model of 𝒫_M(Γ) is then given by
𝒫_M(Γ)^BB
= Proj(
⊕_k≥ 0H^0(ℒ^⊗ k)
)
= Proj(
⊕_k≥ 0Mod_k(Γ,1)
).
It is normal and the complement of 𝒫_M(Γ) has dimension at most one.
For every t∈ℚ_≥ 0 and γ∈ D(M), we consider the formal sum of hyperplane sections
∑_1/2(v,v)=t, v≡γ v^⊥ ⊂ Ω(M)
where the sum runs through all v∈ M^∨ and
v^⊥ := {
[w]∈Ω(M)
|
(w,v)=0
}.
Under the map Ω(M)⟶𝒫_M(Γ), the formal sum descends to a Cartier divisor Y_t,γ⊂𝒫_M(Γ) called a Heegner divisor. Let us fix a projective embedding 𝒫_M(Γ)^BB⊂ℙ^N and let Ȳ_t,γ⊂𝒫_M(Γ)^BB be the closure of Y_t,γ. By a fundamental result of Borcherds Bor99, the degrees of Ȳ_t,γ in the projective space form coefficients of a modular form. Since the degree of irrationality of a variety is at most the degree of the variety under a projective embedding, the same argument as in the proof of <cit.>*Theorem 2.6 yields:
Assume n≥ 3. Then there exists a constant C>0 such that for all t∈ℚ_>0 and γ∈ D(M), it holds that
irr(Y_t,γ)
≤ C· t^n/2.
As a higher codimensional analogue, for each r-tuple
v = (
v_1,…,v_r
)∈(
M^∨)^⊕ r,
one can consider the linear subspace ⟨v⟩⊂ M⊗ℚ spanned by v_1,…,v_r and the moment matrix Q(v) = 1/2((v_i,v_j)). Fix a semi-positive symmetric matrix T∈Sym_r(ℚ)_≥ 0 and an r-tuple of classes in the discriminant group γ∈ D(M)^⊕ r. Then the formal sum
∑_Q(v)=T, v≡γ⟨v⟩^⊥⊂Ω(M),
which runs through all
v∈(
M^∨)^⊕ r,
descends to a cycle Z_T,γ⊂_M(Γ). Borcherds' result on Heegner divisors has a generalization known as Kudla's modularity conjecture, which is proved in a series of works Kud97,Kud04,Zha09,BW15. Using this result, we obtained in <cit.>*Theorem 6.2 the following bound.
Assume that 1≤ r≤ n-2. Then there exists a constant C>0 such that for all T∈Sym_r(ℚ)_>0 and γ∈ D( M)^⊕ r, it holds that
irr(Z_T,γ)≤ C·det(T)^1+n/2.
§.§ Morphisms induced by lattice embeddings
Let Λ and Λ_# be even lattices of signatures (2,m) and (2,n) with 3≤ m≤ n. Suppose there is an embedding Λ↪Λ_#, not necessarily primitive. We say that a finite index subgroup Γ⊂ O^+(Λ) is extendable with respect to this embedding if for every g∈Γ, there exists g_#∈ O^+(Λ_#) such that g_#(Λ)⊂Λ and g_#|_Λ = g. In this setting, there is a natural set-theoretical map with finite fibers
𝒫_Λ(Γ)
= Ω(Λ)/Γ ⟶ 𝒫^+_Λ_# := Ω(Λ_#)/O^+(Λ_#).
This is actually a morphism of varieties. We include a proof for completeness.
The map (<ref>) is a morphism of algebraic varieties.
It is sufficient to show that a modular form F on Ω(Λ_#) of weight k and trivial character for ^+(Λ_#) restricts to a modular form on Ω(Λ) of the same weight and character for Γ. Recall that such an F is a holomorphic function on the affine cone Cone(Ω(Λ_#)) such that, for all Z∈Cone(Ω(Λ_#)), t∈^×, and g_#∈^+(Λ_#), we have
F(tZ) = t^-k F(Z)
and
F(g_#(Z)) = F(Z).
Now consider F as a holomorphic function on Cone(Ω(Λ)). Let Z∈Cone(Ω(Λ)), t∈^×, and g∈Γ. Note that F(tZ) = t^-k F(Z) as before. On the other hand, there exists g'∈^+(Λ_#) such that g'|_Λ = g; thus F(g(Z)) = F(g'(Z)) = F(Z). Therefore, F corresponds to a modular form on Ω(Λ) of the same weight and character for Γ, as desired.
In the following, we will bound the degree of (<ref>) onto its image. To do so we consider the stable orthogonal group
(Λ){
g∈(Λ)
|
g acts trivially on D(Λ)
}.
and define
^+(Λ)^+(Λ)∩(Λ).
Notice that this is a finite index subgroup of ^+(Λ).
There exists an injection
^+(Λ)
↪^+(Λ_#)
given by extending g∈^+(Λ) to Λ_# as the identity on Λ^⊥Λ_#. In particular, the group ^+(Λ) is extendable with respect to the embedding Λ↪Λ_#.
By the proof of <cit.>*Lemma 7.1, one can embed ^+(Λ) into (Λ_#) by extending g∈^+(Λ) to Λ_# as the identity on Λ^⊥Λ_#. Note that a positive 2-plane in Λ⊗ℝ corresponds to a positive 2-plane in Λ_#⊗ℝ. Hence the extension of g is orientation-preserving, that is, it lies in ^+(Λ_#). This completes the proof.
By Lemma <ref>, there exists a morphism with finite fibers
^+_ΛΩ(Λ)/^+(Λ)
[r] ^+_Λ_#Ω(Λ_#)/^+(Λ_#).
Our first step is to bound the degree of this map onto its image under the assumption that the embedding Λ↪Λ_# is primitive. Recall that the points of Ω(Λ) correspond to Hodge structures of K3 type (cf. <cit.>*Definition 3.2.3) on the lattice Λ. For each [v]∈Ω(Λ), the transcendental lattice T(v)⊂Λ is the minimal substructure of the Hodge structure induced by [v] with (T(v)⊗)^2,0 = v.
For a very general [v]∈Ω(Λ), we have T(v) = Λ.
For a very general [v]∈Ω(Λ), the hyperplane section v^⊥⊂Λ⊗ contains no element from Λ because Λ consists of countably many points. In this situation, the Hodge structure on Λ determined by [v] satisfies Λ^1,1 = {0}, whence T(v) = (Λ^1,1)^⊥ = Λ.
The following lemma shows that transcendental lattices are unchanged under primitive embeddings of lattices.
Consider a primitive embedding ιΛ↪Λ_#. For every [v]∈Ω(Λ), we have
ι(T(v)) = T(ι(v)).
That is, the transcendental lattice on Λ determined by [v] is mapped isomorphically onto the transcendental lattice on Λ_# determined by [ι(v)].
The Hodge substructure ι(T(v))⊂Λ_# has (2,0)-part spanned by ι(v). This implies that
ι(T(v)) ⊃ T(ι(v))
due to the minimality of T(ι(v)). Now, T(ι(v)) appears as a Hodge substructure of ι(Λ) with (2,0)-part spanned by ι(v), so the containment is an equality due to the minimality of ι(T(v)).
The next lemma gives a slight extension of <cit.>*Lemma 4.3.
Suppose that [v_1],[v_2]∈Ω(Λ) have the same image under (<ref>). Then there exists g∈^+(Λ_#) such that there is an identification g(T(v_1)) = T(v_2) of Hodge structures.
By Lemma <ref>, we can view T(v_1) (resp. T(v_2)) as the transcendental lattice of the Hodge structure on Λ_# defined by v_1 (resp. v_2). By hypothesis, there exists g∈^+(Λ_#) such that g([v_1]) = [v_2], so it induces an isomorphism between the Hodge structures on Λ_# defined by [v_1] and [v_2]. Thus g(T(v_1)) = T(v_2) as they are minimal Hodge substructures associated to g([v_1]) = [v_2].
Suppose that [v_1],[v_2]∈Ω(Λ) are very general points such that there are identities T(v_1) = T(v_2) = Λ as lattices, and assume that they are mapped to the same point under (<ref>). Then there exists g∈^+(Λ) such that g([v_1])=[v_2].
By Lemma <ref>, there exists g∈^+(Λ_#) such that g(T(v_1)) = T(v_2) as Hodge structures (so that g([v_1]) = [v_2]). The hypothesis T(v_1) = T(v_2) = Λ implies that the sublattice Λ⊂Λ_# is preserved by g, which gives an element g|_Λ∈(Λ). We need to check that g|_Λ∈^+(Λ). Since it maps [v_1] to [v_2], it takes the positive 2-plane in Λ⊗ spanned by Re(v_1), Im(v_1) to the positive 2-plane spanned by Re(v_2), Im(v_2), and preserves their natural orientations. Hence g|_Λ∈^+(Λ).
When the embedding Λ↪Λ_# is primitive, the degree of (<ref>) onto its image is less than or equal to |O(D(Λ))|.
By Lemma <ref>, a very general fiber of (<ref>) is contained in an orbit of O^+(Λ) acting on Ω(Λ)/Õ^+(Λ). Such an orbit has cardinality at most |O^+(Λ)/Õ^+(Λ)|, which is in turn less than or equal to |O(D(Λ))|, so the statement follows.
We want to extend the previous lemma to the situation when Λ↪Λ_# is not necessarily primitive. In this case, we let Λ_s⊂Λ_# be the saturation of Λ. This is again an even lattice of signature (2,n) and the embedding Λ_s ↪Λ_# is primitive.
When the embedding Λ↪Λ_# is not necessarily primitive, the degree of (<ref>) onto its image is less than or equal to [^+(Λ_s):^+(Λ)]· |(D(Λ_s))|.
According to Lemma <ref>, there are injections
^+(Λ)
↪^+(Λ_s)
↪^+(Λ_#).
This induces maps
^+_Λ→^+_Λ_s→^+_Λ_#
whose composition gives (<ref>). The first map has fibers of cardinality at most [^+(Λ_s):^+(Λ)]; the second map has degree onto its image bounded by |(D(Λ_s))| due to Lemma <ref>. This proves the claim.
We can finally give the bound for the map (<ref>).
Let Λ↪Λ_# be an embedding and Γ⊂^+(Λ) be an extendable subgroup of finite index. Let also Λ_s be the saturation of Λ in Λ_#. Then the degree of the map (<ref>) onto its image is less than or equal to
[^+(Λ)
: Γ∩^+(Λ)]
·
[^+(Λ_s)
: ^+(Λ)]
·
|(D(Λ_s))|.
Set Γ̃ := Γ∩Õ^+(Λ). The map 𝒫_Λ(Γ)→𝒫^+_Λ_# of (<ref>) is a factor of 𝒫_Λ(Γ̃)→𝒫^+_Λ_#, so the degree of the former is bounded by the degree of the latter. The latter map factors as
𝒫_Λ(Γ̃)
⟶ 𝒫̃^+_Λ ⟶ 𝒫^+_Λ_#,
where the second arrow is the map (<ref>). The first map is surjective with degree bounded by [Õ^+(Λ):Γ̃]. By Lemma <ref>, the degree of the second map onto its image is bounded by [^+(Λ_s):^+(Λ)]· |(D(Λ_s))|. Hence the claim follows.
In order to apply the above lemma, we need to bound each factor. For a lattice M denote by ℓ(M) the minimal number of generators of the discriminant group D(M). Then an element of Aut(D(M)) is determined by the images of these generators. Hence there is a bound
|O(D(M))|
≤ |Aut(D(M))|
≤ |disc(M)|^ℓ(M).
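As a quick illustration of this counting bound (an aside of ours, not part of the argument), the following Python sketch brute-forces the order of Aut(D) for a few small finite abelian groups D = ℤ/d_1 × ⋯ × ℤ/d_ℓ and compares it with |disc|^ℓ; the function name and the sample invariant factors are our own choices.

```python
from itertools import product

def aut_count(invariants):
    """Brute-force |Aut(D)| for D = Z/d_1 x ... x Z/d_l (small groups only).
    A homomorphism is determined by the images x_1, ..., x_l of the standard
    generators; the assignment is well defined iff d_i * x_i = 0 in D."""
    D = list(product(*[range(d) for d in invariants]))
    add = lambda x, y: tuple((a + b) % d for a, b, d in zip(x, y, invariants))
    smul = lambda n, x: tuple((n * a) % d for a, d in zip(x, invariants))
    zero = (0,) * len(invariants)
    count = 0
    for images in product(D, repeat=len(invariants)):
        if any(smul(d, x) != zero for d, x in zip(invariants, images)):
            continue  # the assignment does not extend to a homomorphism
        values = set()
        for a in D:
            y = zero
            for coeff, x in zip(a, images):
                y = add(y, smul(coeff, x))
            values.add(y)
        if len(values) == len(D):  # bijective, hence an automorphism
            count += 1
    return count

# Compare |Aut(D)| with the trivial bound |disc|^l used in the text.
for inv in [(12,), (2, 4), (2, 2, 2)]:
    disc = 1
    for d in inv:
        disc *= d
    print(inv, aut_count(inv), "<=", disc ** len(inv))
```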
Let Λ↪Λ_# be a not necessarily primitive embedding and Λ_s⊂Λ_# be the saturation of Λ. Then we have
|(D(Λ_s))|
≤ |disc(Λ)|^ℓ(Λ).
There is a chain of finite index embeddings
Λ⊂Λ_s
⊂Λ_s^∨⊂Λ^∨.
In particular, we have equalities
D(Λ_s)
= Λ_s^∨/Λ_s
= (Λ_s^∨/Λ)/(Λ_s/Λ),
which is a quotient of Λ_s^∨/Λ by a finite index subgroup. Notice also that Λ_s^∨/Λ⊂Λ^∨/Λ = D(Λ). These facts imply
ℓ(Λ_s)≤ℓ(Λ) and disc(Λ_s)≤disc(Λ). Combining these inequalities gives
|(D(Λ_s))|
≤ |disc(Λ_s)|^ℓ(Λ_s)≤ |disc(Λ)|^ℓ(Λ),
as desired.
Let us bound another term in Lemma <ref>.
It holds that
[^+(Λ_s):^+(Λ)]
≤ [Λ_s:Λ]^rk(Λ)· |(D(Λ))|.
Let (Λ_s, Λ) be the group of isometries g∈(Λ_s) such that g(Λ)⊂Λ. Then
[^+(Λ_s)
: ^+(Λ)]
= [^+(Λ_s)
: ^+(Λ_s)
∩(Λ_s,Λ)]
·
[^+(Λ_s)
∩(Λ_s,Λ)
: ^+(Λ)].
By construction, (Λ_s,Λ) can be seen as a subgroup of (Λ), so the second factor on the right hand side of the above equality is bounded by [(Λ):^+(Λ)] ≤ |(D(Λ))|. For the first factor, observe that for every g∈^+(Λ_s) the image g(Λ) is a sublattice of Λ_s of index [Λ_s:Λ]; furthermore, g(Λ)=Λ if and only if g∈(Λ_s,Λ). This means that ^+(Λ_s) acts on the set Σ of sublattices of Λ_s of index equal to [Λ_s:Λ] and with ^+(Λ_s)∩(Λ_s,Λ) as the stabilizer of Λ. Thus [^+(Λ_s):^+(Λ_s)∩(Λ_s,Λ)] ≤ |Σ| ≤ [Λ_s:Λ]^rk(Λ), where the second inequality comes from <cit.>*Remark 2.
§.§ An even positive lattice of rank two
There is a special lattice which will play an important role in our application, so we collect here some useful facts about it. Let a,d be positive integers such that a+d = 4t for some integer t. Then the lattice is given by
Q_(a,d) := [ 2a  -a; -a  2t ].
Denote by z_1,z_2∈ Q_(a,d) the basis vectors with respect to the above matrix. Consider the reflection isometry along z_1 given by
σ_z_1 Q_(a,d)⟶ Q_(a,d) x⟼ x - 2(x,z_1)/(z_1,z_1)z_1.
Note that it acts as the identity on ⟨ z_1 ⟩^⊥ and as the minus of the identity on ⟨ z_1 ⟩. Our goal is to find an embedding of Q_(a,d) into a lattice independent of a and d such that σ_z_1 is extendable.
In the following, we will consider the sublattice
W = ⟨w_1, w_2⟩⊂ Q_(a,d) where
w_1 := z_1
and
w_2 := 2z_2 + z_1.
Notice that z_1 = w_1 and z_2 = 1/2(w_2-w_1), so W has index two in Q_(a,d). Moreover, we have w_1^2 = 2a, w_2^2 = 2d, and (w_1,w_2) = 0, so W is isomorphic to (2a)⊕(2d). Our strategy for finding a desired embedding for Q_(a,d) starts with finding an embedding for W with nice properties. We will need Lagrange's four square theorem, which states that any nonnegative integer a can be written as the sum of four squares, a=a_1^2+a_2^2+a_3^2+a_4^2. It turns out that a can be written as the sum of four coprime squares if and only if 8 does not divide a. (For a proof of this see <cit.>.)
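For concreteness, here is a small Python brute force (our own aside; the function name is ours) that searches for a representation of a as a sum of four coprime squares and checks, for small a, that it succeeds exactly when 8 does not divide a, as stated above.

```python
from itertools import product
from math import gcd, isqrt

def four_coprime_squares(a):
    """Brute-force search for a = a1^2 + a2^2 + a3^2 + a4^2 with
    gcd(a1, a2, a3, a4) = 1.  By the refinement of Lagrange's theorem
    quoted above, this succeeds exactly when 8 does not divide a."""
    r = isqrt(a)
    for a1, a2, a3, a4 in product(range(r + 1), repeat=4):
        if a1 * a1 + a2 * a2 + a3 * a3 + a4 * a4 == a and \
           gcd(gcd(a1, a2), gcd(a3, a4)) == 1:
            return (a1, a2, a3, a4)
    return None

# Cross-check the divisibility criterion for small values of a.
for a in range(1, 50):
    assert (four_coprime_squares(a) is None) == (a % 8 == 0)
print(four_coprime_squares(11), four_coprime_squares(24))
```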
Suppose that both a and d are not divisible by 4. Then there exists a primitive embedding Q_(a,d)↪ E_8 such that σ_z_1 is extendable.
Because a and d are not divisible by 4, each of them can be written as a sum of four coprime squares:
a = a_1^2 + a_2^2 + a_3^2 + a_4^2
and
d = b_1^2 + b_2^2 + b_3^2 + b_4^2.
The hypothesis also implies that the a_i (resp. the b_i) cannot be all even or all odd. We claim that, upon rearranging the indices, we can assume b_i± a_i is odd for all i. Indeed, we can write a + d = 4t as
∑_i=1^4 a_i^2 + ∑_j=1^4 b_j^2
≡ 0 4.
A direct check shows that the number of even a_i is the same as the number of odd b_i. Hence, upon rearranging the indices, we can assume that a_i is even if and only if b_i is odd for each i∈{1,2,3,4}, which implies that b_i± a_i is odd for each i.
Consider now the vector space ⊕_i=1^8 e_i with the standard Euclidean product. Then the lattice E_8 can be realized as the sublattice in ⊕_i=1^8 e_i whose coordinates are either all integers or all half integers such that the sum of all coordinates is even. There is an embedding W↪ E_8 given by
w_1 = a_1(e_1 - e_5)+ a_2(e_2 - e_6)+ a_3(e_3 - e_7)+ a_4(e_4 - e_8),
w_2 = b_1(e_1 + e_5)+ b_2(e_2 + e_6)+ b_3(e_3 + e_7)+ b_4(e_4 + e_8).
Now we have
z_2 = 1/2(w_2 - w_1)
= ∑_i=1^4b_i - a_i/2e_i
+ ∑_i=1^4b_i + a_i/2e_i+4
which belongs to E_8 since the b_i± a_i are all odd. Hence the embedding extends to an embedding Q_(a,d)↪ E_8. To check that σ_z_1 extends, consider the involution σ: E_8 → E_8 defined by σ(e_i)=e_4+i for i=1,2,3,4. Then σ(w_1) = -w_1 = σ_z_1(w_1) and σ(w_2) = w_2 = σ_z_1(w_2), so that σ(z_1)=-z_1=σ_z_1(z_1) and σ(z_2) = z_2+z_1 = σ_z_1(z_2). This shows that σ is an extension of σ_z_1.
Let us prove that the embedding Q_(a,d)↪ E_8 is primitive. Every x∈ E_8∩ (Q_(a,d)⊗) can be written as
x = ∑_i=1^4 (
n_i + δ/2)e_i
+ ∑_i=1^4(
n_i+4 + δ/2)e_i+4
= A/Bz_1 + C/D z_2
where n_i,n_i+4∈ℤ, δ∈{0,1}, A,B,C,D∈ℤ with A,B coprime and C,D coprime. Expanding z_1,z_2 in e_1,…, e_8 and comparing the coefficients gives
n_i + δ/2
= A/Ba_i + C/2D(b_i-a_i),
n_i+4 + δ/2
= -A/Ba_i + C/2D(b_i+a_i),
for i=1,…,4
which implies
n_i + n_i+4 + δ
= C/D b_i,
n_i - n_i+4
= (2A/B - C/D)a_i,
for i=1,…,4.
The first set of equations in (<ref>) implies that D divides all the b_i. Since the b_i are coprime, we get D=1. The second set of equations shows that B divides 2a_i for i=1,…,4. Since the a_i are coprime, we conclude that B=1 or B=2. If B=1, then x∈ Q_(a,d) and we are done. If B=2, then A is odd. In this case, adding the two sets of equations in (<ref>) together modulo 2 gives
δ≡ C(b_i - a_i) + a_i
≡ C + a_i 2
where the second equality holds as b_i-a_i are all odd. Hence a_i≡δ - C 2, so the a_i are all even or all odd, which is a contradiction. Therefore, the embedding Q_(a,d)↪ E_8 is primitive. This completes the proof.
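As a sanity check (our own aside, not part of the proof), the following Python snippet verifies the construction above on one sample pair: it checks that (z_1,z_2) has the Gram matrix of Q_(a,d), that z_2 lies in the coordinate model of E_8 used in the proof, and that the coordinate swap e_i ↔ e_{i+4} restricts to the reflection σ_{z_1}. The sample values a = 2 and d = 10 are our own choice.

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def in_E8(v):
    """Membership test in the coordinate model of E8 used in the proof:
    all coordinates integral or all half-integral, with even total sum."""
    doubled = [2 * c for c in v]
    if any(x.denominator != 1 for x in doubled):
        return False
    parities = {int(x) % 2 for x in doubled}
    return len(parities) == 1 and sum(v) % 2 == 0

# Sample data (our own choice): a = 2 = 1^2+1^2+0^2+0^2 and
# d = 10 = 0^2+0^2+3^2+1^2, arranged so that a_i is even iff b_i is odd.
a_coords, b_coords = (1, 1, 0, 0), (0, 0, 3, 1)
a = sum(x * x for x in a_coords)          # = 2
d = sum(x * x for x in b_coords)          # = 10
t = (a + d) // 4                          # = 3

w1 = [F(x) for x in a_coords] + [F(-x) for x in a_coords]
w2 = [F(x) for x in b_coords] + [F(x) for x in b_coords]
z1 = w1
z2 = [(y - x) / 2 for x, y in zip(w1, w2)]

# Gram matrix of (z1, z2) is that of Q_(a,d): [[2a, -a], [-a, 2t]].
assert dot(z1, z1) == 2 * a and dot(z1, z2) == -a and dot(z2, z2) == 2 * t
assert in_E8(z1) and in_E8(z2)

# The swap e_i <-> e_{i+4} sends z1 to -z1 and z2 to z2 + z1,
# exactly as the reflection formula for sigma_{z1} prescribes.
swap = lambda v: v[4:] + v[:4]
assert swap(z1) == [-c for c in z1]
assert swap(z2) == [c2 + c1 for c2, c1 in zip(z2, z1)]
print("Gram matrix and reflection check passed for (a, d) =", (a, d))
```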
Suppose that one, and hence both, of a and d are divisible by 4. Then there exists an embedding Q_(a,d)↪ A_1^⊕ 10 such that σ_z_1 is extendable. Moreover, if Q_(a,d),s is the saturation, then [Q_(a,d),s:Q_(a,d)] = 2.
Let us write a/4-1 = a_1^2+…+a_4^2 and d/4-1=b_1^2+…+b_4^2 using Lagrange's four square theorem. Then there is an embedding W↪ A_1^⊕ 10 defined by
w_1 = 2a_1 e_1 + 2a_2 e_2 + 2a_3 e_3 + 2a_4 e_4 + 2e_5,
w_2 = 2b_1 e_6 + 2b_2 e_7 + 2b_3 e_8 + 2b_4 e_9 + 2e_10,
where {e_1,…,e_10} is the canonical basis for A_1^⊕ 10. Since w_2-w_1 is divisible by 2, this extends to an embedding of Q_(a,d) by taking z_1 = w_1 and z_2 = 1/2(w_2-w_1). The reflection σ_z_1 = σ_w_1 extends to the involution σ A_1^⊕ 10→ A_1^⊕ 10 defined by σ(e_i)=-e_i and σ(e_5+i)=e_5+i for i=1,…,5. Finally, one can verify that the saturation is given by Q_(a,d),s = <1/2z_1,z_2 > via a computation similar to the last part of the proof of Lemma <ref>. This proves the last assertion.
Now consider the lattice ℤ(2a)⊕ℤ(2d) with canonical basis {z_1,z_2} and let σ_z_1 denote the reflection along z_1. With a similar strategy, we can find an embedding for this lattice independent of a and d such that σ_z_1 extends.
There exists a primitive embedding ℤ(2a)⊕ℤ(2d) ↪ A_1^⊕ 10 such that σ_z_1 extends. When 8∤ a and 8∤ d, there exists a primitive embedding ℤ(2a)⊕ℤ(2d) ↪ A_1^⊕ 8 such that σ_z_1 extends
Write a-1 and d-1 as sums of four squares so that a=a_1^2+a_2^2+a_3^2+a_4^2+1 and d = b_1^2+b_2^2+b_3^2+b_4^2+1. Then the embedding of ℤ(2a)⊕ℤ(2d) into A_1^⊕ 10 is defined by
z_1 = a_1e_1+…+a_4e_4+e_5,
z_2 = b_1e_6+…+b_4e_9+e_10.
One can check directly that this is a primitive embedding. Moreover, σ_z_1 extends to the involution σ A_1^⊕ 10→ A_1^⊕ 10 defined by σ(e_i) = -e_i and σ(e_5+i) = e_5+i for i=1,…,5.
If 8 does not divide both a and d, we can write them as the sum of four coprime squares a = a_1^2+…+a_4^2 and d = b_1^2+…+b_4^2. Then the embedding into A_1^⊕ 8 is defined by
z_1 = a_1e_1 + … + a_4e_4,
z_2 = b_1e_5 + … + b_4e_8,
which is again primitive. Furthermore, σ_z_1 extends to the involution σ A_1^⊕ 8→ A_1^⊕ 8 defined by σ(e_i)=-e_i and σ(e_4+i)=e_4+i for i=1,…,4.
§.§ Degrees of irrationality of period spaces
Putting the previous results together allows us to generalize the techniques in <cit.>*Section 6. Suppose that Λ is an even lattice of signature (2,m) with an embedding into another even lattice Λ_# of signature (2,m'). Consider an arithmetic subgroup Γ⊂^+(Λ) which is extendable to Λ_#. We are going to bound the degree of irrationality of the period space 𝒫_Γ(Λ) in terms of this embedding. Let us first deal with the case when the embedding is primitive.
Suppose that the embedding Λ↪Λ_# is primitive and consider the natural map
f: 𝒫_Λ(Γ)
⟶ 𝒫^+_Λ_#.
Then there exists a constant C depending only on Λ_# such that, for every irreducible component Y⊂𝒫_Γ(Λ), we have the inequalities
irr(f(Y))≤ C·
|disc(Λ)|^1+m'/2,
irr(Y)≤ C
· [^+(Λ)
: Γ∩^+(Λ)]
· |(D(Λ))|
· |disc(Λ)|^1+m'/2.
Choose a basis w = (w_1,…,w_m'-m) for the orthogonal complement Λ^⊥⊂Λ_# with moment matrix T = (1/2(w_i, w_j)). Then Ω(Λ) = ⟨w⟩^⊥⊂Ω(Λ_#). Thus f(Y) appears as a Zariski open subset of a component of the special cycle Z_T,0⊂𝒫^+_Λ_#. By Lemma <ref>, there exists a constant C' depending only on Λ_# such that
irr(f(Y))
≤ C' · |det(T)|^1+m'/2.
Note that (cf. <cit.>*Section 14.0.2)
det(T)
= 1/2^m'-m·|disc(Λ^⊥)|
= 1/2^m'-m·
|disc(Λ_#)|
/
|disc(Λ)|
·[Λ_#:Λ⊕Λ^⊥]^2.
From <cit.>*Section 7, (36), one can deduce that
[Λ_#:Λ⊕Λ^⊥]
≤ |disc(Λ)|.
Hence
|det(T)|
≤ |disc(Λ_#)|
· |disc(Λ)|
By setting C := C'· |disc(Λ_#)|^1+m'/2, we obtain the first inequality
irr(f(Y))≤ C·
|disc(Λ)|^1+m'/2.
By Lemma <ref>, we have
irr(Y)≤ irr(f(Y))·
[^+(Λ)
: Γ∩^+(Λ)]
·|(D(Λ))|.
Combining this with the previous inequality gives us the second inequality.
Now let us deduce a bound when the embedding is not necessarily primitive.
Let Λ_s⊂Λ_# be the saturation of Λ and assume that [Λ_s:Λ]≤ D for some constant D. Let also ℓ(Λ) be the minimum number of generators for D(Λ). Then there exists a constant C depending only on Λ_# and D such that for every irreducible component Y⊂𝒫_Γ(Λ), it holds that
irr(Y) ≤ C
· [^+(Λ)
: Γ∩^+(Λ)]
· |disc(Λ)|^1+m'/2+2ℓ(Λ).
By Lemma <ref>, the degree of f_Λ(Γ)⟶^+_Λ_# onto its image is bounded by
[^+(Λ):Γ∩^+(Λ)]
· [^+(Λ_s):^+(Λ)]
· |(D(Λ_s))|.
Lemma <ref> shows that |(D(Λ_s))| ≤ |disc(Λ)|^ℓ(Λ). Combining this with Lemma <ref> gives
[^+(Λ_s)
: ^+(Λ)]
· |(D(Λ_s))|
≤ [Λ_s:Λ]^m+2· |(D(Λ))| · |disc(Λ)|^ℓ(Λ)
≤ D^m+2· |disc(Λ)|^2ℓ(Λ)≤ D^m'+2· |disc(Λ)|^2ℓ(Λ).
As a result, we obtain
irr(Y) ≤ irr(f(Y))
· [^+(Λ)
: Γ∩^+(Λ)]
· D^m'+2· |disc(Λ)|^2ℓ(Λ).
Recall from the proof of Lemma <ref> that
|disc(Λ_s)|
≤ |disc(Λ)|.
To bound irr(f(Y)), first notice that 𝒫_Λ(Γ) = 𝒫_Λ_s(Γ), so we can consider f as a map from 𝒫_Λ_s(Γ) to 𝒫^+_Λ_#. Then Lemma <ref> shows that there exists C'>0 depending only on Λ_# such that
irr(f(Y))
≤ C'· |disc(Λ_s)|^1+m'/2≤ C'· |disc(Λ)|^1+m'/2.
Merging this into the above inequality with C = C'· D^m'+2 gives the bound we want.
§ MODULI SPACES OF PROJECTIVE HYPERKÄHLER MANIFOLDS
Let X be a projective hyperkähler manifold. Then the group H^2(X,ℤ) endowed with the Beauville–Bogomolov–Fujiki form is an even lattice of signature (3,b_2(X)-3). For each element h∈ H^2(X,ℤ), one defines its divisibility div_Λ(h) to be the positive generator of the ideal (h, H^2(X,ℤ))⊂ℤ. Let us fix a lattice Λ isometric to H^2(X,ℤ) and let γ, d be positive integers. Then the pairs (X',H) of hyperkähler manifolds equipped with a primitive ample divisor H such that H^2(X',ℤ)≅Λ and h := c_1(H) satisfies div_Λ(h) = γ, (h,h) = 2d form a moduli space ℳ_Λ, 2d^γ of dimension b_2(X)-3.
The monodromy group Mon^2(X)⊂ O^+(H^2(X,ℤ)) for all types of X considered in this paper is normal and of finite index. Via a marking Λ≅ H^2(X,ℤ), this defines a subgroup
Mon^2(Λ)⊂ O^+(Λ)
independent of the choice of markings. Let h∈Λ be a primitive ample class and define Λ_h := h^⊥⊂Λ. Then Λ_h has signature (2,b_2(X)-3) and discriminant
|disc(Λ_h)|
= 2d· |disc(Λ)|/γ^2.
The elements of Mon^2(Λ) fixing h form a finite index subgroup Mon^2(Λ, h), which can be identified with a subgroup of O^+(Λ_h) by restriction. Let Y⊂ℳ_Λ, 2d^γ be the irreducible component containing (X,H). Then there exists an open embedding
Y ↪ Ω(Λ_h)/Mon^2(Λ, h)
as guaranteed by the Torelli theorem <cit.>; see also <cit.>*Lemma 8.1. This allows us to bound the degree of irrationality of ℳ_Λ, 2d^γ using the results in the previous section. To obtain a universal bound for all the moduli spaces, what we need to do is to find embeddings of all possible Λ_h into a common lattice Λ_# such that Mon^2(Λ,h) is extendable.
§.§ Hyperkähler manifolds of K3n-type
Let X be a hyperkähler manifold deformation equivalent to the Hilbert scheme of length n subschemes on a K3 surface. Here we assume that n≥ 2 since the case of K3 surfaces has already been treated in <cit.>. For such an X, the lattice H^2(X,ℤ) is isomorphic to
Λ = Λ_K3^[n]
:= E_8(-1)^⊕ 2⊕ U^⊕ 3⊕ℤδ where
(δ,δ)=-2(n-1).
This is an even lattice of signature (3,20). According to <cit.>*Lemma 9.2, we have
Mon^2(Λ)
= Ô^+(Λ) := {
g∈ O^+(Λ)
|
g|_D(Λ) = ±id}.
Take a primitive h∈Λ with (h,h)>0. Then
Λ_h ≅
E_8(-1)^⊕ 2⊕ U^⊕ 2⊕ Q_h(-1)
where Q_h(-1)⊂ U⊕ℤδ is a certain negative definite rank two sublattice. We will need an explicit description of this lattice only in the cases of divisibility γ=1,2.
If γ=1, then Q_h ≅ℤ(2(n-1))⊕ℤ(2d). If γ=2, then Q_h ≅ Q_(n-1,d) as defined in (<ref>).
This is proved in <cit.>*Remark 3.3 and Equation (31).
The lattice Λ_h has signature (2,20) and discriminant disc(Λ_h) = 4d(n-1)/γ^2. One can verify by definition that γ divides both 2d and 2(n-1). The monodromy group Mon^2(Λ,h) coincides with ^+(Λ,h) = (Λ,h)∩^+(Λ), which can then be described explicitly as follows:
Let (n,d,γ) be as above with ℳ_K3^[n], 2d^γ non-empty.
* If n=2 or γ≥ 3, then
^+(Λ,h)
= ^+(Λ_h).
* If n≥ 3 and γ=1,2, then
^+(Λ,h) = ⟨^+(Λ_h), σ_z_1⟩,
where σ_z_1∈^+(Λ_h) is the reflection along the primitive vector z_1∈ Q_h(-1) of the canonical basis (z_1,z_2) given in the description of Lemma <ref>.
The first case is proved in <cit.>*Lemma 3.6 and Proposition 3.7. The second case is proved in <cit.>*Section 5.
With this we can state the extendability result that we are going to need:
In each of the following cases, one can find an embedding Λ_h↪Λ_# for some even lattice Λ_# such that ^+(Λ,h) extends:
* If n=2 or γ≥ 3, then one can choose Λ_# = U^⊕ 2⊕ E_8(-1)^⊕ 3 with the embedding primitive.
* If n≥ 3 and γ=1, then one can choose Λ_# =U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_1(-1)^⊕ 10 with the embedding primitive.
* If n≥ 3,γ=1, and 8∤ n-1,8∤ d, then Λ_# =U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_1(-1)^⊕ 8 and the embedding can be chosen primitive.
* If n≥ 3,γ=2, and 4∤ n-1,4∤ d, then Λ_# =U^⊕ 2⊕ E_8(-1)^⊕ 3 and the embedding can be chosen primitive.
* If n≥ 3,γ=2, and 4| n-1,4| d, then Λ_# =U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_1(-1)^⊕ 10. Let Λ_h,s be the saturation. Then [Λ_h,s:Λ_h]=2.
Let us prove the statement case-by-case.
* If n=2 or γ≥ 3, the group ^+(Λ,h) coincides with the stable orthogonal group ^+(Λ_h) by Lemma <ref>. Hence, it is extendable with respect to any embedding. We can simply choose a primitive embedding Q_h(-1) ↪ E_8(-1) (cf. <cit.>*Corollary 14.1.9) to obtain a primitive embedding of Λ_h into U^⊕ 2⊕ E_8(-1)^⊕ 3.
In all the other cases, Lemma <ref> shows that ^+(Λ,h)=⟨^+(Λ_h),σ_z_1⟩. Suppose we can find an embedding Q_h(-1)↪ M such that σ_z_1 on Q_h(-1) extends to σ on M. Then we have an embedding of Λ_h = U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ Q_h(-1) into U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ M such that σ_z_1 extends to Id⊕σ, where Id is the identity map on U^⊕ 2⊕ E_8(-1)^⊕ 2. Since Id⊕σ acts as the identity on U^⊕ 2, it is orientation-preserving. Moreover, if we let Q_h(-1)_s be the saturation of Q_h(-1) inside M, then [Λ_h,s:Λ_h] = [Q_h(-1)_s:Q_h(-1)]. Thus, the problem is reduced to finding appropriate embeddings of Q_h(-1).
* If n≥ 3 and γ=1, Lemma <ref> shows that Q_h(-1) ≅ℤ(-2(n-1))⊕ℤ(-2d). Then Lemma <ref> gives a primitive embedding Q_h(-1)↪ A_1^⊕ 10(-1) such that σ_z_1 extends.
* If n≥ 3,γ=1,8∤ n-1,8∤ d, Lemma <ref> shows that Q_h(-1) ≅ℤ(-2(n-1))⊕ℤ(-2d). Then Lemma <ref> gives a primitive embedding Q_h(-1)↪ A_1^⊕ 8(-1) such that σ_z_1 extends.
* If n≥ 3,γ=2, and 4∤ n-1,4∤ d, Lemma <ref> shows that Q_h(-1) ≅ Q_(n-1,d)(-1). Then Lemma <ref> gives a primitive embedding Q_h(-1)↪ E_8(-1) such that σ_z_1 extends.
* If n≥ 3,γ=2, and 4| n-1,4| d, Lemma <ref> shows that Q_h(-1) ≅ Q_(n-1,d)(-1). Then Lemma <ref> gives an embedding Q_h(-1)↪ A_1^⊕ 10(-1) such that σ_z_1 extends. In this case, we have [Q_h(-1)_s:Q_h(-1)] = 2.
This completes the proof.
We can now deduce our result for hyperkähler manifolds of K3^[n]-type.
There exists a constant C>0 such that for any n,d,γ and for any irreducible component Y⊂ℳ^γ_K3^[n],2d it holds that
irr(Y)≤ C · (n· d)^19.
Furthermore, for any ε > 0, there exists a constant C_ε>0 such that the above bound can be refined in each case as follows:
* If n=1, then irr(Y) ≤ C_ε· (n· d)^14+ε.
* If n=2 or γ≥ 3, then irr(Y)≤ C· (n· d)^16. If furthermore γ,2d/γ,2(n-1)/γ are coprime, then irr(Y) ≤ C_ε· (n· d)^14+ε.
* If n≥ 3 and γ=1, then irr(Y)≤ C· (n· d)^17. If furthermore 8∤ n-1 and 8∤ d, then irr(Y) ≤ C_ε· (n· d)^14+ε.
* If n≥ 3,γ=2 and 4∤ n-1, 4∤ d, then irr(Y) ≤ C· (n· d)^16.
Recall that the component Y is birational to Ω(Λ_h)/Mon^2(Λ,h) for some h∈Λ. Let us start the proof by analyzing the situation in each case.
* If n=1, note that ^γ_K3^[1], 2d is nonempty if and only if γ = 1 and in this case it is irreducible. Then it was proven in <cit.> that for any ε>0 there exists C_1,ε>0 such that for all d>0, it holds that
irr(ℳ^γ_K3^[1],2d)
≤ C_1,ε· (n· d)^14+ε.
* If n=2 or γ≥ 3, Lemma <ref> shows that we can find a primitive embedding of Λ_h into the lattice Λ_# = U^⊕ 2⊕ E_8(-1)^⊕ 3 such that Mon^2(Λ,h) is extendable. Note that the lattice Λ_# is independent of n,d,γ, and Y. Then Lemma <ref> shows that there is a constant C_2 depending only on Λ_# such that
irr(Y)≤ C_2 · |(D(Λ_h))| · |disc(Λ_h)|^14,
where we use [^+(Λ_h):Mon^2(Λ,h) ∩^+(Λ_h)]=1 from Lemma <ref>.
On the other hand, D(Λ_h) = D(Q_h(-1)) is generated by at most 2 elements, hence Lemma <ref> shows that |(D(Λ_h))| ≤ |disc(Λ_h)|^2, thus
irr(Y) ≤ C_2
· |disc(Λ_h)|^16≤ C_2 · (n· d)^16,
where the second inequality follows from |disc(Λ_h)| = 4d(n-1)/γ^2. If γ,2d/γ,2(n-1)/γ are coprime, then <cit.>*Proposition 3.12 (ii) shows that for every ε>0, there exists a constant C'_2,ε such that |(D(Λ_h))|≤ C'_2,ε· |disc(Λ_h)|^ε. Plugging this into (<ref>) and setting C_2,ε = C_2· C'_2,ε, we get the bound.
irr(Y)≤ C_2,ε· |disc(Λ_h)|^14+ε≤ C_2,ε· (n· d)^14+ε,
where the second inequality comes from plugging in |disc(Λ_h)| = 4d(n-1)/γ^2.
* If n≥ 3 and γ= 1, then Lemma <ref> shows that we have a primitive embedding of Λ_h into U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_1(-1)^⊕ 10 such that Mon^2(Λ,h) is extendable. Then, reasoning as in the proof of equation (<ref>), we get a constant C_3 such that
irr(Y) ≤ C_3 · (n· d)^17.
If furthermore 8∤ d, 8∤ n-1, then Lemma <ref> shows that we can find a primitive embedding of Λ_h into the lattice U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_1(-1)^⊕ 8 such that Mon^2(Λ,h) is extendable. Moreover, <cit.>*Proposition 3.12 (ii) shows that for any ε>0, there exists a constant C'_3,ε such that |(D(Λ_h))|≤ C'_3,ε· |disc(Λ_h)|^ε. Hence, reasoning as in the proof of inequality (<ref>) we see that for any ε>0, there exists C_3,ε>0 such that
irr(Y)
≤ C_3,ε· (n· d)^14+ε.
* Suppose n≥ 3 and γ= 2. If 4∤ n-1,4∤ d, Lemma <ref> shows that we can find a primitive embedding of Λ_h into the lattice U^⊕ 2⊕ E_8(-1)^⊕ 3 such that Mon^2(Λ,h) is extendable. Then reasoning as in the proof of equation (<ref>) we see that there exists a constant C_4 such that
irr(Y) ≤ C_4 · (n· d)^16.
If 4| d and 4| n-1, then Lemma <ref> shows that we can find an embedding of Λ_h into the lattice U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_1(-1)^⊕ 10 such that Mon^2(Λ,h) is extendable. The saturation Λ_h,s of Λ_h satisfies [Λ_h,s:Λ_h]=2; the group D(Λ_h)=D(Q_h(-1)) is generated by at most 2 elements; Lemma <ref> shows that [^+(Λ_h):Mon^2(Λ,h) ∩^+(Λ_h)]=1. Hence, Lemma <ref> asserts that there exists a constant C_5>0 such that
irr(Y) ≤ C_5
· |disc(Λ_h)|^19≤ C_5 · (n· d)^19,
where the second inequality comes from
|disc(Λ_h)|
= 4d(n-1)/γ^2.
Set C=max{C_1,8,C_2,C_3,C_4,C_5} and C_ε = max{ C_1,ε,C_2,ε,C_3,ε}. The main inequality in the statement then follows from (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>). The inequality in Case 1 follows from (<ref>), the inequalities in Case 2 follow from (<ref>) and (<ref>), the inequalities in Case 3 follow from (<ref>) and (<ref>) and the inequality in Case 4 follows from (<ref>).
§.§ Hyperkähler manifolds of Kum-n type
Let X be a projective hyperkähler manifold of dimension 2n deformation equivalent to the generalized Kummer variety of an abelian surface. In this case, the even lattice H^2(X,ℤ) has signature (3,4) and is isomorphic to
Λ = Λ_Kum_n := U^⊕ 3⊕ℤη where
(η,η) = -2(n+1).
According to <cit.> and <cit.>*Theorem 1.4, the monodromy group Mon^2(Λ) is a subgroup of index at most two of ^+(Λ) which contains
SO^+(Λ)
= ^+(Λ)∩SO(Λ).
In particular, we have the following:
Let h∈Λ be a primitive vector of divisibility γ and positive square (h,h)=2d. Up to the action of Mon^2(Λ), we can assume h=γ(e+tf)-aη, where {e,f} is the standard basis for the first copy of U, and t,a are integers such that
d = γ^2t-(n+1)a^2,
(a,γ)=1,
0≤ a<γ.
Let us prove that, up to the action of SO^+(Λ)⊂Mon^2(Λ), the vector h can be expressed in the desired form . By Eichler's Criterion (cf. <cit.>*Proposition 2.15), two primitive vectors h_1,h_2∈Λ are in the same SO^+(Λ)-orbit if and only if they have the same square (h_1,h_1)=(h_2,h_2)=2d, the same divisibility γ and the same classes [ h_1/γ] = [ h_2/γ] in D(Λ)=D(ℤη) ≅ℤ/2(n+1)ℤ. We can then proceed as in the proofs <cit.>*Proposition 3.1 and Lemma 3.4, where we only need to replace U^⊕ 3⊕ E_8(-1)^⊕ 2 and n-1 respectively by U^⊕ 3 and n+1.
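As a small computational aside (our own illustration, not part of the proof), the following Python helper searches for a pair (a,t) in the normal form above for given (n,d,γ); the function name and the sample inputs are ours, and a return value of None indicates that no pair with these constraints exists.

```python
from math import gcd

def kummer_normal_form(n, d, gamma):
    """Search for (a, t) with d = gamma^2 * t - (n + 1) * a^2, 0 <= a < gamma
    and gcd(a, gamma) = 1, as in the normal form h = gamma(e + t*f) - a*eta."""
    for a in range(gamma):
        if gcd(a, gamma) != 1:
            continue
        numerator = d + (n + 1) * a * a
        if numerator % (gamma * gamma) == 0:
            return a, numerator // (gamma * gamma)
    return None

# Sample values of our own choosing:
print(kummer_normal_form(5, 7, 1))   # gamma = 1 forces a = 0, so t = d
print(kummer_normal_form(2, 6, 3))   # d = 6 = 9*1 - 3*1^2 gives (a, t) = (1, 1)
```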
Let h∈Λ be a primitive vector as in Lemma <ref>. Then
Λ_h ≅ U^⊕ 2⊕ Q_h(-1)
where Q_h(-1)⊂ U⊕η is a certain negative definite rank two sublattice which can be described explicitly:
If γ=1, then Q_h ≅ℤ(2(n+1))⊕ℤ(2d). If γ=2 then Q_h ≅ Q_(n+1,d) as defined in (<ref>). If γ≥ 3, then
Q_h = [ 2(n+1) -2a(n+1)/γ; -2a(n+1)/γ 2t ]
with a,t,γ as in Lemma <ref>.
This can be proved using Lemma <ref> in the same way as one proves <cit.>*Remark 3.3 and Equation (31), where one only need to replace U^⊕ 3⊕ E_8(-1)^⊕ 2 and n-1 respectively by U^⊕ 3 and n+1.
The lattice Λ_h has signature (2,4) and discriminant disc(Λ_h) = 4d(n+1)/γ^2. Note that γ divides both 2d and 2(n+1). The monodromy group Mon^2(Λ,h) is a subgroup of index at most two of ^+(Λ,h) = (Λ,h)∩^+(Λ) which can be described explicitly as follows.
Let (n,d,γ) be as above with ℳ_Kum^[n],2d^γ non-empty.
* If γ≥ 3, then
^+(Λ,h)
= ^+(Λ_h).
* If γ=1,2 then
^+(Λ,h) = ⟨^+(Λ_h), σ_z_1⟩,
where σ_z_1∈^+(Λ_h) is the reflection along the basis element z_1∈ Q_h(-1) of the canonical basis {z_1,z_2} given in the description of Lemma <ref>.
This can be proved in the same way as <cit.>*Lemma 3.6 and Proposition 3.7 with ℓ replaced by η, (n-1) replaced by (n+1), and U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ Q_h(-1) replaced by U^⊕ 2⊕ Q_h(-1).
As a consequence, we obtain an analogue of Lemma <ref>.
There exists a lattice Λ_# together with an embedding Λ_h↪Λ_# as follows such that ^+(Λ,h) extends.
* If γ≥ 3, then Λ_# = U^⊕ 2⊕ E_8(-1); the embedding is primitive.
* If γ=1, then Λ_# =U^⊕ 2⊕ A_1(-1)^⊕ 10; the embedding is primitive.
* If γ=1, and 8∤ n-1,8∤ d, then Λ_# =U^⊕ 2⊕ A_1(-1)^⊕ 8; the embedding is primitive.
* If γ=2, and 4∤ n-1,4∤ d, then Λ_# =U^⊕ 2⊕ E_8(-1); the embedding is primitive.
* If γ=2, and 4| n-1,4| d, then Λ_# =U^⊕ 2⊕ A_1(-1)^⊕ 10. If Λ_h,s is the saturation, then [Λ_h,s:Λ_h]=2.
Since Mon^2(Λ,h)⊂^+(Λ,h), it is enough to show the extendability of the latter. The proof is similar to the proof of Lemma <ref>, so we leave it to the reader.
We are ready to prove our main result for hyperkähler manifolds of Kum_n-type.
There exists a constant C>0 such that for any n,d,γ and for any irreducible component Y⊂ℳ^γ_Kum_n,2d, it holds that
irr(Y) ≤ C · (n· d)^11.
Furthermore, for every ε>0, there exists a constant C_ε>0 such that the above bound can be refined in each case as follows:
* If γ≥ 3, then irr(Y)≤ C· (n· d)^8. If furthermore γ,2d/γ,2(n+1)/γ are coprime, then irr(Y) ≤ C_ε· (n· d)^6+ε.
* If γ = 1, then irr(Y)≤ C· (n· d)^9. If furthermore 8∤ n-1,8∤ d, then irr(Y) ≤ C_ε· (n· d)^6+ε.
* If γ=2 and 4∤ n-1, 4∤ d, then irr(Y) ≤ C· (n· d)^8.
The proof uses Lemma <ref> and is similar to the proof of Theorem <ref>. The only difference is the factor
[^+(Λ_h)
: Mon^2(Λ,h)
∩^+(Λ_h)].
In the current case, Mon^2(Λ,h) is a subgroup of index at most two in ^+(Λ,h), so that Mon^2(Λ,h)∩^+(Λ_h) is a subgroup of index at most two in
^+(Λ,h)
∩^+(Λ_h)
= ^+(Λ_h),
where last equality follows from Lemma <ref>.
§.§ Hyperkähler manifolds of OG10 type
Now consider hyperkähler manifolds X deformation equivalent to O'Grady's ten-dimensional example <cit.>. Then H^2(X,ℤ) is isomorphic to the lattice Λ = Λ_OG10:= U^⊕ 3⊕ E_8(-1)^⊕ 2⊕ A_2(-1); see <cit.>. Note that this is an even lattice of signature (3,21). The monodromy group Mon^2(Λ) coincides with the group ^+(Λ) = ^+(Λ); see <cit.>. If h ∈Λ is a class with positive square, then its divisibility can only be γ=1 or γ=3. In this setting, the lattice Λ_h is of the form
Λ_h
≅ U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ Q_h(-1)
where Q_h(-1) is a certain negative definite lattice of rank 3. Note that the lattice Λ_h has signature (2,21) and discriminant |disc(Λ_h)| = 6d/γ^2. The relevant monodromy group can be described explicitly as follows.
Consider the subgroup Mon^2(Λ,h) ⊂^+(Λ_h).
* If γ=3, then Mon^2(Λ,h) = ^+(Λ_h). In this case, there exists a primitive embedding of Λ_h into Λ_# = U^⊕ 2⊕ E_8(-1)^⊕ 3 such that Mon^2(Λ,h) is extendable.
* If γ=1, then Q_h(-1)≅ A_2(-1)⊕ℤℓ with (ℓ,ℓ)=-2d. In this case, the monodromy group has the form Mon^2(Λ,h) = ⟨^+(Λ_h), σ_v ⟩ where σ_v is the reflection with respect to a vector v∈ A_2(-1) with (v,v) = -6. Moreover, there exists a primitive embedding of Λ_h into Λ_# = U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_2(-1)⊕ A_1(-1)^⊕ 5 such that Mon^2(Λ,h) is extendable.
The formulas for Mon^2(Λ,h) and Q_h(-1) in both cases are part of <cit.>*Theorem 3.1 and the proof of <cit.>*Lemma 4.4. Let us find the lattice Λ_# and prove the extendability case-by-case.
* Suppose that γ = 3. First note that Q_h(-1) has signature (0,3), so it can be embedded as a primitive sublattice of E_8(-1) (cf. <cit.>*Corollary 14.1.9). This induces a primitive embedding Λ_h↪Λ_# = U^⊕ 2⊕ E_8(-1)^⊕ 3. Because Mon^2(Λ,h) = ^+(Λ_h) in this case, it is extendable for every such embedding.
* Suppose that γ = 1. Using Lagrange's four square theorem, we can write d = a_1^2 + a_2^2 + a_3^2 + a_4^2 + 1 for some integers a_1,…,a_4. This induces a primitive embedding of ℓ into A_1(-1)^⊕ 5 which maps a generator to (a_1,…,a_4,1). This defines a primitive embedding of Q_h(-1) into A_2(-1)⊕ A_1(-1)^⊕ 5 and thus induces a primitive embedding of Λ_h into Λ_# = U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_2(-1)⊕ A_1(-1)^⊕ 5. The reflection σ_v can be extended to Λ_# such that the action on Λ_h^⊥ = U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ A_1(-1)^⊕ 5 is the identity.
Let ℳ_OG10, 2d^γ be the moduli space of hyperkähler manifolds of type OG10 with a primitive polarization of degree 2d and divisibility γ. For both γ=1 and 3, this moduli space is irreducible; see Ono22b, Ono22 or <cit.>*Proposition 3.4. In the non-split case, ℳ_OG10, 2d^γ is non-empty if and only if d≡ 6 9. Moreover, all moduli spaces ℳ_OG10, 2d^γ are of general type with finitely many exceptions GHS11,BBBF23. We can now get a polynomial bound on their degrees of irrationality.
For any ε>0, there exists a constant C_ε>0 independent of γ,d such that
irr(ℳ_OG10, 2d^γ)≤ C_ε· d^14+ε.
Moreover, if γ=1, the bound can be refined to irr(ℳ_OG10, 2d^1)≤ C_ε· d^27/2+ε.
One uses Lemma <ref>, proceeding as in the proof of Theorem <ref>, in the case n=2 or γ≥ 3.
The only different step here is the estimate of |(D(Λ_h))|. We need to prove that for any ε>0, there exists C'_ε>0 such that
|(D(Λ_h))|<C'_ε· d^ε.
If γ=3, then this property follows from <cit.>*Theorem 3.1. If instead γ=1, then we have D(Λ_h)=D(ℤℓ)⊕ D(A_2(-1)) where the direct sum is with respect to the ℚ/ 2ℤ-valued quadratic form. If (ℓ,e_1,e_2) is a canonical basis of ℤℓ⊕ A_2(-1) , then D(Λ_h) is generated by ℓ/2d,2/3e_1+1/3e_2 modulo Λ_h.
An element A∈(D(Λ_h)) is determined by the images of the generators,
A( ℓ/2d) = a_11( ℓ/2d) + a_21( 2/3e_1+1/3e_2 ), A( 2/3e_1+1/3e_2 ) = a_12( ℓ/2d) + a_22( 2/3e_1+1/3e_2 )
hence it can be represented by a 2× 2 matrix A=(a_ij)_i,j=1,2 with a_1j∈{0,1,…,2d-1}, and a_2j∈{0,1,2}. The fact that A preserves the quadratic form implies that
A^t [ 1/2d 0; 0 2/3 ] A
≡ [ 1/2d 0; 0 2/3 ] (mod ℤ),
which can be written down explicitly as
1/2da_11^2+2/3a_21^2 ≡1/2d,
1/2da_11a_12+2/3a_21a_22≡ 0,
1/2da_12^2+2/3a_22^2 ≡ 2/3 (mod ℤ).
Suppose first that 3∤ d. Then multiplying the equations of (<ref>) by 6d we get
3· a_11^2+4d· a_21^2 ≡ 3,
3· a_11a_12+4d· a_21a_22≡ 0,
3· a_12^2+4d· a_22^2 ≡ 4d (mod 6d).
Looking at the last equations modulo d and assuming that 3 is coprime to d, we get
a_11^2≡ 1
and
a_11· a_12≡ 0 (mod d).
In particular, a_12≡ 0 modulo d, so it must belong to {0,d}. On the other hand, looking at the decomposition d=p_1^k_1⋯ p_s^k_s, where p_i's are distinct primes, by the Chinese remainder theorem, we see that the number of solutions to x^2≡ 1 modulo d is bounded by 2^ρ(d), where ρ(d) is the number of distinct prime factors of d. Hence, there are at most 2· 2^ρ(d) possible values for a_11 modulo 2d. Since a_21,a_22 can take at most three possible values, all these facts yield |(D(Λ))| ≤ 36· 2^ρ(d). If instead d=3r, we can multiply equations (<ref>) by 6r and use a similar reasoning. As a consequence, for any ε>0, we see that there exists C'_ε>0 such that |(D(Λ_h))| ≤ C'_ε· d^ε.
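The counting step above can be illustrated numerically (our own aside, not part of the proof): the Python sketch below verifies that for odd moduli the number of square roots of 1 modulo d is at most 2^ρ(d); we restrict the brute-force check to odd d, where the count is exactly 2^ρ(d). Function names are ours.

```python
def num_distinct_prime_factors(d):
    """rho(d): the number of distinct prime factors of d."""
    count, p = 0, 2
    while p * p <= d:
        if d % p == 0:
            count += 1
            while d % p == 0:
                d //= p
        p += 1
    return count + (1 if d > 1 else 0)

def square_roots_of_one(d):
    """Count the solutions of x^2 = 1 (mod d) by brute force."""
    return sum(1 for x in range(d) if (x * x) % d == 1)

# For odd d the Chinese remainder theorem gives exactly 2^rho(d) solutions,
# which is the counting bound used in the proof above.
for d in range(1, 200, 2):
    assert square_roots_of_one(d) <= 2 ** num_distinct_prime_factors(d)
print(square_roots_of_one(105), 2 ** num_distinct_prime_factors(105))  # 8 8
```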
§.§ Hyperkähler manifolds of OG6 type
Let X be a hyperkähler manifold of dimension 2n deformation equivalent to O'Grady's six-dimensional example <cit.>. Then the group H^2(X,ℤ) is isomorphic to the lattice Λ = Λ_ OG6:= U^⊕ 3⊕ A_1(-1)^⊕ 2, which is an even lattice of signature (3,5). In this case, we have that (cf. <cit.>)
Mon^2(Λ)
= ^+(Λ)
= ^+(Λ)
= ^+(Λ).
Let h ∈Λ be a primitive class with (h,h)=2d>0. Then its divisibility is either γ=1 or γ=2, and the lattice Λ_h has signature (2,5) and discriminant |disc(Λ_h)| = 8d/γ^2. In this case, we have
Mon^2(Λ,h)
= (Λ,h) ∩^+(Λ)
<cit.>.
Let {e,f} be a standard basis of a copy of U and let (v_1,v_2) be the canonical basis of A_1(-1)^⊕ 2. Then h and Λ_h have the following forms up to the action of Mon^2(Λ).
* If γ=1, then h=e+df and Λ_h = U^⊕ 2⊕⟨ e-df ⟩⊕ A_1(-1)^⊕ 2.
* If γ=2, then h=2(e+tf)-ω for some ω∈{v_1,v_2,v_1+v_2}. In this case, we have Λ_h = U^⊕ 2⊕ Q_h(-1) where
Q_h(-1) = ⟨ f-v_1,e-tf,v_2 ⟩, if ω=v_1,
Q_h(-1) = ⟨ f-v_2,e-tf,v_1 ⟩, if ω=v_2,
Q_h(-1) = ⟨ f-v_1,f-v_2,e-tf ⟩, if ω=v_1+v_2.
Using Eichler's criterion as in the proof of Lemma <ref>, we can write h in the forms h=e+df for γ=1 and h=2(e+tf)-ω with ω∈{v_1,v_2,v_1+v_2} for γ=2. Based on this, the lattice Λ_h can be computed explicitly.
We have the equality
Mon^2(Λ,h)
= ^+(Λ_h).
As a result, there exists a primitive embedding
Λ_h↪Λ_# = U^⊕ 2⊕ E_8(-1)
such that Mon^2(Λ,h) extends.
Proving the equality is the same as proving that
^+(Λ_h)
= (Λ,h)∩^+(Λ).
The inclusion ^+(Λ_h)⊂(Λ,h)∩^+(Λ) follows from Lemma <ref>. To prove the converse, let us first show that, if x∈Λ_h^∨, then x∈· h + Λ^∨; since Λ_h⊂Λ^∨, it suffices to check this for those x∈Λ_h^∨ whose classes generate D(Λ_h). Let us proceed case-by-case:
* Assume first that γ=1. Then in the notation of Lemma <ref> one can compute that
D(Λ_h) = ⟨1/2d(e-df), 1/2v_1,1/2v_2
⟩
Observe that 1/2v_1,1/2v_2∈Λ^∨ and 1/2d(e-df) = 1/2d(e+df)-f = 1/2dh-f with f∈Λ.
* Assume now that γ=2. Due to Lemma <ref>, we can assume h = 2(e+tf)-ω with ω∈{v_1,v_2,v_1+v_2}. If ω=v_1, then one can verify using the same lemma that
D(Λ_h) = D(Q_h(-1)) = ⟨1/2dh - 1/2v_1,
1/dh - f,
1/2v_2
⟩ = ⟨1/2dh - 1/2v_1,
1/2v_2
⟩
where the third equality comes from the relation 1/dh-f = 2(1/2dh-1/2v_1)- (f-v_1) and the fact that f-v_1∈ Q_h(-1). Observe that the generators are all contained in ℚ· h+Λ^∨. The case ω=v_2 can be solved in a similar way. If instead ω=v_1+v_2, then one can verify that
D(Λ_h) = D(Q_h(-1))
= ⟨1/2dh - 1/2v_1,
1/2dh - 1/2v_2,
1/dh - f
⟩
= ⟨1/2dh - 1/2v_1,
1/2dh - 1/2v_2
⟩.
Observe that these generators are contained in ℚ· h+Λ^∨.
This completes the proof for the inclusion Λ_h^∨⊂· h + Λ^∨.
Now, pick any g∈(Λ,h)∩^+(Λ) and x∈Λ_h^∨. Then x∈ℚ· h + Λ^∨, which allows us to write x=α· h + y with α∈ℚ and y∈Λ^∨. It follows that
g(x) - x
= g(α· h) + g(y) - α· h - y
= g(y) - y ∈Λ.
We also have
(g(x)-x, h)
= (g(x), h) - (x, h)
= (x, g^-1(h)) - (x, h)
= (x, h) - (x, h)
= 0.
These two relations imply that g(x) - x∈Λ_h^∨. Hence g∈^+(Λ_h). This finishes the proof of the equality
^+(Λ_h)
= (Λ,h)∩^+(Λ).
Now, observe that ℓ(Λ_h)≤ 3 from Lemma <ref>. This fact, together with <cit.>*Theorem 14.1.15, shows that there exists a primitive embedding Λ_h ↪Λ_#. Moreover, the above argument shows that Mon(Λ,h)=^+(Λ_h) and so it extends under this embedding. This proves the last statement.
Let ℳ_OG6, 2d^γ be the moduli space of hyperkähler manifolds of type OG6 and polarized by a primitive ample class of degree 2d and divisibility γ. By <cit.>*Propositions 3.4 and 3.6, the space ℳ_OG6, 2d^γ is irreducible. Moreover, the proof of Lemma <ref> shows that if the moduli space is non-empty, then either γ=1, or γ=2 and d≡ 2,3 4.
For any ε>0, there exists C_ε>0 independent of d and γ such that
irr(ℳ_OG6, 2d^γ)
≤ C_ε· d^6+ε.
The proof uses Lemma <ref> and is analogous to the proof of Theorem <ref>. The only difference is the estimate of |(D(Λ_h))|. Suppose first γ=1 or γ=2 and d≡ 3 4. We want to prove that for any ε>0 there exists C'_ε>0 such that |(D(Λ_h))| ≤ C'_ε· d^ε. In the case γ=1 we can use the description of D(Λ_h) given in the proof of Lemma <ref>, and then proceed as in the proof of Theorem <ref>. Suppose instead that γ = 2 and d ≡ 3 4. Then we can assume that h=2(e+tf)-ω as in Lemma <ref>, with ω=v_1 or ω=v_2. We can then use the description of the discriminant group given in the proof of Lemma <ref> and reason as in the proof of Theorem <ref>. In the last case of ω=v_1+v_2, we will prove that |(D(Λ_h))| ≤ C_ε· d^1+ε. The proof of Lemma <ref> shows that in this case
D(Λ_h) = ⟨1/2dh-1/2v_1,1/2dh-1/2v_2⟩ = ⟨1/2dh-1/2v_1,1/2v_2-1/2v_1⟩
Using this latter set of generators, we can proceed as in the proof of Theorem <ref> and compute that an element of (D(Λ_h)) corresponds to a 2× 2 matrix A=(a_ij) with a_1j∈{0,…,2d-1} and a_2j∈{0,1} subject to the orthogonality condition. In particular, the entries must satisfy
a_11^2 ≡ 0, a_11· a_12≡ 0 d,
and we conclude as in the proof of Theorem <ref>.
§ MODULI SPACES OF ABELIAN SURFACES AND K3 SURFACES
This section consists of two parts. First, we study the irrationality of moduli spaces 𝒜_(1,d) of (1,d)-polarized abelian surfaces. Then we revisit our study in <cit.> about the irrationality of moduli spaces _d of primitively polarized K3 surfaces.
§.§ Abelian surfaces of type (1, d)
Our strategy in bounding the degree of irrationality of 𝒜_(1,d) starts by realizing it as a double cover of an appropriate period space using the construction of Gritsenko and Hulek <cit.>. The construction will provide a bound for the degree of irrationality when d is squarefree. Our main task here is to reduce the general case to this one via a geometric argument by O'Grady <cit.>.
Let ℍ_2 be the Siegel upper-half space of 2× 2 symmetric complex matrices τ with positive definite imaginary part. Consider the usual action of the symplectic group Sp(4,ℚ) on ℍ_2 and the arithmetic group
Γ_1,d = {[ ℤ ℤ ℤ dℤ; dℤ ℤ dℤ dℤ; ℤ ℤ ℤ dℤ; ℤ 1/dℤ ℤ ℤ ]}∩Sp(4,ℚ).
Then we have 𝒜_(1,d) = ℍ_2/Γ_1,d. Consider the Euclidean lattice L=ℤe_1⊕…⊕ℤe_4 where (e_i,e_i) = 1 and (e_i,e_j) = 0 for i≠ j. Then the space of bivectors L∧ L forms a rank six lattice, where the pairing (x,y)∈ℤ is defined by requiring that
x∧ y
= (x,y)· (e_1∧ e_2∧ e_3∧ e_4)
∈ ⋀^4L.
Fix w_d:= e_1∧ e_3+de_2∧ e_4∈ L∧ L and consider the integral paramodular group of level d
Γ̃_1,d = { g∈GL(L) | (∧^2 g)(w_d)=w_d }.
Then the orthogonal complement Λ_d = (w_d)^⊥⊂ L∧ L has the form
Λ_d≅ U^⊕ 2⊕ℤℓ where
(ℓ,ℓ) = -2d.
The group Γ̃_1,d acts naturally on the lattice Λ_d via the group homomorphism
Γ̃_1,d⟶ O(Λ_d),
g ⟼ (∧^2 g)|_Λ_d.
It turns out that
Γ̃_1,d
= 𝕀_d^-1Γ_1,d𝕀_d
where 𝕀_d = diag(1,1,1,d). In particular, the group Γ_1,d acts on Λ_d via the homomorphism
Γ_1,d⟶ O(Λ_d),
g⟼∧^2(𝕀_d g 𝕀_d^-1).
It is proved in <cit.>*Lemma 1.1 that the image of this map lies inside Õ^+(Λ_d). In the case that d is squarefree, one of the key results in <cit.> asserts the existence of an index two extension
Γ_1,d⊂Γ_1,d^+
together with a map Γ_1,d^+⟶Õ^+(Λ_d)
which induces a birational morphism from 𝒜_(1,d)^+ := ℍ_2/Γ_1,d^+ to the quotient Ω(Λ_d)/Õ^+(Λ_d). This implies the following statement.
Assume that d is squarefree. Then there exists a dominant morphism of degree two
𝒜_(1,d)⟶ 𝒫̃^+_Λ_d
= Ω(Λ_d)/Õ^+(Λ_d).
This allows us to bound the degree of irrationality of 𝒜_(1,d) using 𝒫̃^+_Λ_d.
Suppose that d is squarefree. Then, for every ε>0, there exists a constant C_ε>0 independent of d such that
irr(𝒜_(1,d))
≤ C_ε· d^4+ε.
From Proposition <ref>, we see that
irr(𝒜_(1,d))
≤ 2· irr(𝒫̃^+_Λ_d).
Let us bound the latter using Lemma <ref>. Recall that Λ_d ≅ U^⊕ 2⊕(-2d). Since d is squarefree, it is not divisible by 8, and thus can be written as a sum of four coprime squares d=∑_i=1^4 a_i^2. This induces a primitive embedding of Λ_d into Λ_# = U^⊕ 2⊕ A_1(-1)^⊕ 4. As ^+(Λ_d) is always extendable, Lemma <ref> shows that there exists a constant C>0 such that
irr(𝒫̃^+_Λ_d)
≤ C · |(D(Λ_d))| · |disc(Λ_d)|^4
= 2^4· C · |(D(Λ_d))| · d^4.
Because D(Λ_d)≅ℤ/2dℤ, we see that for every ε>0, there exists a constant C'_ε>0 such that |(D(Λ_d))| ≤ C'_ε· d^ε. Putting all the inequalities together yields the desired bound.
In order to extend this result to the general case, let us recall a geometric construction due to O'Grady <cit.>. Let (A,L) ∈𝒜_(1,n^2k) be an abelian surface with a polarization of type (1,n^2k). Then the natural map ϕ_L: A → A^∨ = Pic^0(A) satisfies Kerϕ_L ≅(ℤ/n^2kℤ)^2, and the subgroup of n-torsion points J := Kerϕ_L[n] is isomorphic to (ℤ/nℤ)^2. Now consider the quotient f: A → B=A/J. Then B has a natural polarization M of type (1,k) which satisfies f^*M=L. This yields a map
𝒜_(1,n^2k)⟶𝒜_(1,k).
This map has degree at most n^8.
Let ϕ_M: B ⟶ B^∨ be the isogeny induced by M. As in the proof of <cit.>*Proposition 5.1, the image H = f(Kerϕ_L) is a subgroup of B[n] isomorphic to (ℤ/nℤ)^2 such that ϕ_M(H) equals the kernel of f^*: B^∨→ A^∨. In particular, since f: A → B is the dual of f^*: B^∨→ A^∨ = B^∨/ϕ_M(H), we see that we can reconstruct A from ϕ_M and H. This shows that the cardinality of the fiber of (<ref>) over (B,M) is bounded by the number of subgroups H ⊂ B[n] isomorphic to (ℤ/nℤ)^2. Since B[n]≅(ℤ/nℤ)^4, this number is bounded by n^8.
With this we can get a universal bound.
For any ε>0, there exists a constant C_ε>0 independent of d such that
irr(𝒜_(1,d))
≤ C_ε· d^4+ε.
Factor d into primes as d=p_1^2h_1+r_1⋯ p_s^2h_s+r_s where h_i≥ 0 and r_i∈{0,1}. Applying Lemma <ref> repeatedly, we get a map
𝒜_(1,d)⟶𝒜_(1,p_1^r_1… p_s^r_s)
whose degree is at most p_1^8h_1… p_s^8h_s.
Since p_1^r_1… p_s^r_s is squarefree, Proposition <ref> gives for each ε>0 a constant C_ε>0 such that
irr(𝒜_(1,p_1^r_1… p_s^r_s))
≤ C_ε· p_1^4r_1+ε r_1⋯ p_s^4r_s+ε r_s.
Hence, we see that
irr(𝒜_(1,d))
≤ C_ε· p_1^8h_1+4r_1+ε r_1⋯ p_s^8h_s+4r_s+ε r_s≤ C_ε· d^4+ε.
This completes the proof.
One can improve the above bound for infinitely many series of d as in the case of K3 surfaces treated in <cit.>. Here we consider quadratic series in d. Consider positive integers a,b,c∈ℤ such that b^2-4ac<0. This defines a negative definite rank two lattice
Q(-1) := [ -2a b; b -2c ].
Fix a,b,c∈ℤ which satisfy b^2-4ac < 0. Suppose that d is squarefree and has the form d = aX^2-bXY+cY^2 with (X,Y) = 1. Then for any ε>0, there exists a constant C_ε = C_ε(a,b,c) > 0 such that
irr(𝒜_(1,d))
≤ C_ε· d^2+ε.
The assumption on d implies that there exists a primitive embedding of ℤ(-2d) into Q(-1) and consequently a primitive embedding of U^⊕ 2⊕ℤ(-2d) into U^⊕ 2⊕ Q(-1). Then the proof proceeds as in Proposition <ref>.
§.§ Revisiting the case of K3 surfaces
Let ℱ_2d be the moduli space of K3 surfaces with a primitive polarization of degree 2d. Recall that ℱ_2d is birational to
𝒫^+_Λ_d
= Ω(Λ_d)/Õ^+(Λ_d)
where Λ_d is the lattice
Λ_d :=
U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ℤℓ,
(ℓ,ℓ) = -2d.
Let Q(-1) be a negative definite lattice. Then every primitive embedding of ℤ(-2d) into Q(-1) induces a primitive embedding of Λ_d into U^⊕ 2⊕ E_8(-1)^⊕ 2⊕ Q(-1). Since the stable orthogonal group Õ^+(Λ_d) is extendable with respect to such an embedding, we can use Lemma <ref> to bound irr(ℱ_2d). The general bound <cit.>*Theorem 1.1 is obtained by taking Q(-1)=E_8(-1). The bounds <cit.>*Theorem 1.2 are obtained by choosing Q(-1)=A_2(-1) in the case of associated special cubic fourfolds, Q(-1)=A_1(-1)^⊕ 2 in the case of associated special Gushel–Mukai fourfolds, and
Q(-1) = [ -2 0; 0 -2n ] or [ -2 -1; -1 -n+1/2 ]
in the case of associated special hyperkähler fourfolds. We can get bounds for various series of d by choosing suitable Q(-1).
For any ε>0 there exists a constant C_ε>0 such that for any d not divisible by 8 one has
irr(ℱ_2d)
≤ C_ε· d^12+ε.
If d is not congruent to 0, 4, 7 modulo 8, then
irr(ℱ_2d)
≤ C_ε· d^23/2+ε.
If a,b,c are positive integers such that b^2-4ac<0, then
irr(ℱ_2d)≤ C_ε· d^10+ε
for all d of the form d=aX^2-bXY+cY^2 with X and Y coprime.
Since d is not divisible by 8, it can be written as a sum of four coprime squares d = ∑_i=1^4a_i^2. This induces a primitive embedding of ℤ(-2d) into A_1(-1)^⊕ 4
and one can get from Lemma <ref> the first bound. For the second bound, it follows from <cit.>*Korollar 1 (see also <cit.>*Section 2) that if d is large enough and d≢0,4,7 (mod 8), then it can be expressed as the sum of three coprime squares. In particular, there exists a primitive embedding ℤℓ↪ A_1(-1)^⊕ 3 which leads to the second bound using Lemma <ref>. In the last case, we have a primitive embedding of ℤ(-2d) into the lattice Q(-1) defined by (<ref>), which yields the desired bound from Lemma <ref>.
When a=b=1, the condition of being able to write d in the form X^2-XY+Y^2 with (X,Y)=1 is equivalent to the condition 2d≡ 0,2 (mod 6) and not divisible by 4, 9 or any odd prime p≡ 2 (mod 3). This is the standard necessary and sufficient condition for a polarized K3 surface of degree 2d to admit an associated labelled special cubic fourfold; see <cit.> or <cit.>*Theorem 23.
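The representability condition is also easy to probe directly; the following brute-force sketch (ours, purely for illustration) lists the small d which can be written as X^2-XY+Y^2 with (X,Y)=1, and may be compared against the congruence conditions above:

```python
from math import gcd, isqrt

def primitively_represented(d):
    """Test whether d = X^2 - X*Y + Y^2 for some coprime integers X, Y."""
    # x^2 - x*y + y^2 >= (x^2 + y^2)/2, so it suffices to take |X|, |Y| <= sqrt(2d)
    b = isqrt(2 * d) + 1
    return any(X * X - X * Y + Y * Y == d and gcd(X, Y) == 1
               for X in range(-b, b + 1) for Y in range(-b, b + 1))

print([d for d in range(1, 40) if primitively_represented(d)])
# -> [1, 3, 7, 13, 19, 21, 31, 37, 39]
```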
Department of Mathematics, Purdue University,
150 N. University Street, West Lafayette, IN 47907, U.S.A.
[email protected]
Institut des Hautes Études Scientifiques, Université Paris-Saclay, CNRS, Laboratoire Alexandre Grothendieck, Le Bois-Marie 35 rte de Chartres, 91440 Bures-sur-Yvette, France
[email protected]
KVS was supported by the CARMIN project fellowship.
In this article, we characterize the image of the Brylinski-Radon transform in characteristic p>0 via Beilinson's theory of singular supports. We also provide an alternate proof of Brylinski's results over ℂ, which also works for sheaves with finite coefficients. Along the way, we also obtain a microlocal criterion for the descent of perverse sheaves which could be of independent interest.
Brylinski-Radon transformation in characteristic p>0
Deepam Patel and K. V. Shuddhodan
August 12, 2023
====================================================
§ INTRODUCTION
In <cit.>, Brylinski introduced topological (and geometric) versions of the classical Radon transforms and proved some fundamental properties for these transforms. The theory has had numerous applications including to Lefschetz theory <cit.>, <cit.>. More recently the Radon transform was crucially used by Beilinson <cit.> and Saito <cit.> respectively in the construction of singular support and characteristic cycle for constructible sheaves in the algebraic setting over arbitrary perfect fields. The main result of this article is to use the theory of singular supports to characterize the image of the Radon transform, generalizing the work of Brylinski to arbitrary base fields (and finite coefficients). In particular, we answer a question raised by Brylinski <cit.>.
§.§ Summary of results:
§.§.§ Singular supports of étale sheaves:
Let k be an algebraically closed field of char. p ≥ 0, ℓ≠ p a fixed prime, Λ = ℤ/ℓ^n, and ^b_ctf(X,Λ) denote the derived category of bounded constructible étale sheaves of Λ-modules with finite tor-dimension on X. In the rest of the article, we denote this category simply by ^b_c(X). Given K ∈^b_c(X), K(n) will denote the usual Tate twist of K. If X is smooth and K ∈^b_c(X), then Beilinson <cit.> defined the singular support SS(K) ⊂ T^*X (see <ref> for a brief summary about singular supports). This is a closed conical subset of T^*X, and for K ≠ 0, SS(K) is equidimensional of dimension equal to dim(X). Moreover when char(k)=0, SS(K) is Lagrangian[A closed conical subset of T^*X is said to be Lagrangian if the smooth locus of the closed subset is both isotropic and involutive with respect to the natural symplectic structure on T^*X.] <cit.>. However this fails in positive characteristic <cit.>.
§.§.§ Main Result:
Let Perv(X) ⊂^b_c(X) denote the abelian category of perverse sheaves (w.r.t middle perversity). In the following, given an object K ∈^b_c(X), let ^pH^i(K) ∈ Perv(X) denote the i-th perverse cohomology sheaf. If X is smooth[In this article, varieties smooth over k shall be assumed to be connected.] over k of dimension n, let Loc(X) ⊂ Perv(X) denote the full Serre subcategory of locally constant perverse sheaves (i.e. complexes of the form ℒ[n] where ℒ is a locally constant constructible sheaf on X), and Perv(X)_T the corresponding quotient category. One can realize Perv(X)_T as the heart of the induced perverse t-structure on a localized triangulated category ^b_c(X)_T obtained by localizing ^b_c(X) along the multiplicative set of morphisms f such that ker(^pH^i(f)) and coker(^pH^i(f)) are locally constant perverse sheaves for all i. As above, let ^pH_T^i(K) ∈ Perv(X)_T denote the i-th `perverse cohomology' sheaf of the image of K in ^b_c(X)_T.
We now recall the Brylinski-Radon transform. Let ℙ denote projective space of dimension n ≥ 2 over k, and Y := 𝔾(d) denote the Grassmannian of d-planes (where 1 ≤ d ≤ n-1) in ℙ. Consider the incidence correspondence Q ⊂ℙ× Y. The Brylinski-Radon transform is defined as follows. Consider the diagram:
Q [dr]^p_2[dl]_p_1
ℙ Y,
where p_i are the natural projections.
Given K ∈^b_c(ℙ), let ℛ(K):= p_2,*p_1^†K ∈^b_c(Y)[Given a smooth morphism f:X → S of relative dimension d with geometrically connected fibers, we set f^†:=f^*[d]. Also, by f^†Perv(S), we mean the full subcategory of Perv(X) consisting of perverse sheaves of the form f^†M <cit.>, where M is a perverse sheaf on S.]. Similarly, we set ℛ^∨(K) := p_1,*p_2^†(K).
Let C ⊂ T^*ℙ be a closed conical subset. The Brylinski-Radon transform of C is defined to be (p_2)_∘(p_1)^∘(C) (see Section <ref> for the notation). A closed conical subset of T^*Y is said to be in the image of the Brylinski-Radon transform if it is contained in the Brylinski-Radon transform of a closed conical subset of T^*ℙ.
It follows from <cit.> that perverse sheaves whose singular support is in the image of the Brylinski-Radon transform form an abelian subcategory of Perv(Y) (denoted below by Perv(Y)_Rad) which is stable under extensions. Let Perv(Y)_T,Rad be the full abelian subcategory of Perv(Y)_T consisting of objects which are images of objects of Perv(Y)_Rad. It is easy to see that both ℛ and ℛ^∨ naturally induce functors between ^b_c(ℙ)_T and ^b_c(Y)_T. We are now ready to state the main result of this article.
With notation as above:
* ℛ is t-exact for the perverse t-structures on ^b_c(ℙ)_T and ^b_c(Y)_T.
* The functor ^pH_T^d(n-d-1)∘ℛ^∨(d(n-d)) ∘^pH_T^0∘ℛ is naturally equivalent to the identity functor on Perv(ℙ)_T.
* The functor ^pH^0_T ∘ℛ induces an equivalence of categories between Perv(ℙ)_T and Perv(Y)_T,Rad.
§.§ Comparison with previous work
* If k = ℂ, and one considers constructible sheaves in the classical topology with characteristic zero coefficients, then this is one of the main results of Brylinski <cit.>. The problem of a characteristic p analogue of Brylinski's theorem was already posed as a question by Brylinski <cit.>. The results of this article answer this question in the affirmative, albeit with appropriate modifications to account for wild ramification.
* If char(k)=0, <cit.> and <cit.> imply that one can alternatively describe Perv(Y)_Rad as those perverse sheaves whose singular support is contained in (p_2)_∘(p_1)^∘ T^*ℙ. In particular, the statement of Theorem <ref> is consistent with the analogous statement proved by Brylinski over the complex numbers.
* If d = n-1, then the aforementioned theorem gives an equivalence of categories between Perv(ℙ)_T and Perv(ℙ^∨)_T. In this setting, Brylinski <cit.> also proves the result over an algebraic closure of a finite field as an application of the Deligne-Fourier transform in characteristic p>0.
§.§ Idea of the proof
In this section we briefly describe the ideas underlying the proof of Theorem <ref>.
§.§.§ Proof of Theorem <ref>, (1):
The proof is an easy application of Artin vanishing and is along the lines of the proof in <cit.>, where the case d=n-1 is handled. The proof in <cit.> is in comparison microlocal in nature and does not carry over when the coefficients are finite.
§.§.§ Proof of Theorem <ref>, (2):
The essential point here is to understand the pushforward of the constant sheaf along the map Q ×_Y Q →ℙ×ℙ. This map is smooth outside the diagonal Δ⊂ℙ×ℙ; however, the fibers of the map are Grassmannians. This allows us to compute ℛ^∨∘ℛ in the localized category ^b_c(ℙ)_T (see (<ref>)) and deduce Theorem <ref>, (2). We do so without recourse to the decomposition theorem, which is technically important for us since we allow finite coefficients. As a corollary of the proof we are able to show that ℛ^0 is fully faithful and induces an isomorphism on Ext^1 (see Corollary <ref>).
§.§.§ Proof of Theorem <ref>, (3):
The proof of Theorem <ref>, (3) constitutes the technical heart of the paper. The first step is to prove a microlocal criterion for the descent of perverse sheaves. More precisely we prove the following statement which generalizes a result of Laumon[We thank Ahmed Abbes for pointing out the connection of our result with Laumon's work.] <cit.>. Let k be a perfect field and S/k a smooth variety. Let f: X → S be a proper and smooth morphism with geometrically connected and simply connected fibers.
Then a non-zero perverse sheaf L on X is of the form f^†M iff SS(L) ⊂ f^∘Λ, for a closed conical subset Λ⊂ T^*S of dimension equal to dim(S). Moreover when char(k)=0, it suffices to assume that SS(L) ⊂ f^∘T^*S.
Using this descent criterion and an inductive argument (see Proposition <ref>) we are able to show that simple objects in Perv(Y)_Rad are in the image of the Radon transform. The inductive nature of our method naturally leads us to consider relative versions of Brylinski-Radon transforms and we develop the necessary background in Section <ref>. The base case (i.e. n-d=1) for the induction follows from the work of Laumon <cit.>. Finally, using the isomorphism on Ext^1 (Corollary <ref>) we deduce Theorem <ref>, (3).
We would like to note that our proof of Theorem <ref> also applies to ℓ-adic étale sheaves using the notion of singular support for ℓ-adic sheaves as described in <cit.>. It also works when k=ℂ and one considers algebraically constructible sheaves in the analytic topology with Kashiwara-Schapira's <cit.> definition of singular supports.
Acknowledgements:
We would like to thank Ahmed Abbes for his interest and encouragement during the course of this project. KVS would like to thank Ofer Gabber and Ankit Rai for useful conversations. In particular, he is thankful to Ofer Gabber for presenting a counterexample to an optimistic form of Corollary <ref>, ultimately resulting in the formulation of Proposition <ref>. KVS would also like to thank Hiroki Kato for patiently answering his questions about sensitivity of vanishing cycles to test functions in positive characteristics.
§ BACKGROUND AND SOME PRELIMINARY OBSERVATIONS
§.§ Recollection of singular support
Let X be a smooth variety over a perfect base field k. Let C ⊂^*X denote a closed conical subset, and h: U → X a morphism with U smooth. Then h is said to be C-transversal if for all geometric points u of U,
ker(dh_u) ∩ C_h(u)∖{0} = ∅.
Note C-transversality implies that dh|_C ×_X U is finite and Beilinson defines h^∘(C) to be its image in
^*U, also a closed conical subset <cit.>. In particular, h^∘ always makes sense when h is a smooth morphism (since such morphisms are automatically C-transversal for any C). This will be the only relevant case for us. Similarly, for any closed conical subset C ⊂^*U whose base is proper over X, Beilinson defines h_∘(C) to be the image of dh^-1(C) under the natural projection ^*X ×_X U →^*X. This is a closed conical subset of ^*X.
For any sheaf K ∈^b_c(X), Beilinson defines the singular support SS(K) ⊂^*(X). We recall some properties of SS(K) which will be used in the following.
* For K ≠ 0, SS(K) is a equidimensional closed conical subset of ^*(X) of dimension equal to (X) <cit.> .
* Given an SS(K)-transversal morphism h: U → X, SS(h^*K) ⊂ h^∘(SS(K)) <cit.>. Moreover, one has equality if h is a smooth morphism <cit.>.
* Suppose f X → Y is a proper morphism of smooth varieties, then for any sheaf K on X, SS(f_*K) ⊂ f_0(SS(K)) <cit.>.
* SS(K) is the zero section (denoted below by 0_T^*X) iff ℋ^i(K) are locally constant for all i and at least one of them is non-zero <cit.>.
* For any sheaf K one has SS(K)=⋃_αSS(K_α), where K_α runs over the various Jordan-Holder components of ^i(K) for every i <cit.>.
We record the following standard lemma for use below.
Let X Y Z be smooth proper morphisms of smooth varieties over an algebraically closed field k.
* Given a conic Λ⊂^*X, (g_∘∘ f_∘)(Λ) = (g ∘ f)_∘(Λ).
* Given a conic Λ⊂^*Z, (f^∘∘ g^∘)(Λ) = (g ∘ f)^∘(Λ).
* Given a conic Λ⊂^*Y, (f_∘∘ f^∘)(Λ) = Λ.
* Consider a commutative square:
X [r]^f[d]^g Y [d]^g'
Z [r]^f' W
where all morphisms and varieties are smooth proper. Then, given Λ⊂^*Z, one has
((g')^∘∘ f'_∘)(Λ) =(f_∘∘ g^∘)(Λ)
The first three parts of the lemma are immediate from the definition. Using (3) we can reduce (4) to the case when the diagram is cartesian in which case the lemma is clear.
§.§ Relative Brylinski-Radon transform
In what follows, we shall fix a base scheme S which is assumed to be smooth over an algebraically closed field k.
Let be a vector bundle over S of rank n+1 ≥ 2. Let 0 ≤ d ≤ n-1 be an integer. We denote by (d,) the Grassmannian bundle parametrizing locally free quotients of ^∨ of rank d+1. In particular, given an S-scheme π: T → S, (d,)(T) consists of equivalence classes of quotients π^*^∨→ where is locally free of rank d+1. We denote by π_d, the canonical morphism from (d,) to S. It is a proper and smooth morphism of relative dimension (d+1)(n-d).
Note that we may identify (d,) with (n-d-1,^∨) by passing to duals.
Below, when working over S = Spec(k) (where k is algebraically closed), we denote by (d,n)[We use the convention that (d,n)=∅ if d is negative.] the Grassmannian of (d+1)-planes in V = k^n+1. We shall also sometimes identify the latter with the d-planes in ℙ^n.
The following decomposition theorem is well-known, and is recorded here for future use.
For any K ∈^b_c(S), there exists a functorial (in K) isomorphism
⊕_i=0^n K⊗ R^2iπ_d,Λ[-2i](i) ≃π_d,*π_d,^*K
Using the projection formula we may assume that K=Λ. In this case the result is a consequence of proper base change and <cit.> owing to the cohomology of Grassmannian satisfying hard Lefschetz (even with torsion coefficients).
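Concretely, the ranks of the local systems R^2iπ_d,*Λ appearing above are the Betti numbers of the Grassmannian fibre, which are encoded by a Gaussian binomial coefficient. A small computational sketch (Python with sympy; the function names are ours and purely illustrative):

```python
import sympy as sp

q = sp.symbols('q')

def gauss_binom(n, k):
    """Gaussian binomial coefficient [n choose k]_q."""
    num, den = sp.Integer(1), sp.Integer(1)
    for i in range(k):
        num *= 1 - q**(n - i)
        den *= 1 - q**(i + 1)
    return sp.cancel(num / den)

def grassmannian_betti(d, n):
    """Ranks of H^{2i} of the Grassmannian (d,n) of d-planes in P^n,
    read off from its Poincare polynomial in the variable q = t^2."""
    return sp.Poly(gauss_binom(n + 1, d + 1), q).all_coeffs()[::-1]

# e.g. (1,3), the lines in P^3: ranks 1, 1, 2, 1, 1 in degrees 0, 2, 4, 6, 8
print(grassmannian_betti(1, 3))
```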
§.§.§ The incidence correspondence as a Grassmannian bundle:
Given a pair of integers 0 ≤ d_1 < d_2 ≤ n-1, we denote by Q_d_1,d_2,⊂(d_1,) ×_S (d_2,) the incidence correspondence. More precisely, given a test scheme T as above, recall that an element of (d_1,) ×_S (d_2,)(T) is given by a tuple (upto equivalence) (π^*^∨→_1,π^*^∨→_2) where _i is a rank d_i +1 quotient. With this notation, Q_d_1,d_2,(T) consists of tuples such that there is a surjection _2 →_1 compatible with the maps π^*^∨→_i. Note that if such a surjection exists, it is unique. Moreover, this is a closed subscheme of (d_1,) ×_S (d_2,).
Let 0 →_n-d,^∨→π^*_d,^∨→_d+1,^∨→ 0 denote the universal exact sequence on (d,). Here
_n-d,^∨ (resp. _d+1,^∨) is the universal sub-bundle of rank n-d (resp. quotient of rank d+1). With this notation, one can identify Q_d_1,d_2,(T) as the rank n-d_2 quotients of π_T^*(_n-d_1,^∨^∨), and in particular, we may view Q_d_1,d_2,→(d_1,) as the Grasmmannian bundle (n-d_2-1,_n-d_1,^∨^∨). By the aforementioned remark, we may also view this as the Grassmannian bundle (d_2-d_1-1,_n-d_1,^∨). In a similar manner, we may view the incidence correspondence as a Grassmannian bundle (d_1, _d_2+1,^∨^∨) over (d_2,).
We denote by p_d_1,d_2, (resp. p^∨_d_1,d_2,, resp. π_d_1,d_2,) the induced map from Q_d_1,d_2, to (d_1,) (resp. (d_2,), resp. S). As noted above p_d_1,d_2, (resp. p^∨_d_1,d_2,) is a Grassmannian bundle parametrizing locally free quotients of rank d_2-d_1 (resp. of rank d_1+1) of a vector bundle of rank n-d_1 (resp. of rank d_2+1) on (d_1,) (resp. (d_2,)). Thus p_d_1,d_2, (resp. p^∨_d_1,d_2,) is proper and smooth of relative dimension
(d_2-d_1)(n-d_2) (resp. (d_1+1)(d_2-d_1)).
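For the correspondence Q_0,d, used below (a quick consistency check, not taken from the source), the relative dimension of p_0,d, is d(n-d) and that of p^∨_0,d, is d, so that
dim_S Q_0,d, = n + d(n-d) = (d+1)(n-d) + d,
in agreement with the two Grassmannian-bundle descriptions above; the difference of the two relative dimensions is d(n-d)-d = d(n-d-1), which is exactly the shift appearing later in the adjunctions for the Brylinski-Radon transform.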
§.§.§ Brylinski-Radon transform
We define functors _d_1,d_2,:^b_c((d_1,)) →^b_c((d_2,)) and _d_1,d_2,:^b_c((d_2,)) →^b_c((d_1,)) as follows,
_d_1,d_2,(K):=p^∨_d_1,d_2,*p^†_d_1,d_2,K
and
_d_1,d_2,(L):=p_d_1,d_2,*p^∨†_d_1,d_2,L.
Finally, we make explicit a condition on closed conical subsets of ^*(d,) (resp. ^*Q_d_1,d_2,) which will be important in the following[See Example <ref> for a motivation to consider the condition (∗).].
We will say that a closed conical subset C ⊂^*(d,) (resp. ^*Q_d_1,d_2,) is regular over S (or just regular if S is clear from context) if the following condition is satisfied:
* Every irreducible component Λ of C contained in π_d,^∘^*S (resp. π_d_1,d_2,^∘^*S) is of the form π_d,^∘Λ' (resp. π_d_1,d_2,^∘Λ') for an irreducible closed conical subset Λ' ⊂^*S.
Note that condition (∗) above is trivially satisfied when S=(k) and C is of pure dimension of dimension equal to dim((d,)) (resp. dim(Q_d_1,d_2,)).
Let C⊂^*() be a closed conical subset. We denote by
Rad_0,d,(C):=(p^∨_0,d,)_∘(p_0,d,)^∘(C),
the Radon transform of C with respect to R_0,d,. This is a closed conical subset of ^*(d,).
Let q_0,d, and q_0,d,^∨ denote the morphism from ^*_Q_0,d,(()×_S(d,)) to ^*() and ^*(d,) respectively. We need the following, which is the relative version of <cit.>, and follows from it.
Let q̇_0,d, and q̇_0,d,^∨ respectively be the induced morphisms from ^*_Q_0,d,(()×_S(d,)) \π_0,d,^∘^*S to ^*() \π_0,^∘^*S and ^*(d,) \π_d,^∘^*S. Then
* q̇_0,d, is smooth and proper of relative dimension d(n-d-1).
* q̇_0,d,^∨ is a closed immersion.
As a consequence we have the following.
Let C ⊂^*Q_0,d, be a closed conical subset. Suppose C=p_0,d,^∘C_1=p_0,d,^∨∘C_2 for closed conical subsets C_1 and C_2 in ^*() and ^*(d,) respectively. Then C ⊂π_0,d,^∘^*S.
Note that by Remark <ref> the above corollary is also true for correspondences between (d,) and (n-1,) with d<n-1.
Let M be perverse sheaf on Q_d,n-1, (with d<n-1) that belongs to both p_d,n-1^†((d,)) and p_d,n-1^∨†((n-1,)). Then SS(M)⊂π_Q_d,n-1,^∘^*S.
The corollary is an immediate consequence of the remark above and Section <ref>, (2).
We also note the following corollary.
Let C ⊂^*() be a closed conical subset regular over S. Then Rad_0,d,(C) is also regular over S.
Let Λ⊂ C be an irreducible component of the form π_0,^∘Λ'. Then Lemma <ref> implies that Rad_0,d,(Λ)=π_d,^∘Λ'. On the other hand if Λ is not contained in π_0,^∘^*S, then Lemma <ref> implies that Rad_0,d,(Λ) is an irreducible component of Rad_0,d,(C) and is not contained in π_d,^∘^*S.
§ PROOF OF THEOREM 1: PRELIMINARY RESULTS
In this section, we collect some results which will be used in the following for the proof of part (3) of Theorem <ref>.
§.§ A criterion for descent of perverse sheaves.
As before, let k be an algebraically closed field, S/k be a smooth variety and let f X → S be a smooth morphism whose fibres are connected of dimension d.
In general, it is hard to characterise the subcategory f^†(S) of (X). If, in addition to the above assumptions, f is proper and the fibres of f are simply connected, then we have the following descent criterion.
[Our proof also works when k is only assumed to be perfect, provided f is geometrically connected.]
A (non-zero) simple perverse sheaf K ∈(X) is in the essential image of f^† iff SS(K) ⊆ f^∘Λ, for some closed conical subset Λ⊂ T^*S of dimension equal to dim(S). Moreover, when char(k)=0 it suffices to assume that SS(K) ⊂ f^∘T^*S.
Since f is smooth, the necessity results from the preservation of singular supports under pullback (see Section <ref>, (2)). Suppose now that K is a (non-zero) simple perverse sheaf on X such that as in SS(K) ⊆ f^∘Λ, with Λ as in the proposition. Since K is simple, there exists a triple (X',U,) consisting of an irreducible closed subset X' i X, a non-empty smooth (over k) open subset U j X' and a non-zero irreducible local system on U such that K=i_*j_!*[dim(X')] <cit.>. Note that f^∘ preserves irreducible components since f is smooth. As a consequence, by removing any extra components (if necessary), we may assume that SS(K)=f^0Λ.
Claim 1: It is sufficient to prove the theorem after replacing S by an open dense subset S' j' S, X by X_S' := X ×_S S', and K by K|_X_S' provided K|_X_S' is non-zero.
Proof: Let j”: X_S'↪ X denote the resulting open immersion. First note that the resulting map f':X_S'→ S' satisfies the hypotheses of the theorem, and SS(K|_X_S') = SS(K)|_X_S' = (f')^∘(Λ|_S') (Section <ref>, (2)). If M is a simple perverse sheaf on S' such that (f')^†M = K|_X_S', then f^†j'_!*(M) = j”_!*((f')^†M) = j”_!*(K|_X_S') = K. Here the first equality follows from the fact that intermediate extensions commute with pull back along smooth morphisms <cit.>, and the last follows from the fact that K is a simple perverse sheaf.
Claim 2: We may assume that the base S' of Λ is smooth, X' = X ×_S S', and SS(K) = f^∘(Λ).
Proof: Let S' be the base of Λ. Since the base of SS(K) equals the support of K <cit.> we have X'=f^-1(S'). Let Z denote the singluar locus of S'. Since k is algebraically closed, this is a strict closed subset of S'. In particular, S ∖ Z is open, and by the previous claim, we may base change everything to S ∖ Z.
Claim 3: Let Λ' be an irreducible component of f^∘Λ which is not equal to ^*_X'X, the conormal bundle of X' in X. Then the base of Λ' does not dominate S'. In particular, the union of the bases of the components of SS(K) not equal to ^*_X'X (denoted by X” below) cannot dominate S' under f.
Proof: Let Z ⊂ X' be the base of Λ'. We claim that Z does not dominate S' under f. First note that, if Z ≠ X', then it does not dominate S'. We're reduced to showing that if Z = X', then Λ' = ^*_X'X. Since X' is smooth and K=i_*j_!*[dim(X')], SS(K)=i_0SS(j_!*[dim(X')]) (Combine <cit.> and <cit.>). Note that i_0 preserves bases of irreducible components, and there exists a unique component of SS(j_!*[dim(X')]) whose base equals X' (namely the zero section). It follows that there is a unique component of SS(K) whose base is X' (namely ^*_X'X).
Note that X”=f^-1(f(X”)) ⊊ X. Let U'=X' \ X”, then f|_U' U' → S' \ f(X”) is a proper morphism with connected and simply connected fibres. Thus by <cit.> there exists a local system on S' \ f(X”) such that f|_U'^*=. Thus by uniqueness K=f^*(i_S'*j_U'!*([dim(S)]), here i_S' (resp. j_U') are the immersions from S' (resp. U') into S (resp. S').
Now suppose char(k)=0, then every irreducible component (say Λ̃) of SS(K) is Lagrangian <cit.> and further the smooth locus of Λ̃ is the conormal to the smooth locus in the intersection of Λ̃ with the zero section of ^*X (<cit.>, Exercise in Section 1.3). Such a component Λ̃ is in f^0^*S iff it is the inverse image of a closed conical subset of ^*S.
It follows from the proof of Proposition <ref> that even in positive characteristic, as long as the components of the singular support are conormals (and not just Lagrangians!), the apparently weaker assumption SS(K) ⊂ f^∘T^*S suffices.
While the following corollary will not be used in what follows, we record it here since it may be of independent interest.
Let f X → S and K be as in Proposition <ref>. Then K is lisse iff ^df_*K is lisse.
We continue using the notation from Proposition <ref>. We record below an example which shows that if char(k)>0, it is in general not sufficient to assume SS(K) ⊂ f^0T^*S.
Let k be a perfect field of characteristic p>0. Let S=𝔸^1_s, X=𝔸^1_s ×ℙ^1_[t:t'][We use subscripts to denote a choice of a coordinate system], and f: X → S the projection map. Let X̃:=Z(t'^p(x^p^2-x)-(s+x^p)t^p) ⊆𝔸^1_x×𝔸^1_s ×ℙ^1_[t:t'] and denote by π: X̃→ X the induced map. We denote by X̃_t ≠ 0 (resp. X_t ≠ 0) and X̃_t' ≠ 0 (resp. X_t' ≠ 0) the open cover of X̃ (resp. X) obtained from the usual cover on ℙ^1_[t:t'].
Note that X̃ is a smooth surface over k and that π is finite étale of rank p^2 over X_t' ≠ 0. Over the line t'=0, it is a totally ramified cover of 𝔸^1_s. Thus π is finite, and we denote K=π_*(Λ[2]); thus by Section <ref>, (3), SS(K) ⊆π_∘(0_T^*X̃).
It follows from the definition of π_∘ that π_∘(0_T^*X̃)=0_T^*X∪Λ. Here Λ is the restriction of f^∘T^*𝔸^1_s to the line t'=0. By proper base change K is not a lisse perverse sheaf, hence SS(K)=0_T^*X∪Λ. Moreover, K is not the pullback of a perverse sheaf from 𝔸^1_s, since if that were the case then its restriction to s=0 would have to be trivial by proper base change. This in turn implies that the finite étale cover X̃_t' ≠ 0→ X_t' ≠ 0 is trivial restricted to s=0, which is not the case by the choice of the Artin-Schreier cover.
§.§ A key proposition
In this section, we prove a key proposition which will be used in the proof of Theorem <ref>, (3). Recall we have a base scheme S smooth over k (assumed to be algebraically closed) and a vector bundle ℰ on S of rank n+1. We continue using the notations from Section <ref>. However, for ease of exposition, we drop ℰ from the notation. In particular we shall denote (0,ℰ) by ℙ, (d,ℰ) by (d) and (n-1,ℰ) by (n-1).
Below, we shall makes use of the following commutative diagram in order to facilitate an inductive argument.
Q_0,d,n-1[rr] [dr] @.>[dd] Q_d,n-1[dd] [dr]
Q_0,d[rr] [dd] (d) [dd]
Q_0,n-1[rr] [dr] (n-1) [dr]
[rr] S.
In diagram (<ref>), the bottom, front and right hand side faces are the correspondences described in Section <ref>. We define Q_0,d,n-1 := Q_0,d×_(d) Q_d,n-1. This induces a morphism from Q_0,d,n-1 to ×_S (n-1), which by construction factors through Q_0,n-1 (denoted in the diagram (<ref>) by the dotted arrow). We have the following lemma which follows from the description of the incidence correspondence as a Grassmannian bundle in Section <ref>.
There exists isomorphisms (as (n-1)-schemes) Q_0,n-1≃(^∨_n,^∨), Q_d,n-1≃(d,^∨_n,^∨) and Q_0,d,n-1≃ Q_0,d,^∨_n,^∨ such that commutative square
Q_0,d,n-1[r] [d] Q_d,n-1[d]
Q_0,n-1[r] (n-1),
in diagram (<ref>) is the one induced by the correspondence Q_0,d,^∨_n,^∨⊂(^∨_n,^∨) ×_(n-1)(d,^∨_n,^∨).
Note that Q_0,d,n-1=Q_d,n-1×_((d) ×_S(n-1))(Q_0,d×_S(n-1)). Thus in order to prove the lemma, it suffices to show that projective sub-bundle of ×_S(n-1) defined by Q_0,n-1 induces the Grassmannian sub-bundle Q_d,n-1 of (d)×_S(n-1). But this follows from the description in Section <ref>.
More precisely using the notations from the section, Q_0,n-1 is the projective bundle (over (n-1)) defined by the sub-bundle ^∨_n,^∨ of π_n-1,^* and Q_d,n-1 is the Grassmannian bundle (d,^∨_n,^∨).
In what follows we denote the vector bundle ^∨_n,^∨ on (n-1) by . In particular there is a Radon transform (denoted by _0,d,) from ^b_c(Q_0,n-1) to ^b_c(Q_d,n-1). The following lemma is an immediate consequence of proper base change applied to the cartesian square at the top of Diagram (<ref>).
For any perverse sheaf K on ℙ we have ^i_0,d,(p_0,n-1^†K) ≅ p^†_d,n-1^i_0,d(K)[Here and in the rest of this article by ^i_d_1,d_2, we mean ^i∘_d_1,d_2,. We use a similar convention for ^∨_d_1,d_2,.] in (Q_d,n-1).
Below, for X smooth over k and Λ⊂^*X is a conical subset, then
(X,Λ) is the full subcategory of the category of perverse sheaves K such that SS(K) ⊂Λ. Note that this is is a Serre subcategory (see Section <ref>, (5)).
Let C ⊂^* be a closed conical subset equidimensional of dimension equal to dim(). For the rest of this section, we assume that closed conical subsets are regular over the base S (see Definition <ref>).
With notation as above, any simple perverse sheaf L in ((d),Rad_0,d(C)) is either in π_d^†((S)) or there exists a simple perverse sheaf K on and a (decreasing) filtration F^·^0_0,dK on ^0_0,dK such that
* SS(K) ⊆ C.
* F^iR^0_0,dK=R^0_0,dK for i ≤ 0.
* F^iR^0_0,dK=0 for i ≥ 3.
* Gr^i_F(^0_0,dK) belongs to π_d^†(S) for i=0,2 and Gr^1_F(^0_0,dK)=L.
We may assume L does not belong to π_d^†((S)). We prove the claim by descending induction on n-d (over varying choices of (S,)). Suppose n-d=1 and hence (d)=(^∨). Then (b)-(d) follow immediately from <cit.>. Moreover, (a) follows from the fact that K is in fact a sub-quotient of _0,n-1(L).
Now suppose the Proposition has been verified for n-d=r ≥ 1 and for all possible choices of (S,). We shall now prove it for n-d=r+1 by induction via Diagram (<ref>). By the induction hypothesis, we may assume that the Proposition has been verified for _0,d,.
It follows from <cit.> that L_:=p_d,n-1^†L is simple and by Section <ref>, (2) that SS(L_)=p_d,n-1^∘SS(L). Thus by Lemma <ref>, SS(L_) is contained in the Radon transform of p_0,n-1^0C with respect to R_0,d,. Moreover by Corollary <ref> it follows that L_ is not in the essential image of p^∨†_d,n-1((n-1)). Now by induction hypothesis there exists a simple perverse sheaf K_ on Q_0,n-1 with and a filtration F^·_^0_0,d, such that
* SS(K_) ⊆ p_0,n-1^0C.
* F_^iR_0,d,^0K_=R_0,d,^0K_ for i ≤ 0.
* F_^iR_0,d,^0K_=0 for i ≥ 3.
* Gr^i_F_(R_0,d,^0K_) belongs to p_d,n-1^∨†((n-1)) for i=0,2 and Gr^1_F_(R_0,d,^0K_)=L_.
Now using Proposition <ref>, (a') above implies that K_ descends to a simple perverse sheaf K on such SS(K) ⊆ C. Moreover by Lemma <ref>, R_0,d,^0K_ is in the essential image of p_d,n-1^†((d)). Thus by <cit.> so are Gr^i_F_(R_0,d,^0K_) for all i. Thus by Corollary <ref> and Proposition <ref>, Gr^i_F_(R_0,d,^0K_) for i=0,2 belongs to (π_d∘ p_d,n-1)^†(S). Hence the result.
§ PROOF OF THEOREM <REF>, (1)
In the rest of this article we work over S=Spec(k), with ℰ a vector space over k of dimension n+1 (which we henceforth drop from the notation) and use the following notation.
We will only consider the Brylinski-Radon transform between ℙ and 𝔾(d).
* We will denote 𝔾(d) by Y and the incidence correspondence Q_0,d by Q. The projections from Q to ℙ (resp. Y) are denoted by p_1 (resp. p_2).
* The morphisms from ℙ (resp. Y) to Spec(k) are denoted by π_ℙ (resp. π_Y).
* The Brylinski-Radon transforms are denoted by ℛ and ℛ^∨.
* Let E be the complement of the incidence variety Q ⊂ℙ× Y. Let p_1^∘ and p_2^∘ be the projections to ℙ and Y respectively from E.
* In what follows we will need the modified Brylinski-Radon transform defined as ℛ_!K :=p^∘_2!p_1^∘†K.
* For a complex K on ℙ, by K̄ we mean the complex π_ℙ*K on Spec(k). Similarly, for complexes K on Y.
* We will use ℛ^i(K) (resp. ℛ_!^i(K), K̄^i) to denote the i^th perverse cohomology of ℛ(K) (resp. ℛ_!(K), K̄).
§.§ Some preliminary observations
The next two lemmas are immediate consequences of the smoothness and properness of p_1 and p_2, and we state them without a proof.
For any sheaf K ∈^b_c(ℙ) and L ∈^b_c(Y), D(ℛ(K)) ≃ℛ(DK)(d(n-d)) and D(ℛ^∨(L))=ℛ^∨(DL)(d) [Here D is the Verdier duality functor.].
The functors (ℛ^∨[δ](d(n-d)),ℛ,ℛ^∨[-δ](d))[In what follows we set δ:=d(n-d-1)] form an adjoint triple.
The following result is due to Brylinski <cit.>. Again, while this is proved in loc. cit. in the complex analytic setting, the same proof goes through in our setting.
Let ℛ and ℛ^∨ be as before. Then ℛ and ℛ^∨ preserve the localizing set T (see Section <ref>), and in particular one has induced functors ℛ: ^b_c(ℙ)_T →^b_c(Y)_T and ℛ^∨: ^b_c(Y)_T →^b_c(ℙ)_T.
§.§ An application of Artin vanishing
We now record the following easy consequence of Artin vanishing which is used in the proof of Theorem <ref>, (1).
Let X/k be a base scheme. Let U be the complement in ℙ^n_X of a linear subspace[A linear subspace of ℙ^n_X is a closed subscheme which, Zariski locally over X, is isomorphic to ℙ^d_X ⊂ℙ^n_X embedded linearly.] Z of relative dimension d, and let π be the map from U to X. Then π_* maps ^p^≤ 0(U) to ^p^≤ n-d-1(X).
The proof is via a repeated application of Artin vanishing in the form of right t-exactness (for the perverse t-structure) of affine morphisms <cit.>. After replacing X with a suitable Zariski open we can consider a chain of linear subspaces Z_0 ⊊ Z_1 ⊊⋯⊊ Z_n-d-1 of ℙ^n_X containing Z such that Z_0=Z≅ℙ^d_X and dim(Z_i)=d+i. Let U_i :=ℙ^n_X \ Z_i be the corresponding open subscheme. Let π_i be the map from U_i onto X, and we identify π_0 with π.
We prove the lemma by descending induction on i. For i=n-d-1 the lemma is an immediate consequence of Artin vanishing <cit.>. Assuming that the lemma has been verified up to some i ≤ n-d-1, we prove it for i-1. Let j (resp. l) be the inclusion of U_i (resp. Z_i \ Z_i-1) inside U_i-1. Let K be a sheaf on U_i-1 in ^p^≤ 0(U_i-1). By induction hypothesis π_*(j_*j^*K) ∈ ^p^≤ n-d-1-i(X). Thus it suffices to show π_*(l_*l^!K) ∈ ^p^≤ n-d-i(X).
The following corollary will be used below to describe the image of the Brylinski-Radon transform.
With notation as above, p^∘_2! maps ^p^≥ 0(E) to ^p^≥ -(n-d-1)(Y).
§.§ Proof of <ref>, (1) and Corollaries
In fact, we prove the following more refined version of Theorem <ref>, part (1).
Let K be a sheaf on ℙ.
* If K is upper semi-perverse then for any i<0, we have ℛ^i(K) ≃π_Y^†K̄^i-n+d.
* If K is perverse, ℛ^i(K) are constant for any i ≠ 0. Also the perverse sheaves ℛ^i_!(K) are constant for i ≠ n-d+1.
* Consequently ℛ is t-exact for the perverse t-structures on ^b_c(ℙ)_T and ^b_c(Y)_T (see Section <ref>).
By definition of ℛ (and ℛ_!) and proper base change, we have a triangle on Y
ℛ(K)[n-d-1] [r] ℛ_!K [r] π_Y^*K̄[(d+1)(n-d)] [r]^-+1 .
Now, by Corollary <ref> and <cit.>, one has that for any K ∈ ^p^≥ 0(ℙ), ℛ_!K∈ ^p^≥ -(n-d-1)(Y). Taking the long exact sequence of perverse cohomologies associated to the triangle (<ref>) gives us (1).
If K is perverse, then applying the first part to DK and using Lemma <ref> we deduce (2). The constancy of ℛ_!^i(K) for i ≠ n-d+1 then follows from the fact that constant sheaves form a Serre subcategory. The t-exactness of ℛ is now clear.
We get the following corollaries by combining Lemma <ref> and Theorem <ref>.
The functor ℛ^∨[-δ](d) (resp. ℛ^∨[δ](d(n-d))) is left t-exact (resp. right t-exact) for the perverse t-structures on ^b_c(Y)_T and ^b_c(ℙ)_T.
(ℛ^∨δ(d(n-d)),ℛ^0, ℛ^∨-δ(d))[We denote ^pH^i_T ∘ℛ by ℛ^i and use a similar notation for ℛ^∨.] form an adjoint triple between Perv(ℙ)_T and Perv(Y)_T. Moreover ℛ^∨-δ(d) (resp. ℛ^∨δ(d(n-d))) is left t-exact (resp. right t-exact).
§ PROOF OF THEOREM <REF>, (2) AND (3)
In this section, we prove Theorem <ref>, (2) and (3).
§.§ Proof of Theorem <ref>, (2) and corollaries
Consider the following diagram of schemes, where the central square is cartesian by definition:
Q ×_Y Q [dl][dr]
Q [dl]_-p_1[dr]^-p_2 Q [dl]_-p_2[dr]^-p_1
ℙ Y ℙ .
Let π: Q ×_Y Q →ℙ×ℙ denote the morphism induced by p_1 on each factor. Let s_1: ℙ×ℙ→ℙ (resp. s_2: ℙ×ℙ→ℙ) be the projection onto the first (resp. second) factor. An application of proper base change along the central cartesian square in diagram (<ref>) and the projection formula gives a natural (in K) isomorphism:
ℛ^∨∘ℛ(K)=s_2* (s_1^*K ⊗_Λπ_*Λ[δ_+] )[In what follows we set δ_+:=d(n-d+1).].
Let Δ: ℙ↪ℙ×ℙ denote the diagonal embedding, let U be the complement of the diagonal embedding, and let j: U ↪ℙ×ℙ be the corresponding open immersion. One has the resulting diagram with cartesian squares:
Q [r]^-i_Q[d]^p_1 Q ×_Y Q [d]^π W [l]_-j_W[d]^π_U
ℙ[r]^-Δ ℙ×ℙ U [l]_-j.
We note that π_U is a Grassmann bundle with fibers (d-2,n-2). Consider the natural closed immersion Q ×_Y Q → Q ×ℙ, which on closed points maps (x,y,L) to (x,L,y). Here x,y are closed points of ℙ and L ⊂ℙ is a d-plane containing them. The above commutative diagram factors as:
Q [r]^-i_Q[d]^Id Q ×_Y Q [d] W [l]_-j_U[d]
Q [r] [d] Q ×ℙ[d]^π̃ V [l] [d]^π̃_U
ℙ[r]^-Δ ℙ×ℙ U [l]_-j,
where all the squares are Cartesian.
Note that π̃ is a Grassmannian bundle with fibers (d-1,n-1) and is the identity along the second projection. Let Z:= V ∖ W=Q ×ℙ∖ Q ×_Y Q, and π_Z: Z → U denote the resulting morphism.
We have an exact triangle on U
π_Z!Λ[r] π̃_U*Λ[r] π_U*Λ[r]^+1 .
Since π_U is a Grassmannian bundle, Lemma <ref> implies that π_U*Λ is formal[A sheaf is said to be formal if it is isomorphic to a shifted direct sum of its cohomology sheaves] and its cohomology sheaves are locally constant. Since U is simply connected <cit.>, they are in fact constant. Let M_d-2,n-2:=⊕_i M^i_d-2,n-2[-i][For any Λ-module M, by M we mean the constant local system on ℙ×ℙ with values in M.], here M_d-2,n-2^i:=H^0(U,R^iπ_U*Λ). The restriction of M_d-2,n-2 to U is isomorphic to π_U*Λ[The choice of M_d-2,n-2 is not unique in as much as the choice of the decomposition in Lemma <ref>, but this non-uniqueness does not play a role in what follows.]. We also denote by M_d-1,n-1:=π̃_*Λ. We have exact triangles,
j_!π_Z!Λ[r] π̃_* Λ[r] π_* Λ[r]^+1 ,
M_d-1,n-1⊗ j_!Λ[r] M_d-1,n-1[r] M_d-1,n-1⊗Δ_* Λ[r]^-+1
and
M_d-2,n-2⊗ j_!Λ[r] M_d-2,n-2[r] M_d-2,n-2⊗Δ_* Λ[r]^-+1
in ^b_c(ℙ×ℙ). Now note that for any sheaf K on ℙ and any constant sheaf (i.e. the cohomology sheaves are constant) L on ℙ×ℙ, the sheaf s_2*(s_1^*K ⊗ L) is also constant. Thus combining triangles (<ref>)-(<ref>) and Equation (<ref>) we get a functorial (in K) exact triangle in the localized category ^b_c(ℙ)_T,
ℛ^∨∘ℛ(K) [r] K⊗Δ^*M_d-1,n-1[δ_+] ^-ϕ[r] K⊗Δ^*M_d-2,n-2[δ_+][r]^-+1 .
* For any perverse sheaf K on ℙ, there exists a natural isomorphism ℛ^∨i(ℛ(K)) ≃ℛ^∨i(ℛ^0(K)) in Perv(ℙ)_T (and hence in ^b_c(ℙ,Λ)_T).
* For any perverse sheaf K on ℙ, there exist functorial (in K) isomorphisms in Perv(ℙ) (and hence in Perv(ℙ)_T)
^pH^i( K⊗Δ^*M_d-1,n-1[δ_+]) ≃ K ⊗Δ^*H^i+δ_+(M_d-1,n-1)
and
^pH^i( K⊗Δ^*M_d-2,n-2[δ_+]) ≃ K ⊗Δ^*H^i+δ_+(M_d-2,n-2).
* For i=δ-1,δ, the perverse sheaves ^pH^i( K⊗Δ^*M_d-2,n-2[δ_+]) vanish. Also ^pH^δ-1( K⊗Δ^*M_d-1,n-1[δ_+]) vanishes. Moreover when n-d>1, ^pH^δ-2( K⊗Δ^*M_d-2,n-2[δ_+]) is also zero.
* For any perverse sheaf K on ℙ, there exists a natural (in K) isomorphism in Perv(ℙ) (and hence in Perv(ℙ)_T), ^pH^δ( K⊗Δ^*M_d-1,n-1[δ_+])≃ K(-d(n-d)).
Claim (a) is an immediate consequence of Theorem <ref>. Claim (b) follows from the formality of M_d-1,n-1 and M_d-2,n-2 and the fact that their cohomology sheaves are local systems.
For claim (c), using (b) it suffices to prove that Δ^*H^i+δ_+(M_d-2,n-2) vanishes for δ-2 ≤ i ≤δ, and that Δ^*H^δ_++δ-1(M_d-1,n-1)=0. In either case note that the cohomology sheaves of M_d-2,n-2 and M_d-1,n-1 are constant local systems and hence by their definitions it suffices to show that R^i+δ_+π_U*Λ for δ-2 ≤ i ≤δ and R^δ_++δ-1π̃_*Λ vanish. But these follow immediately from the fact that π_U is a (d-2,n-2) bundle[We require n-d>1, to ensure that dim((d-2,n-2))<d(n-d)-1.] and that π̃ is a (d-1,n-1) bundle.
For claim (d) arguing as above we conclude that ^pH^δ( K⊗Δ^*M_d-1,n-1[δ_+])≃ K ⊗Δ^*R^2d(n-d)π̃_*Λ≃ K(-d(n-d)).
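The shift bookkeeping behind claims (c) and (d) can be made explicit (a small worked check, not part of the original text): the fibres of π̃ and π_U have dimensions
dim (d-1,n-1) = d(n-d) and dim (d-2,n-2) = (d-1)(n-d),
and δ + δ_+ = d(n-d-1) + d(n-d+1) = 2d(n-d). Hence in degree δ+δ_+ the (d-1,n-1)-bundle term contributes its top cohomology R^2d(n-d)π̃_*Λ≅Λ(-d(n-d)), giving claim (d), while degree δ-1+δ_+ = 2d(n-d)-1 is odd and therefore vanishes. For the (d-2,n-2)-term the top nonzero degree is 2(d-1)(n-d) = 2d(n-d)-2(n-d), which lies below δ+δ_+ and (δ-1)+δ_+ for any n-d ≥ 1, and below (δ-2)+δ_+ = 2d(n-d)-2 precisely when n-d>1, giving claim (c).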
Combining claims (a)-(d) above shows that there exists a natural isomorphism
ℛ^∨δ(d(n-d))∘ℛ^0(K) ≃ K
in Perv(ℙ)_T, and therefore completes the proof of Theorem <ref> (2). It is also easy to see this map is the co-unit of the adjunction in Corollary <ref>. Finally, combining Lemma <ref> and Corollary <ref> we obtain the following.
The unit of the adjunction K →ℛ^∨-δ(d)∘ℛ^0(K) is an isomorphism in Perv(ℙ)_T.
We also have the following corollary of the method of the proof.
We have Ext^i_Perv(ℙ)_T(K_1,K_2) ≃ Ext^i_Perv(Y)_T(ℛ^0(K_1),ℛ^0(K_2)) for i=0,1.
The isomorphism for i=0 is an immediate consequence of Theorem <ref>, (2) and the adjunction between ℛ^∨δ(d(n-d)) and ℛ^0 (Corollary <ref>). We may now assume that n-d>1, else the result follows from the fact that ℛ^0 induces an equivalence between Perv(ℙ)_T and Perv(Y)_T by Theorem <ref>, (1) and (2).
The triangle (<ref>) and Claim <ref>, (b) and (c) above imply that for K ∈(),
^-1_T([δ] ∘ K(d(n-d))) ≃ 0.
Since [δ](d(n-d)) is right t-exact and is exact, this implies that
^p_Tτ_≥ -1[δ] ∘ K(d(n-d)) ≃^δ(d(n-d))∘^0(K),
which by Theorem <ref>, (2) is isomorphic to K under the co-unit of adjunction.
We also have
^1_(Y)(^0(K_1),^0(K_2)) ≃Hom_^b_c()_T([δ] ∘ K_1(d),K_2[1])
and
Hom_^b_c()_T([δ] ∘ K_1(d),K_2[1]) ≃_^b_c()_T(^p_Tτ_≥ -1[δ] ∘ K_1(d(n-d)),K_2[1]).
The first equality being adjunction and the second since K_1 and K_2 are perverse, [δ](d(n-d)) is right t-exact and is exact. Combining this with (<ref>) gives the necessary equality.
§.§ Proof of Theorem <ref>, (3)
Thanks to <ref>, (2) and Corollary <ref>, it suffices to show that the simple objects in Perv(Y)_Rad are in the image of ℛ^0. This follows from Proposition <ref>.
Example <ref> naturally leads to the following question which we have been unable to answer:
Does there exist a perverse sheaf on Y with singular support inside (p_2)_∘(p_1)^∘ T^*ℙ whose image is not in Perv(Y)_T,Rad, and hence the perverse sheaf is not in the image of the Radon transform?
Note that the answer to the above question is negative in characteristic 0 (see Section <ref>, (2)) or when d=n-1.
Department of Physics of Complex Systems, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
[email protected]
Department of Biological Physics, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary
Department of Physics of Complex Systems, ELTE Eötvös Loránd University, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary; MTA-BME Lendület Topology and Correlation Research Group, Budafoki út 8., H-1111 Budapest, Hungary
In this paper we establish a connection between the bulk topological structure and the magnetic properties of drumhead surface states of nodal loop semimetals. We identify the magnetic characteristics of the surface states and compute the system's magnon spectrum by treating electron-electron interactions on a mean-field level. We draw attention to a subtle connection between a Lifshitz-like transition of the surface states driven by mechanical distortions and the magnetic characteristics of the system. Our findings may be experimentally verified e.g. by spin polarized electron energy loss spectroscopy of nodal semimetal surfaces.
Surface magnon spectra of nodal loop semimetals
Assem Alassaf, János Koltai, and László Oroszlány
August 12, 2023
===============================================
§ INTRODUCTION
Due to their unique electronic properties and potential applications in numerous fields, topological materials have attracted significant attention <cit.>. These materials possess nontrivial topological properties, that in some cases need be protected by symmetries, resulting in the existence of robust surface or edge states. Topological semimetals are a class of topological materials that have been extensively studied in recent years <cit.>. Weyl and nodal semimetals are two types of topological semimetals that possess distinct surface states. Weyl semimetals are distinguished by the presence of Weyl nodes in the bulk band structure, resulting in Fermi arcs on the surface <cit.>. These Fermi arcs connect the Weyl node projections and exhibit a variety of fascinating transport properties. In contrast, the drumhead states on the surfaces of nodal semimetals are dispersionless states associated to the surface projection of the nodal line structure. Due to their small kinetic energy these drumhead states are susceptible to interactions and thus they can be an ideal platform for superconductivity<cit.> or emergent surface magnetism <cit.>.
Rhombohedral graphite is a prime example of such a material whose interaction induced magnetic properties have already been studied theoretically <cit.> and observed experimentally <cit.>.
In this paper we investigate, through a simple model, the surface magnon spectrum of nodal loop semimetals. In the next section we introduce our model and describe the connection of the bulk nodal loop and drumhead surface states. Treating electron-electron interaction on a mean field level we obtain the magnetic properties of the surface states. Mapping to an isotropic Heisenberg spin model, we calculate the magnon spectrum of the system. We highlight a nuanced connection between the connectivity of the topological flat band and the magnon energies. Our findings should be relevant for experimental characterization of topological flat bands arising in nodal semimetals, specially when the flat bands extend over a considerable portion of the projected Brillouin zone, for example as those in Ca_3P_2 <cit.>.
§ THE MODEL
In this section we introduce the investigated model and describe the real space structure and momentum space spectrum. The presence of a nodal loop, which is a closed curve in momentum space, distinguishes our model. As we show, the shape of the nodal loop and the flat surface states stabilized by its presence can be controlled by a parameter which corresponds to mechanical distortion in an experimental setting.
§.§ Real space structure
We consider a three dimensional cubic system, spanned by the lattice vectors 𝐚_i with two sublattices (A and B). The real space structure is depicted in Fig. <ref> (a). We take a single spinfull orbital degree of freedom on each site into account. Electrons are allowed to hop from one site to the other without breaking of the sublattice symmetry characterized by the real space Hamiltonian:
Ĥ_0 = ∑_𝐫,s δξ t â_𝐫,s^†b̂_𝐫,s + t â_𝐫,s^†b̂_𝐫+𝐚_1,s
+ t â_𝐫,s^†b̂_𝐫+𝐚_2,s + 2ξ t â_𝐫,s^†b̂_𝐫+𝐚_3,s +h.c. ,
where 𝐫 represents a unit cell of the system, while s is the spin degree of freedom. The annihilation operators â_𝐫,s and b̂_𝐫,s act on the appropriate sublattice and spin degree of freedom. The hopping amplitude t controls the strength of electron movement between neighboring lattice sites and serves as the unit of energy for our model. The sublattice symmetry is the fundamental symmetry of the system which allows for the emergence of the nodal loop.
There are two more important dimensionless parameters in the considered system. The parameter δ serves as an internal parameter which mimics experimentally hard to control properties of the system, such as particular matrix elements of the Hamiltonian related to hopping from one orbital to the other, while ξ, multiplying all hopping amplitudes in the z - direction, captures effects of applying mechanical pressure on the system. As we shell see below, both of these parameters have a significant impact on the electronic structure and magnetic properties of the system, as they both control the shape of the nodal loop and the associated drumhead surface states.
§.§ Momentum space structure
As the investigated system is cubic the corresponding Brillouin zone spanned by reciprocal lattice vectors 𝐛_i will also be cubic as depicted in Fig. <ref>(b). As we will connect the topological properties of the bulk to the surface magnetic properties of a slab with finite thickness, it is instructive to introduce the projected Brillouin zone with its appropriate high symmetry points, as shown in the figure, too.
In order to elucidate the momentum space structure defined by the kinetic Hamiltonian (<ref>)
we introduce Fourier transformed operators as
â_𝐤,s = ∑_𝐫e^i𝐤𝐫â_𝐫,s , b̂_𝐤,s = ∑_𝐫e^i𝐤𝐫b̂_𝐫,s,
where 𝐤 is a wavevector indexing states in the three dimensional Brillouin zone. With these we can recast (<ref>) as
Ĥ_0 = ∑_𝐤,s[ â_𝐤,s^† b̂_𝐤,s^† ]ℋ(𝐤)
[ â_𝐤,s; b̂_𝐤,s, ]
where we introduce the matrix ℋ(𝐤) as
ℋ(𝐤) = [δ t_z -2∑_i = (x,y,z)t_i cosk_i]σ_x +
2 t_z sink_zσ_y
=𝐝_δ,ξ(𝐤)·σ,
with t_x,y = t, t_z = ξ t and σ_x,y are Pauli matrices acting on the sub-lattice space.
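As an illustration, the Bloch matrix above is straightforward to evaluate numerically. The short sketch below (a minimal example; all parameter values and the grid size are arbitrary choices of ours) builds ℋ(𝐤) and confirms that, since d_y vanishes only for k_z=0 and k_z=π, any band touching lies in these two planes:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def bloch_h(kx, ky, kz, t=1.0, xi=1.0, delta=1.0):
    """Two-band Bloch Hamiltonian d(k).sigma of the model."""
    tz = xi * t
    dx = delta * tz - 2*t*np.cos(kx) - 2*t*np.cos(ky) - 2*tz*np.cos(kz)
    dy = 2 * tz * np.sin(kz)
    return dx * sx + dy * sy

ks = np.linspace(-np.pi, np.pi, 161)
for kz in (0.0, np.pi):
    gap = min(abs(np.linalg.eigvalsh(bloch_h(kx, ky, kz))[0])
              for kx in ks for ky in ks)
    print(f"kz = {kz:.2f}: minimal |E| on the grid = {gap:.4f}")
# for delta = xi = 1 both values come out close to zero (vanishing on a finer
# grid), signalling a nodal loop in each of the two planes
```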
The absence of σ_z from the above expression is the fingerprint of sublattice symmetry of the model. Three dimensional Hamiltonians with sublattice symmetry can be characterized by winding number <cit.> associated to the 𝐝_δ,xi(𝐤) vector for specific paths in momentum space. The system for a given value of k_x and k_y mimics the behaviour of the SSH model <cit.>. We calculate this winding number along k_z as we cross the Brillouin zone. For a given value of k_x, k_y, δ and ξ the winding number is evaluated as
ν(k_x,k_y,δ,ξ)=
1 if |C_δ,ξ(k_x,k_y)/2ξ t|<1,
0 if |C_δ,ξ(k_x,k_y)/2ξ t|>1,
where we introduce the shorthand C_δ,ξ(k_x,k_y)=δξ t -2 t cos k_x-2 t cos k_y.
The winding number, which is a bulk property, signals the presence or absence of topological drumhead states for slabs. This is a manifestation of the bulk boundary correspondence <cit.>.
If the winding number is nonzero for a given set of bulk parameters δ and ξ and wavevector components k_x and k_y then in a slab geometry there will be a zero energy surface state present at the corresponding wavevector.
The geometry of the nodal loop, the map of the winding number and the spectrum of a slab of finite thickness are shown for different values of δ but a fixed value of ξ in Fig. <ref>, while in Fig. <ref> the same is depicted for fixed δ and changing ξ.
Let us discuss the evolution of the nodal loop and the drumhead states associated with it as a function of the parameters δ and ξ.
First, focusing on Fig. <ref>, that is keeping ξ=1.0, we can observe that, as one decreases δ, a nodal loop first appears at the Γ point of the bulk Brillouin zone, then it grows in size. At δ=2.0 two drastic changes occur. First, the nodal loop around the Γ point is enlarged to a point where it coalesces with nodal loops from the neighboring Brillouin zones, effectively transforming itself from a loop around Γ to a loop around M. Second, an additional nodal loop is germinated at the Z point of the bulk Brillouin zone, due to a band crossing. The appearance and evolution of the nodal loops leave an imprint on the winding number maps as well. For larger values of δ where only a single loop is present, the region with ν=1 is a simply connected region in the shadow of the nodal loop. For δ<2.0 however, the appearance of the second loop and the coalescence of the original loop cause a drastic change in the connectivity of the region with a finite winding number, changing a simply connected region into a multiply connected one. Let us denote this type of transition as a connectivity shift. This transition is similar to a Lifshitz transition whereby the topology of the Fermi surface changes.
However, in contrast to the case of other systems with a two-dimensional Brillouin zone, for instance, bilayer graphene <cit.>, in our special case the Fermi-surface is also a two-dimensional object.
As δ is decreased even further to δ=0.0 the area Ω_0 of the region with ν=1 reaches a maximum. Let us introduce the ratio r of this area to the total area of the projected Brillouin zone Ω_BZ as
r = Ω_0/Ω_BZ.
As expected, due to the bulk boundary correspondence of topological systems, finite winding numbers herald non-dispersing zero energy surface states. As one can observe in Fig. <ref>.(f)-(j) where the spectrum of a slab with finite thickness is depicted, the region corresponding to ν=1 indeed harbors drumhead surface states. The spatial localization of these states follows from their analogy with the SSH model <cit.>.
Turning now our attention to the parameter ξ and to Fig. <ref>. we can see that for a fixed value of δ the parameter ξ, which mimics mechanical distortions, can also be used to change the connectivity of the flat portion of the surface localized zero energy states. As one decreases ξ a band crossing can be engineered at the Z point, introducing again a second nodal loop, and thus transforming a simply connected disk like region with ν=1 into an annulus like region. This thus again leads to a connectivity shift.
§.§ Interactions
In the previous sections, we showed that the presented model exhibits drumhead surface states. For these states, which occupy a considerable portion of the projected Brillouin zone, the kinetic energy vanishes. Interactions between charge carriers will therefore play a major role in determining their behavior. The simplest consequence of interactions is the possible formation of an ordered magnetic pattern on the surface of the system. This emergent magnetism parallels the edge magnetization of zigzag graphene nanoribbons already observed experimentally <cit.>.
We take interactions into account through a Hubbard term, thus the full Hamiltonian Ĥ for the electronic degrees of freedom is cast in the form
Ĥ=Ĥ_0 + U ∑_in̂_i ,↑n̂_i, ↓,
where n̂_i,s = ĉ^†_i,sĉ_i,s with ĉ_i,s = â_𝐫_i,s, b̂_𝐫_i,s. In the present work, we shall focus on the case of a half filled system, thus, in all calculations, the Fermi level E_F is set to guarantee this condition. We have to stress here that in order for magnetism to arise the system needs to be in the vicinity of half filling; otherwise the spin polarization of the surface states vanishes. This behavior is expected for nodal line semimetals and it was already observed in rhombohedral graphene <cit.>. However, we also note that any mechanism which makes the surface states dispersive, by enhancing their kinetic energy, will also extend the range of the chemical potential at which magnetism can be stabilized.
In order to further proceed, we analyse the system defined by (<ref>) on a mean-field level <cit.>. That is, we obtain an effective spin dependent single particle description of the system after a self-consistent procedure. Thus instead of the interacting Hamiltonian (<ref>) we work with the mean-field Hamiltonian Ĥ^s_MF ({n_i,↑,n_i,↓} ) for spin channel s which depends explicitly on the self-consistently obtained occupation numbers n_i,s at each site. The results of such a mean-field calculation can be observed in Fig. <ref>. (a), where the spectrum of a slab with finite thickness is presented.
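A schematic version of such a self-consistent calculation is sketched below for a slab of N unit cells stacked along z, with the intra- and inter-cell couplings read off from the Bloch Hamiltonian above. All numerical choices (parameters, k-grid, seed, mixing and the simplified filling rule) are our own and serve only to illustrate the procedure; they are not the settings used for the figures:

```python
import numpy as np

t, xi, delta, U, N, nk = 1.0, 1.0, 1.0, 1.0, 20, 16   # illustrative values only

def slab_h(kx, ky, onsite):
    """Slab Hamiltonian at (kx, ky) for one spin species; `onsite` contains the
    mean-field potentials U*<n_{-s}> on the basis (A_1, B_1, ..., A_N, B_N)."""
    C = delta*xi*t - 2*t*np.cos(kx) - 2*t*np.cos(ky)    # intra-cell A-B coupling
    H = np.diag(onsite.astype(complex))
    for n in range(N):
        A, B = 2*n, 2*n + 1
        H[A, B] += C; H[B, A] += C
        if n + 1 < N:                                    # SSH-like inter-cell bond
            B1 = 2*(n + 1) + 1
            H[A, B1] += -2*xi*t; H[B1, A] += -2*xi*t
    return H

ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
n_up = 0.5 + 0.05*np.tile([1.0, -1.0], N)               # weak symmetry-breaking seed
n_dn = 1.0 - n_up

for sweep in range(100):
    e, w = {}, {}
    for s, n_other in (("up", n_dn), ("dn", n_up)):
        es, ws = [], []
        for kx in ks:
            for ky in ks:
                ev, vec = np.linalg.eigh(slab_h(kx, ky, U * n_other))
                es.append(ev); ws.append(np.abs(vec)**2)
        e[s] = np.concatenate(es)
        w[s] = np.concatenate(ws, axis=1)
    # half filling: occupy the lowest half of all states of both spin species
    # (states exactly at E_F are filled completely, a simplification)
    ef = np.sort(np.concatenate([e["up"], e["dn"]]))[2*N*nk*nk - 1]
    new_up = w["up"][:, e["up"] <= ef].sum(axis=1) / nk**2
    new_dn = w["dn"][:, e["dn"] <= ef].sum(axis=1) / nk**2
    if max(np.max(np.abs(new_up - n_up)), np.max(np.abs(new_dn - n_dn))) < 1e-6:
        break
    n_up, n_dn = 0.5*(n_up + new_up), 0.5*(n_dn + new_dn)

m = n_up - n_dn            # site-resolved magnetization in units of mu_B
print("top surface sites:", m[:2], "  bottom surface sites:", m[-2:])
```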
The impact of interactions is the visible splitting of the zero energy flat band. The splitting is due to the local difference of the occupation of the two spin species on the surfaces of the system. The magnetization m_i on site i is obtained as
m_i = (n_i ,↑ - n_i, ↓) μ_B,
where the occupation numbers n_i,s are the expectation value of n̂_i,s in the ground state for site i and spin s while μ_B is the Bohr magneton.
Fig. <ref>. (b) shows the magnetization for each site in the cross section of a slab of finite thickness. One can observe that the sites on the very top and bottom carry a considerable portion of the overall magnetization. Magnetization drops off exponentially towards the bulk of the system with neighbouring layers exhibiting opposite magnetization.
For moderate system thickness where there is still some overlap between the states localized on the two opposing surfaces of the system, an antiferromagnetic configuration is energetically more favorable where the magnetization of the top layer is reversed as compared to that of the bottom layer, as can be observed in Fig. <ref> (b). In these situations the ground state of the system possesses an overall spectral gap as can be also seen in Fig. <ref> (a).
For wide enough slabs though, the difference in ground state energy of the parallel and anti-parallel alignment of the magnetization of the opposing surfaces vanishes as the two surfaces effectively decouple from each other.
§ SURFACE MAGNONS
In this section, we are going to analyze the magnetic characteristics of the topmost surface sites of our model. This layer of sites is characterized at zero temperature by an ordered ferromagnetic spin configuration. We start by mapping the localized magnetic moments of the surface, with magnitude m, to that of an isotropic Heisenberg model. The mapping will allow us to find the surface magnon spectrum of the system. From the magnon spectrum, we extract experimentally accessible quantities such as the spin wave stiffness D and the effective exchange constant J(0). We finish this section by discussing how these quantities depend on the parameters of the model. We shall concentrate on possible observable fingerprints of the connectivity shift discussed in the previous sections.
The classical Heisenberg model describes coupled classical magnetic moments at site i with an orientation 𝐞_i and coupling constants J_i j through the classical Hamiltonian
h = -1/2 ∑_{i,j} J_{ij} 𝐞_i · 𝐞_j .
For tight binding like electronic systems, with a single spinfull orbital on each site, where interactions are taken into account through a Hubbard term with interaction strength U, on the mean-field level, the coupling constants appearing in the above expression can be cast into the rather simple form <cit.>
J_{ij} = (2/π) (mU/μ_B)^2 ∫_{-∞}^{E_F} dE Im[ G^↑_{ij}(E) G^↓_{ji}(E) ] , for i ≠ j.
In this expression G^s_ij(ε) are the matrix elements of the Green's function Ĝ^s(ε) for spin channel s and between surface sites i and j which in turn are obtained from the mean-field Hamiltonian Ĥ^s_MF as
Ĝ^s(E)=lim_η→ 0 ((E+iη) Î-Ĥ^s_MF)^-1.
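As a purely numerical illustration of how these objects can be evaluated, the sketch below builds the retarded Green's function of one spin channel with a small finite broadening standing in for the η → 0 limit, and uses it to estimate the energy integral entering J_{ij} for a single pair of surface sites on a discrete energy grid. All function and variable names are assumptions of this sketch.

```python
import numpy as np

def greens_function(H_mf, E, eta=1e-4):
    """Retarded Green's function ((E + i*eta) I - H_MF)^(-1) of one spin channel."""
    return np.linalg.inv((E + 1j * eta) * np.eye(H_mf.shape[0]) - H_mf)

def exchange_coupling_ij(H_up, H_dn, i, j, energy_grid, m, U, mu_B=1.0, eta=1e-4):
    """Estimate J_ij = (2/pi) (m U / mu_B)^2 Int dE Im[G^up_ij(E) G^dn_ji(E)]
    with a simple trapezoidal rule over a grid ending at the Fermi level."""
    integrand = []
    for E in energy_grid:
        G_up = greens_function(H_up, E, eta)
        G_dn = greens_function(H_dn, E, eta)
        integrand.append((G_up[i, j] * G_dn[j, i]).imag)
    return (2.0 / np.pi) * (m * U / mu_B) ** 2 * np.trapz(integrand, energy_grid)
```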
The Fourier transform of the coupling constants, J(𝐪), can be cast in terms of an integral over the projected Brillouin zone for each wave vector 𝐪 as
J(𝐪) = ∑_{j ≠ 0} e^{i 𝐪·𝐑_j} J_{0j} = (2/π) (mU/μ_B)^2 Im ∫_{-∞}^{E_F} dE ℐ_𝐪(E)
with
ℐ_𝐪 (E ) = (∑_k𝒢^↑_00(E, 𝐤) 𝒢^↓_00(E, 𝐤+𝐪) ) - G_00^↑(E) G_00^↓(E).
Here 𝒢^s_00(E, 𝐤) is the surface component of the momentum dependent Green's function for an infinite slab geometry of finite thickness at momentum 𝐤 and spin component s.
The coupling constants can be used to define a temperature scale analogous to the mean-field Curie temperature as J(0)/3k_B. Thus we shall use J(0), the effective exchange parameter <cit.>, as a key characteristic property as well.
The dynamics of spin fluctuations is captured by the dispersion relation of magnons, which in turn, for a ferromagnetic reference state, is given by
ε(𝐪)=2 μ_B/m(J(0)-J(𝐪)).
This spectrum can be measured, for instance, by spin-polarized electron energy loss spectroscopy <cit.>.
For ferromagnetic systems the curvature D of the magnon spectrum at 𝐪=0 is again an important attribute which is more commonly referred to as spin wave stiffness. That is
ε(𝐪)|_𝐪≈0 = D q ^2.
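Given a routine that evaluates J(𝐪), both the magnon dispersion and the stiffness are straightforward to extract numerically; the sketch below fits D from the small-q behaviour along a single direction. The function J_of_q and the sampling choices are assumptions of this illustration.

```python
import numpy as np

def magnon_energy(J_of_q, q, m, mu_B=1.0):
    """epsilon(q) = (2 mu_B / m) * (J(0) - J(q))."""
    return (2.0 * mu_B / m) * (J_of_q(np.zeros_like(q)) - J_of_q(q)).real

def spin_wave_stiffness(J_of_q, m, mu_B=1.0, q_max=1e-2, n_pts=20):
    """Least-squares fit of epsilon(q) ~ D |q|^2 at small q along one direction."""
    q_norms = np.linspace(q_max / n_pts, q_max, n_pts)
    eps = np.array([magnon_energy(J_of_q, np.array([qn, 0.0]), m, mu_B)
                    for qn in q_norms])
    D, *_ = np.linalg.lstsq(q_norms[:, None] ** 2, eps, rcond=None)
    return float(D[0])
```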
In the following we present and discuss results for the quantities mentioned above. We put an emphasis on how the energetics of surface magnons are impacted by the two model parameters δ and ξ particularly around a connectivity shift of the surface flat band.
Finite size scaling shows that as one increases the number of layers N towards the macroscopic limit the identified signatures of the connectivity shift presented below will manifest precisely at the critical values of parameters, even for weak interaction strengths. For stronger interactions the fingerprints of the transition will occur already for a moderate number of layers. In the calculations shown we considered a slab of thickness N=20 layers and an interaction strength of U/t=1.0 which proved to be a pragmatic choice in order to illustrate our main message.
As in previous sections, we start our analysis by focusing on the parameter δ while keeping ξ=1.0, that is, we consider a system in the absence of mechanical distortions. The magnon spectrum along a high-symmetry path of the projected Brillouin zone for various values of δ is depicted in Fig. <ref>. As one can deduce from the graph, reducing the value of δ increases the energy of magnons around the Γ point. A curious observation can also be made regarding the spectrum for δ=0.0, namely that it vanishes not just at Γ but also at M. This property, which in general would point towards an instability of the ferromagnetic phase, can be explained in this particular case. Here the absence of the hopping terms proportional to δ from the kinetic term Ĥ_0 means that the system falls apart into two interlocked but decoupled subsystems, which can be oriented parallel or anti-parallel with respect to each other without any energy cost.
In order to further elucidate important characteristic features of the obtained magnon spectrum, we plot key properties as a function of δ in Fig. <ref>. We comment first on the evolution of r depicted in subfigure (a). As the nodal loop enlarges with decreasing δ, the drumhead surface states occupy more and more of the projected Brillouin zone. However, decreasing δ beyond the connectivity shift at δ=2.0, the growth of the ratio r, depicted by the orange dashed line in the figure, suffers a discontinuity. A qualitative observation regarding the connectivity shift can also be made based on the evolution of the magnon energies at the high-symmetry points shown in subfigure (b): a maximum is present in the vicinity of the connectivity shift at the M point, along with a local minimum. Signatures of the connectivity shift are also present in the magnetization m, the effective exchange coupling J(0) and the stiffness D, visualized in subfigures (c), (d) and (e), respectively. Although somewhat hard to discern directly, these signatures are more readily visible through the derivatives with respect to δ. The derivative of the magnetization ∂_δ m jumps, the derivative of the effective coupling ∂_δ J(0) shows a local maximum, while the derivative of the stiffness ∂_δ D has a local minimum in the vicinity of the connectivity shift at δ=2.0.
In an experimental setting the parameter δ is typically hard to control; ξ, on the other hand, is directly linked to a uniaxial distortion of the sample in the z direction. As discussed previously, a connectivity shift occurs for δ=3.0 if we decrease ξ below the critical value of 0.8, so examining the behaviour of the characteristic features detailed above for this case as well may highlight experimentally observable fingerprints of this transition.
In Fig. <ref> the magnon dispersion relation is depicted for distinct values of ξ above, below and exactly at the connectivity shift. In the panels of Fig. <ref> the detailed ξ dependence of the characteristic magnon spectral features is collected. The discontinuity in the evolution of the ratio r at the connectivity shift is evident, as in this case r peaks at the transition point. The magnon energies at the high-symmetry M and X points, as well as the magnetization and the effective exchange coupling, show a local maximum in the vicinity of the connectivity shift, while in the evolution of the stiffness a considerable decrease in slope is observable as ξ increases past the transition point.
In this case it is also insightful to evaluate the derivatives, now with respect to ξ. In all the characteristic properties the derivatives show a clear transition at the connectivity shift. The derivatives of m and J(0) both drop sharply, while ∂_ξ D jumps abruptly at the transition point. We note that the oscillations present in this quantity at small ξ values are due to numerical limitations and should be considered a computational artefact.
§ SUMMARY
In conclusion, we investigated the magnons associated to the drumhead surface states in a simple model of a nodal loop semimetal.
The model without interactions exhibits topological flat bands whose shape, and crucially their connectivity, can be controlled by mechanical distortions.
Including interactions on a mean-field level we show that magnetization on the surface is stabilized.
Employing a standard Green's function based technique we obtained the dispersion relation of surface magnons.
Determining key, experimentally accessible characteristics of the magnon spectrum, such as the magnetization, the effective exchange coupling and the spin wave stiffness, we show that the Lifshitz-like transition of the electronic states can in principle be observed through the magnetic properties of the surface.
On the one hand, we emphasise that the phenomenological observations presented here would greatly benefit from future analytic calculations, which may shed light on the intricate interplay of topology, interactions and magnetism in this system.
On the other hand, our calculations will hopefully encourage experimental exploration of magnetism on the surface of nodal loop semimetals. For instance, Ca_3P_2 <cit.>, with a relatively large r ratio, might be an excellent candidate for future investigations.
§ ACKNOWLEDGEMENT
The authors wish to express their gratitude to Edward McCann, Rahul Nandkishore, Jaime Ferrer, Amador García Fuente, Gabriel Martinez Carracedo, László Szunyogh and László Udvardi, for valuable discussions and their comments regarding the present work.
This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004) and by NKFIH Grants No. K131938, K134437 and K142179. A.A. greatly acknowledges the support from Stipendium Hungaricum No. 249316.
L.O. also acknowledges support of the National Research, Development and Innovation (NRDI) Office of Hungary and the Hungarian Academy of Sciences through the Bolyai and Bolyai+ scholarships.
|
http://arxiv.org/abs/2307.04456v1 | 20230710101101 | Invex Programs: First Order Algorithms and Their Convergence | [
"Adarsh Barik",
"Suvrit Sra",
"Jean Honorio"
] | math.OC | [
"math.OC",
"cs.LG"
] |
Invex Programs: First Order Algorithms and Their Convergence
Adarsh Barik, Suvrit Sra, Jean Honorio
==================================================================================================
Invex programs are a special kind of non-convex problems which attain global minima at every stationary point. While classical first-order gradient descent methods can solve them, they converge very slowly. In this paper, we propose new first-order algorithms to solve the general class of invex problems. We identify sufficient conditions for convergence of our algorithms and provide rates of convergence. Furthermore, we go beyond unconstrained problems and provide a novel projected gradient method for constrained invex programs with convergence rate guarantees. We compare and contrast our results with existing first-order algorithms for a variety of unconstrained and constrained invex problems. To the best of our knowledge, our proposed algorithm is the first algorithm to solve constrained invex programs.
§ INTRODUCTION
Many learning problems are modeled as optimization problems. With the explosion in deep learning, many of these problems are modeled as non-convex optimization problems — either by using non-convex objective functions or by the addition of non-convex constraints. While well-studied algorithms with fast convergence guarantees are available for convex problems, such mathematical tools are more limited for non-convex problems. In fact, the general class of non-convex optimization problems is known to be NP-hard <cit.>. Coming up with global certificates of optimality is the major difficulty in solving non-convex problems. In this paper, we take the first steps towards solving a special class of non-convex problems, called invex problems, which attain global minima at every stationary point <cit.>. Invex problems are tractable in the sense that we can use local certificates of optimality to establish the global optimality conditions.
Related work.
First-order gradient descent methods are the most well-known algorithms to solve convex optimization problems. While they can also solve invex optimization problems under certain conditions, they can be really slow in their convergence due to their inability to use any underlying `invex' geometry. <cit.> have studied the minimization of a special class of unconstrained invex functions – called geodesically convex functions. They provide convergence rate guarantees for their algorithms assuming upper bounds on the sectional curvature of the manifold. Such algorithms have also been studied by <cit.> in their work on optimization methods on Riemannian manifolds, albeit with a focus on asymptotic convergence. The simplest instance of geodesically convex optimization is more commonly known as geometric programming <cit.>. The algorithms solving geodesically convex problems use topological properties of geodesic curves for their convergence. Often, finding the underlying geodesic curves and characterizing the manifold prove to be the bottleneck for solving such problems. These difficulties extend naturally to the general class of invex problems where topological properties are difficult to establish. In this work, while we do connect properties of invex functions with the topology of the domain, we also develop algebraic methods for implementing our proposed algorithm. Our focus in this work is to develop first-order methods with provable global convergence rates for a broader class of invex problems. Our method reduces to classical gradient descent (Riemannian gradient descent <cit.>) if the underlying function is convex (geodesically convex). Many optimization problems can be classified as invex problems by using the simple characterization by <cit.>. We provide some such examples that have been studied in recent years as motivation. <cit.> showed that geodesically convex functions are invex. This means that problems such as matrix Karcher mean problem <cit.>, power control <cit.>, optimal doping profile <cit.> and non-convex matrix factorization <cit.> are invex. Any function which satisfies PL-inequality <cit.> is an invex function. This implies that averaged-out non-convex functions <cit.> are also invex. Similarly, quasar-convex functions <cit.> can also be shown to be invex. Recent studies have shown that many machine learning problems such as learning output kernels <cit.>, multiple tasks learning <cit.>, minimum distance lasso <cit.>, reinforcement learning with general utilities <cit.>, fair sparse regression <cit.>, sparse mixed linear regression <cit.>, imaging with invex regularizers <cit.> and DAG learning <cit.> are also invex. Identifying a problem to be invex is relatively a simple task, whereas coming up with an efficient algorithm to solve such a problem by leveraging the invexity is quite challenging. Furthermore, convergence rate analysis of such algorithms becomes even more tedious. To the best of our knowledge, we are not aware of any provably convergent general algorithm to solve invex problems.
In this paper, we present first-order methods to solve invex problems with provable convergence rate guarantees under some natural technical conditions.
Summary of contributions
* We present a first-order gradient descent algorithm for invex problems (Algorithm <ref>). We demonstrate the feasibility of our update rule over a wide variety of examples (Section <ref>).
* As an extension, we propose a projected gradient descent algorithm for constrained invex problems (Algorithm <ref>). We show that our algorithm works for constrained geodesically convex programs in Hadamard manifolds (Section <ref>).
* We provide convergence rate guarantees (Table <ref>) for our proposed algorithms (Theorem <ref>, <ref>, <ref>, <ref>, <ref>) under varying degree of assumptions. We identify sufficient technical conditions needed for the convergence of our proposed algorithm.
* Finally, we show the applicability of our algorithms on both unconstrained <cit.> and constrained <cit.> machine learning problems in Section <ref>. We show that under the same initialization and hyperparameters, our algorithms outperform the standard gradient descent algorithms.
§ INVEXITY
In this section, we formally define the invex function and relate it with convexity along a curve. Consider a differentiable function ϕ(x) defined on a Riemannian manifold ℳ. Let ⟨·, ·⟩_x be the inner product in the tangent space T_x ℳ at x induced by the Riemannian metric.
Let ϕ(x) be a differentiable function defined on ℳ. Let η be a vector-valued function defined on ℳ × ℳ such that ⟨η(y, x), ∇ϕ(x)⟩_x is well-defined ∀ x, y ∈ ℳ. Then, ϕ(x) is an η-invex function if
ϕ(y) - ϕ(x) ≥ ⟨η(y, x), ∇ϕ(x)⟩_x, ∀ x, y ∈ ℳ .
If the manifold is ℝ^n, then we get the standard definition of invex functions <cit.>. Convex functions are invex functions on ℝ^n with η(y, x) = y - x. In that sense, invex functions are a generalization of convex functions. <cit.> proved the necessary and sufficient condition that any stationary point of an invex function is a global minimum. It follows that (at least in ℝ^n) any algorithm that converges to a stationary point can, in principle, solve unconstrained invex problems. However, convergence rate guarantees are not available for any such algorithms. Similarly, geodesically convex functions on a Riemannian manifold are η-invex with η(y, x) = Exp^{-1}_x(y), where Exp^{-1}_x is the inverse of the exponential map y = Exp_x(v) for some v in the tangent space at the point x ∈ ℳ. This motivates us to characterize invex functions by treating them as convex functions along a curve. More formally, we provide the following proposition from <cit.>.
A differentiable real function ϕ(x) defined on ℳ is η-invex if and only if for every x, y ∈ ℳ, the real function g_x, y(t) = ϕ(γ_x, y(t)) is convex on [0, 1] for some curve γ_x,y such that
γ_x, y(0) = x, γ_x, y(1) = y, γ̇_x, y(u) (t - u) = η(γ_x, y(t), γ_x, y(u)), ∀ t, u ∈ [0, 1] .
Proposition <ref> immediately provides a setting for η(y, x) in terms of underlying curve, i.e., η(y, x) = γ̇_x, y(0).
For convex functions, the underlying curve γ_x, y(t) = x + t (y - x). Similarly, for a geodesically convex function, the underlying curve γ_x, y(t) is the geodesic curve joining x and y. We notice, however, that finding the underlying curve for any given η-invex function may not be an easy task. We observe that proposition <ref> allows us to connect invexity of a function to a geometric property (underlying curves) of the domain of the function.
This leads us to define invex sets as a natural extension of convex sets.
A subset of ℳ is called an η-invex set if it contains every curve γ_x, y of ℳ, as defined in Proposition <ref>, whose endpoints x and y lie in the subset.
It is also possible to characterize invex sets using η(y, x) functions by using the relationship between γ_x, y and η(y, x) from equation (<ref>). Thus, we sometimes refer to the invex set as η-invex set with the assumption that η(y, x) is computed using γ_x, y.
We note that our definition is a slight departure from the definition of the invex set used in <cit.>. However, we find our definition more natural for our purpose.
Using definition <ref>, we can redefine invex functions on an invex set ⊆ as following:
Let ⊆ be an invex set. A real-valued differentiable function ϕ:→ is called invex if
ϕ(γ_x, y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈, ∀ t ∈ [0, 1]
Definitions <ref> and <ref> are connected with each other through the relationship between γ_x, y and η(y, x) in equation (<ref>).
In the next sections, we will build up our definition of invex sets to define invex programs.
§ INVEX PROGRAM
In this section, we will define the optimization problem that we are trying to solve.
Our optimization problem involves minimizing an η-invex function over an η-invex set. In the remaining paper, we would assume to be an η-invex set unless stated otherwise.
Let f: ℳ → ℝ be an η_1-invex function, and let g_i: ℳ → ℝ, ∀ i ∈ { 1, ⋯, m }, be η_2-invex functions; then the optimization problem
min_{x ∈ ℳ} f(x), such that g_i(x) ≤ 0 , ∀ i ∈ { 1, ⋯, m }
is called an invex program.
It is possible to include equality constraints in the program, but we opt for only inequality constraints for simplicity.
Before we begin to solve the optimization problem (<ref>), we will prove some technical results to understand the problem in a better way. First, we will show that the constraint set is indeed an η_2-invex set. We will do it in two parts.
Let ϕ: ℳ → ℝ be an η-invex function; then the sublevel set { x ∈ ℳ | ϕ(x) ≤ c } is an η-invex set for any c ∈ ℝ.
Next, we use Lemma <ref> to show that the constraint set is an η-invex set.
Let g_i: ℳ → ℝ, ∀ i ∈ { 1, ⋯, m }, be η-invex functions; then the set 𝒞 = ∩_{i=1}^m 𝒞_i is η-invex, where 𝒞_i = { x ∈ ℳ | g_i(x) ≤ 0 }.
Invex programs without any constraints are called unconstrained invex programs. In the next section, we propose a first-order method to solve invex programs.
§ NEW FIRST ORDER ALGORITHM
In this section, we develop first-order gradient descent methods for invex programs. We start with the unconstrained version of problem (<ref>) and then gradually build up our method for the constrained version.
§.§ Invex gradient descent method
The main task in our algorithm is to figure out a y ∈ ℳ for a given x ∈ ℳ and a direction v ∈ T_x ℳ such that η(y, x) = v. Such a y need not be unique, and we are only interested in finding one y (of possibly many) that satisfies η(y, x) = v. We provide the following gradient descent algorithms to solve invex programs.
In Algorithm <ref>, T is the maximum number of iterations and α_k is the step size, which depends upon the particular implementation of the algorithm. We will specify a particular choice of α_k in the convergence rate analysis of Algorithm <ref>. Without any information on the underlying curve γ_x, y(t), the update step of Algorithm <ref>, i.e., finding a y ∈ ℳ such that η(y, x) = v, is a problem-dependent task. Below we provide an array of examples to illustrate this observation.
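A minimal sketch of Algorithm <ref> is given below. The problem-dependent part is encapsulated in a user-supplied map eta_solve(x, v) that returns some y with η(y, x) = v; the function names and the stopping rule are assumptions of this sketch, and the convex special case is included only as a sanity check.

```python
import numpy as np

def invex_gradient_descent(grad_f, eta_solve, x0, alpha=0.1, T=1000, tol=1e-8):
    """Each iteration finds some y with eta(y, x_k) = -alpha * grad_f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        x = eta_solve(x, -alpha * g)
    return x

# Convex special case: eta(y, x) = y - x, so eta_solve is simply x + v.
# Toy check on f(x) = ||x - 1||^2, whose minimizer is the all-ones vector.
x_star = invex_gradient_descent(lambda x: 2.0 * (x - 1.0),
                                lambda x, v: x + v,
                                x0=np.zeros(3))
```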
[Convex case]
For convex problems, y = x + v.
[Geodesically convex case]
For geodesically convex problems, y = _x(v) where is exponential map as defined in <cit.>.
[PL inequality]
It is known that the functions satisfying PL-inequality are invex <cit.>. However, this characterization does not readily lead to a good η(y, x). We provide an η function in the following lemma which can be used in the update step.
Let f(x) be an L-smooth function that satisfies the PL inequality for some μ > 0. Then it is η-invex for η(y, x) = (1/μ) (∇ f(y) + (L ‖y - x‖ / ‖∇ f(x)‖) ∇ f(x)).
The proof of Lemma <ref> and further discussion is deferred to Appendix <ref>.
[Quasar Convex Functions]
<cit.> showed that quasar convexity implies invexity. However, they do not provide any η for the quasar convex functions. In the following lemma, we provide an η for quasar convex functions.
For any ν≥ 0, there exists a β∈ [0, 1] such that quasar convex functions are η-invex for η(y, x) = β/ν(1 - β) (y - x).
This leads to the update y = x + ν1 - β/β v. We provide the proof of Lemma <ref> in Appendix <ref>.
[Connection with Bregman divergence and Mirror descent]
Let B_ψ(y, x) be the Bregman divergence associated with a strongly convex and differentiable function ψ(x), i.e., B_ψ(y, x) = ψ(y) - ψ(x) - ⟨∇ψ(x), y - x⟩.
Let η(y, x) = ∇ B_ψ(y, x) = ∇ψ(y) - ∇ψ(x), i.e., η(y, x) is a conservative field and B_ψ(y, x) is its potential function. Then a typical mirror descent update <cit.> can be used to compute y, i.e., y = arg inf_u B_ψ(u, x) + α ⟨∇ f(x), u - x⟩.
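For one concrete choice of potential, the mirror descent update has a closed form. The sketch below uses the negative-entropy potential on the positive orthant, for which the minimizer of B_ψ(u, x) + α ⟨∇ f(x), u - x⟩ is obtained coordinate-wise; this particular ψ is an assumption chosen for illustration, not the only possibility.

```python
import numpy as np

def mirror_descent_step(x, grad, alpha):
    """Update y = argmin_u B_psi(u, x) + alpha * <grad, u - x> for the
    negative-entropy potential psi(u) = sum_i u_i log u_i on u > 0.
    Setting the derivative log(u_i / x_i) + alpha * grad_i to zero gives
    the closed form y_i = x_i * exp(-alpha * grad_i)."""
    return x * np.exp(-alpha * grad)
```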
[Recent Invex Problems]
Some recently studied problems in invexity, such as <cit.> and <cit.>, are invex for a particular form of η(y, x). In particular, consider any point x ∈ ℝ^n of the form x = [ x_1^⊤ x_2^⊤ ]^⊤, where x_1 ∈ ℝ^n_1, x_2 ∈ ℝ^n_2 and n_1 + n_2 = n. Then for any two x, y ∈ ℝ^n, η(y, x) takes the form η(y, x) = [ (y_1 - x_1)^⊤ (A(y_1, x_1) (y_2 - x_2))^⊤ ]^⊤
where A(y_1, x_1) ∈ ℝ^{n_2 × n_2} and A(y_1, x_1) ≻ 0, ∀ y_1, x_1 ∈ ℝ^n_1. For such problems, with v split conformably as v = [ v_1^⊤ v_2^⊤ ]^⊤, the update step in Algorithm <ref> becomes y_1 = x_1 + v_1, y_2 = x_2 + A(y_1, x_1)^{-1} v_2.
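This block-structured update is easy to implement once the matrix-valued map A is available; a small sketch follows, with v split into its two blocks as described above. The callable A is assumed to return the positive definite matrix A(y_1, x_1).

```python
import numpy as np

def block_eta_update(x1, x2, v1, v2, A):
    """y1 = x1 + v1 and y2 = x2 + A(y1, x1)^{-1} v2 for the block-structured eta."""
    y1 = x1 + v1
    y2 = x2 + np.linalg.solve(A(y1, x1), v2)  # solve instead of forming the inverse
    return y1, y2
```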
[A generic approach using function inverse]
A generic approach to compute y such that η(y, x) = v is to treat η(y, x) = g(y) for a fixed x and then compute y = g^-1(v). This approach works as long as we have explicit closed-form expression for g^-1(v). For our purpose, we ignore the uniqueness of y = g^-1(v) and allow any y as long as g(y) = v.
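When no closed form is available, the inversion can also be attempted with a black-box root finder, as sketched below. This is only a generic fallback and can be expensive; any root is acceptable since uniqueness of y is not required.

```python
import numpy as np
from scipy.optimize import root

def eta_solve_numeric(eta, x, v, y0=None):
    """Solve eta(y, x) = v for y numerically, starting from y0 (default: x)."""
    y0 = np.array(x, dtype=float) if y0 is None else y0
    sol = root(lambda y: eta(y, x) - v, y0)
    if not sol.success:
        raise RuntimeError("could not invert eta at this point")
    return sol.x
```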
§.§ Convergence of invex gradient descent method
We start the convergence analysis of Algorithm <ref> with the weakest set of assumptions, and then we gradually add stronger conditions to get a better convergence rate. Before we delve into our first result of convergence, we define a notion of smoothness in the invex manifold .
A differentiable function f: ℳ → ℝ is called L-smooth on an η-invex set if, ∀ x, y in that set,
f(y) ≤ f(x) + ⟨η(y, x), ∇ f(x)⟩_x + (L/2) ‖η(y, x)‖^2,
where the norm ‖·‖ is induced by the Riemannian metric at x.
Note that a function f need not be an invex function to be an L-smooth function. Our first convergence guarantee is for L-smooth functions.
Let f be a L-smooth function and f^* = min_x ∈ f(x) ≥ B for some B > - ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then lim_k →∞∇ f(x_k) = 0.
Theorem <ref> states that Algorithm <ref> converges to a stationary point even if the function is not invex. Our next task is to achieve a better convergence rate by adding the assumption that f is an invex function. However, to do that, we need to impose an extra condition on the choice of η(·, ·) which in turn imposes an extra condition on the geometry of . To make Algorithm <ref> amenable to rigorous convergence rate analysis, we impose a sufficient condition on the geometry of which is analogous to triangle inequality in Euclidean space.
[Triangle Inequality]
Let x, y, z ∈ ℳ; then for some b, c > 0
‖η(y, z)‖^2 ≤ ‖η(x, z)‖^2 + b ‖η(y, x)‖^2 - c ⟨η(y, x), η(z, x)⟩_x .
The triangle inequality assumption is an assumption on the geometry of manifold . We also note that Euclidean spaces clearly satisfy Assumption <ref> by simply taking b=1 and c = 2. <cit.> showed that any Riemannian manifold with sectional curvature upper bounded by κ≤ 0 also satisfies Assumption <ref>. Now, we are ready to state our second convergence result.
Let f: ℳ → ℝ be an L-smooth η-invex function such that ℳ satisfies Assumption <ref>. Furthermore, let x^* = arg min_{x ∈ ℳ} f(x) be such that f(x^*) > -∞ and ‖η(x_0, x^*)‖ ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α ∈ (0, 2/L), then f(x_k) converges to f(x^*) at the rate 𝒪(1/k).
We further improve convergence rate results by imposing even more conditions on function f. We define μ-strongly η-invex functions as a natural extension to μ-strongly convex functions as follows.
A differentiable function f: ℳ → ℝ is called a μ-strongly η-invex function for some μ > 0 if f(y) ≥ f(x) + ⟨η(y, x), ∇ f(x)⟩_x + (μ/2) ‖η(y, x)‖^2, where the norm ‖·‖ is induced by the Riemannian metric at x.
We provide the following convergence results for the μ-strongly η-invex functions.
Let f: ℳ → ℝ be an L-smooth, μ-strongly η-invex function such that ℳ satisfies Assumption <ref>. Furthermore, let x^* = arg min_{x ∈ ℳ} f(x) be such that f(x^*) > -∞, ‖η(x_0, x^*)‖ ≤ M < ∞ and ‖η(y, x)‖_x^2 ≤ R ‖η(x, y)‖_y^2, ∀ x, y ∈ ℳ, for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α ∈ (0, min(2/(R μ c), c/(2bL))), then
‖η(x_{k+1}, x^*)‖^2 ≤ (1 - c α R μ/2)^{k+1} M^2 .
We have intentionally chosen to show convergence results for a constant step size of α_t for simplicity. It is not difficult to get better convergence rates by carefully choosing α_t. It is easy to verify that all our results hold for the convex case. They also extend nicely to all the results in <cit.> for geodesically convex case.
§.§ Projected invex gradient descent method
Now that we have shown convergence results for unconstrained invex programs. We can extend these results to constrained case by providing a projected invex gradient descent method. We first discuss projection on an invex set before providing the algorithm.
Let 𝒞 ⊆ ℳ be an η-invex set. We define the projection of x ∈ ℳ onto 𝒞 as a retraction.
Let γ_x, y(t) be the curve connecting x ∈ ℳ to y ∈ 𝒞 such that γ_x, y(0) = x and γ_x, y(1) = y. The projection ρ_η(x) of x onto 𝒞 is defined as ρ_η(x) = arg min_{y ∈ 𝒞} ‖η(y, x)‖.
It is easy to see that for convex sets, the projection reduces to finding the y ∈ 𝒞 which is closest to x in Euclidean distance. Also, notice that if x ∈ 𝒞, then ρ_η(x) = x.
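In the absence of problem structure, the projection can itself be posed as a small constrained minimization, as the following sketch illustrates for a constraint set described by inequality constraints g_i(y) ≤ 0. The use of SLSQP and the sign convention of its inequality constraints are implementation assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def eta_projection(eta, x, constraint_fns, y0=None):
    """Numerical sketch of rho_eta(x) = argmin over y in C of ||eta(y, x)||."""
    y0 = np.array(x, dtype=float) if y0 is None else y0
    # SLSQP "ineq" constraints require fun(y) >= 0, hence the sign flip for g_i(y) <= 0
    cons = [{"type": "ineq", "fun": (lambda y, g=g: -g(y))} for g in constraint_fns]
    res = minimize(lambda y: np.linalg.norm(eta(y, x)), y0,
                   method="SLSQP", constraints=cons)
    return res.x
```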
First, observe that in the invex program as defined in <ref> the objective function f is η_1-invex while the constraint set is η_2-invex. Thus, we make the update in two steps: the first step works with the η_1-invex geometry of the objective, and the iterate is then projected back onto the η_2-invex constraint set. The convergence rates of the invex gradient descent algorithm can be extended to the projected invex gradient descent algorithm under the following condition (details in Appendix <ref>).
[Contraction]
Let x, y ∈ and ρ_η_2(x), ρ_η_2(y) are their projection on an η_2-invex set respectively. Then, η_1( ρ_η_2(y), ρ_η_2(x) ) _ρ_η_2(x)≤η_1( y, x ) _x.
Next, we will discuss the convergence of Algorithm <ref>.
§.§ Convergence of Projected Invex Gradient Descent Method
To guarantee convergence of Algorithm <ref>, we need to place extra technical conditions on the projection operator. In particular, the following condition suffices to ensure convergence.
[Contraction]
Let x, y ∈ and ρ_η_2(x), ρ_η_2(y) are their projection on an η_2-invex set respectively. Then, η_1( ρ_η_2(y), ρ_η_2(x) ) _ρ_η_2(x)≤η_1( y, x ) _x.
Next, we will show that once Assumption <ref> is satisfied by the projection operator, results from Theorems <ref> and <ref> extend nicely to Algorithm <ref>.
Let f: → be an L-smooth η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η_1(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L) and with projection operator satisfying Assumption <ref>, then f(x_k) converges to f(x^*) at the rate (1/k).
We have a similar result for μ-strongly η-invex functions.
Let f:→ be an L-smooth μ-strongly η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, η_1(x_0, x^*) ≤ M < ∞ and η_1(y, x) _x^2 ≤ R η_1(x, y) _y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min (2/ R μ c , c/2bL)) and with projection operator satisfying Assumption <ref>, then
η_1(x_k+1 , x^*) ^2 ≤(1 - c α R μ/2)^k+1 M^2 .
Assumption <ref> clearly holds for convex objective functions on convex constraints and thus, it is a natural choice of assumption to impose on the general case of constrained invex programs. In fact, in the next subsection, we show that it also holds for geodesically convex programs.
§.§ Constrained geodesically convex problem
In recent literature, there has been a lot of focus on constrained geodesically convex problems (with both the objective function and constraints being geodesically convex) <cit.>. Our projected gradient algorithm <ref> works for constrained case of geodesically convex optimization problems with sectional curvature upper bounded by κ≤ 0. To that end, we can show that Assumption <ref> holds in this particular case.
Let ρ be the projection operator as defined in <ref> on a closed geodesically convex subset of a simply connected Riemannian manifold with sectional curvature upper bounded by κ≤ 0. Then the projection satisfies Assumption <ref>.
Thus, we can use Algorithm <ref> to solve constrained geodesically convex problems with sectional curvature upper bounded by κ≤ 0. This extends all the results from <cit.> to the constrained case and provides a novel method to solve constrained geodesically convex problems.
§ APPLICATIONS
In this section, we provide specific examples of invex programs to validate our theory. Our task is to provide a working η(y, x) for all the problems and explicitly construct the update step and projection step (if needed). Finally, we compare the performance of our algorithm with the gradient descent (or projected gradient descent) algorithm. The latter of which provides no convergence rate guarantees for invex problems. We chose to go with the vanilla implementation for both algorithms, i.e., without any performance enhancement tricks such as line search. This was done to ensure that the comparison remains fair as our algorithms can also be adapted to include such tricks to further boost its performance. However, that is not our focus in this work.
§.§ Log-determinant acyclicity characterization for DAGs
We start with an unconstrained invex program. <cit.> provided a novel characterization of the acyclicity of DAGs in their recent work. Their characterization employs a log-determinant function and is stated in Theorem 1 of <cit.>. Let 𝒲 ≜ { W ∈ ℝ^d × d | s > r(W ∘ W) }. Their log-determinant acyclicity characterization of DAGs uses the function h(W) = - log det(sI - W ∘ W) + d log s
where W ∈ 𝒲, I is the identity matrix, r(·) denotes the spectral radius and A ∘ B denotes the Hadamard product of two matrices. We take s to be 1 without loss of generality, and thus h(W) = - log det(I - W ∘ W). They show in Corollary 3 of <cit.> that h(W) is an invex function. However, they do not provide any specific η for invexity. Next, we will provide a possible η for the problem, but before that we need to define the Hadamard division of two same-sized matrices A, B ∈ ℝ^d × d as (A ⊘ B)_ij = A_ij/B_ij when B_ij ≠ 0 and 0 otherwise. Now we are ready to state the following lemma.
The function h(W) = - log det(I - W ∘ W), ∀ W ∈ 𝒲, is η-invex for η(U, W) = -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W.
We can use our proposed η to construct updates for Algorithm <ref>. Observe that for a step size α, the update step in Algorithm <ref> is η(W_{k+1}, W_k) = - α ∇ h(W_k), with ∇ h(W_k) = 2 (I - W_k ∘ W_k)^{-⊤} ∘ W_k. We take M = (I - W_{k+1} ∘ W_{k+1}) and N = (I - W_k ∘ W_k) for clarity. Then the update step becomes -1/2 (N (log M - log N)) ⊘ W_k = - 2 α N^{-⊤} ∘ W_k, and we get M = exp( log N + 4 α N^{-1} ((N^{-⊤} ∘ W_k) ∘ W_k) ), which provides the update step.
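A direct implementation of this update is sketched below using matrix exponentials and logarithms. Recovering W_{k+1} from M = I - W_{k+1} ∘ W_{k+1} only determines the entries of W_{k+1} up to sign; keeping the signs of the current iterate is an assumption of this sketch rather than something prescribed in the text.

```python
import numpy as np
from scipy.linalg import expm, logm, inv

def logdet_dag_update(W, alpha):
    """One update of Algorithm 1 for h(W) = -log det(I - W o W), with s = 1."""
    I = np.eye(W.shape[0])
    N = I - W * W                                # N = I - W o W (Hadamard square)
    grad_term = (inv(N).T * W) * W               # (N^{-T} o W) o W
    M = expm(logm(N) + 4.0 * alpha * inv(N) @ grad_term)
    # M = I - W_next o W_next, so |W_next| = sqrt(I - M); signs are taken from W,
    # which keeps entries that are exactly zero in W at zero.
    W_next = np.sign(W) * np.sqrt(np.clip((I - M).real, 0.0, None))
    return W_next
```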
We used this update step to implement Algorithm <ref>. The performance of our algorithm was compared against the standard gradient descent algorithm. Both the algorithms were run with a random initialization of W which was kept the same for both algorithms. We found that the gradient descent algorithm failed to converge in several instances but our algorithm converged towards zero objective function value as predicted by <cit.> (See Figure <ref>).
§.§ Fair sparse regression
Our next example is a constrained invex program. <cit.> proposed a novel invex relaxation for the fair sparse regression problem. In this problem, each data point is associated with one of two groups, and the response variable is generated with a signed bias term based on the group membership. They use the following generative model: y_i = X_i^⊤ w^* + γ z_i^* + e_i, ∀ i ∈ {1, ⋯, n},
where e_i is a zero mean independent additive noise and z_i^* is the group membership. The task is to identify regression vector w^* ∈^d along with z_i^* for every data point. <cit.> proposed the following invex relaxation for this problem.
min_{w, Z} ⟨M(w), Z⟩ + λ_n ‖w‖_1 such that (Z) = 1, Z ≽ 0 ,
where
M(w) ≜ [ (1/n)(Xw - y)^⊤(Xw - y) + 1    (γ/n)(Xw - y)^⊤ ;  (γ/n)(Xw - y)    (γ^2/n + 1) I ]
with X ∈ ℝ^n × d being the data matrix and I being the identity matrix of appropriate dimension. They provide an η_1 for the objective function, and it is obvious that the constraints are convex (we ignore the dimensions of the matrices for succinct representation). Thus,
η_1((w̄, Z̄), (w, Z)) = [ w̄ - w; M(w)^{-1} M(w̄) (Z̄ - Z) ], η_2((w̄, Z̄), (w, Z)) = [ w̄ - w; Z̄ - Z ] .
We used these η functions to construct the updates and projection for Algorithm <ref>. Let f(w, Z) = ⟨M(w), Z⟩. Let ∇_w f(w, Z) = ∂⟨M(w), Z⟩/∂w and ∇_Z f(w, Z) = M(w); then, using the η functions and step size α, we write the following update steps for this problem:
w_t+1 = ∏_λ(w_t - α∇_w f(w_t, Z_t)), Z̅_t+1 = Z_t - α M(w_t+1)^-1M(w_t) ∇_Z f(w_t, Z_t) ,
where ∏_λ(·) is the projection operator which uses soft thresholding to deal with ℓ_1-regularization. We need to project Z̅_t+1 on constraints to get the final Z_t+1.
Z_{t+1} = min_Z ‖Z - Z̅_{t+1}‖_F^2 such that (Z) = 1, Z ≽ 0
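Two standard building blocks underlie these update and projection steps: soft thresholding for the ℓ_1-regularized w-update (the operator ∏_λ above) and eigenvalue clipping for the positive semidefinite part of the Z-projection. The sketch below shows both; the additional normalization constraint on Z in the display above can be layered on top, for example by alternating projections, which this sketch omits.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (elementwise soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def project_psd(Z):
    """Frobenius-norm projection onto the PSD cone via eigenvalue clipping."""
    Zs = 0.5 * (Z + Z.T)                 # symmetrize against numerical asymmetry
    w, V = np.linalg.eigh(Zs)
    return (V * np.maximum(w, 0.0)) @ V.T
```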
We used update rules from equation (<ref>) and (<ref>) to implement Algorithm <ref>. We compared its performance against the projected gradient descent algorithm. The hyper-parameters (such as λ and α) and initial values of w and Z were kept the same across both algorithms. We report our results in Figure <ref>. We see that both algorithms perform in a similar manner. We expect this behavior as when w_t is close to w_t+1 the update rules are the same for both the algorithms.
§.§ Mixed linear regression
In mixed linear regression, measurements come from one of two different linear regression models, and the task is to identify the two regression vectors and the model associated with each data point. Mathematically, each data point is generated as follows: y_i = z_i^* X_i β_1^* + (1 - z_i^*) X_i β_2^* + e_i, ∀ i ∈ {1, ⋯, n}, where β_1^* and β_2^* are d-dimensional vectors. <cit.> proposed an invex program to solve this problem. Let f(t, W, U) = ∑_{i=1}^n ( 1/2 ⟨S_i, W + U⟩ + 1/2 t_i ⟨S_i, W - U⟩ ), g(t, W, U) = ‖vec(W)‖_1 and h(t, W, U) = ‖vec(U)‖_1, where S_i = [ X_i; -y_i ][ X_i^⊤  -y_i ]
and the operator vec(·) vectorizes a matrix. Their invex formulation is given as:
min_{t, W, U} f(t, W, U) + λ_1 g(t, W, U) + λ_2 h(t, W, U)
such that W ≽ 0, U ≽ 0, W_{d+1, d+1} = 1, U_{d+1, d+1} = 1, ‖t‖_∞ ≤ 1
The constraints of the problem are clearly convex. <cit.> also provide an η_1 for the objective function, but it does not lend itself well to constructing the update rules required for Algorithm <ref>. We bypass this problem by showing that when W ≠ U the objective function is invex for a different η_1. When W = U, we revert to the η provided by <cit.>. To that end, we prove the following lemma.
Assume that W ≠ U; then the functions f(t, W, U) = ∑_{i=1}^n ( 1/2 ⟨S_i, W + U⟩ + 1/2 t_i ⟨S_i, W - U⟩ ), g(t, W, U) = ‖vec(W)‖_1 and h(t, W, U) = ‖vec(U)‖_1 are η_1-invex for
η_1((t̄, W̄, Ū), (t, W, U)) = [ τ ∘ (t̄ - t); W̄ - W; Ū - U ] ,
where τ(W̄, Ū, W, U) ∈ ℝ^n is such that τ_i = ⟨S_i, W̄ - Ū⟩ / ⟨S_i, W - U⟩.
Now we are ready to construct update and projection rules.
Let (∇_t f(t, W, U))_i = 1/2 ⟨S_i, W - U⟩, ∀ i ∈ {1, ⋯, n}, ∇_W f(t, W, U) = ∑_{i=1}^n (t_i + 1)/2 S_i and ∇_U f(t, W, U) = ∑_{i=1}^n (1 - t_i)/2 S_i; then we propose the following update steps for step size α:
W̅_{k+1} = ∏_{λ_1}(W_k - α ∇_W f(t_k, W_k, U_k)), U̅_{k+1} = ∏_{λ_2}(U_k - α ∇_U f(t_k, W_k, U_k))
t̅_{k+1} = t_k - α ∇_t f(t_k, W_k, U_k) ⊘ τ(W_{k+1}, U_{k+1}, W_k, U_k) ,
where ∏_λ(·) is the projection operator which uses soft thresholding to deal with ℓ_1-regularization. We use the following projection steps to get W_k+1, U_k+1 and t_k+1.
W_{k+1} = min_W ‖W - W̅_{k+1}‖_F^2 such that W_{d+1, d+1} = 1, W ≽ 0 ,
U_{k+1} = min_U ‖U - U̅_{k+1}‖_F^2 such that U_{d+1, d+1} = 1, U ≽ 0 ,
t_{k+1} = min_t ‖t - t̅_{k+1}‖_2^2 such that ‖t‖_∞ ≤ 1 .
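The t-projection is simple clipping onto the ℓ_∞ ball, while the W- and U-projections combine a positive semidefiniteness constraint with a fixed corner entry. The sketch below enforces the latter pair heuristically by alternating projections; Dykstra's algorithm or an exact solver would give the true nearest point, so this is only an illustration.

```python
import numpy as np

def project_linf_ball(t, radius=1.0):
    """Projection onto {t : ||t||_inf <= radius} is elementwise clipping."""
    return np.clip(t, -radius, radius)

def project_psd_fixed_corner(W, iters=50):
    """Alternate between the PSD cone and the affine set {W[-1, -1] = 1}."""
    X = 0.5 * (W + W.T)
    for _ in range(iters):
        w, V = np.linalg.eigh(X)
        X = (V * np.maximum(w, 0.0)) @ V.T   # project onto the PSD cone
        X[-1, -1] = 1.0                      # enforce W_{d+1, d+1} = 1
    return X
```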
We implemented Algorithm <ref> using update and projection rules from equation (<ref>) and equation (<ref>). Like before, we compared the performance of our algorithm with the projected gradient descent method with the same set of hyperparameters and initialization. We report our results in Figure <ref>. We see that our algorithm converges faster than the projected gradient descent algorithm.
§ CONCLUSION AND FUTURE WORK
In this work, we have taken the first steps towards providing algorithms to solve constrained and unconstrained invex programs within certain technical conditions. We show that our algorithm can be used to solve constrained geodesically convex optimization problems with provable convergence rate guarantees. We also show the applicability of our proposed algorithm in a variety of machine-learning applications. Our work employs some natural assumptions for mathematical analysis. But these are only sufficient conditions for convergence. As the future direction, it would be interesting to see if these assumptions can be relaxed without losing on the convergence rate guarantees. From an application point of view, it would also be interesting to come up with an explicit form of update rules and the projection operator from a given η for a large class of invex problems. Another direction of research could be to study the accelerated version of our algorithms. While already for the subclass of invex problems, namely, geodesically convex ones, it is known that without further assumptions/restrictions, global acceleration similar to the Euclidean Nesterov acceleration does not hold. However, it is a valuable question to explore conditions under which such an acceleration holds for our setting.
apalike
§ FUNCTIONS SATISFYING PL INEQUALITY
PL functions are a special class of (possibly nonconvex) functions that satisfy the following property:
∇ f(x) ^2 ≥μ (f(x) - f(x^*)) ,
<cit.> showed that if an L-smooth function satisfies PL inequality, then it can be shown that it achieves an exponential convergence rate. These functions are known to be invex. Here we provide a characterization of their invexity by providing an η which can be used to construct updates in Algorithm <ref>. To that end, we show the validity of Lemma <ref>.
Lemma <ref>
Let f(x) be an L-smooth function which satisfies PL inequality for some μ > 0. Then it is η-invex for the following η:
η(y, x) = 1/μ(∇ f(y) + L y - x /∇ f(x) ∇ f(x) )
Since f(x) follows PL inequality for some μ > 0, the following inequality holds <cit.> for all x in the domain D of f(x):
∇ f(x) ^2 ≥μ (f(x) - f(x^*)) ,
where x^* = min_x ∈ D f(x). Using Taylor series expansion of g(y) = ∇ f(y)^ v around x and then substituting for v = ∇ f(x), we can write
∇ f(x) ^2 = ∇ f(x)^∇ f(y) + ∇ f(x)^∇^2 f(z) (x - y)
for some z= y + t (x - y), t ∈ [0, 1]. It follows that
∇ f(x)∇ f(y) + ∇^2 f(z) (x - y) ≥μ (f(x) - f(x^*)) ≥μ (f(x) - f(y))
Given that f(x) is L-smooth and
max_ M _2 ≤ L u^ M v = L u v ,
we can write
∇ f(x)∇ f(y) + L y - x /∇ f(x) ∇ f(x) ≥μ (f(x) - f(x^*)) ≥μ (f(x) - f(y))
This completes our proof.
§ QUASAR CONVEX FUNCTIONS
Quasar convex functions <cit.> are another interesting class of possibly nonconvex functions which achieve global minima at all their stationary points. Thus, they fall under the class of invex functions. Here, we propose an η for the class of quasar convex functions which can be used to construct updates in Algorithm <ref>. Below, we prove Lemma <ref>.
Lemma <ref>
For any ν≥ 0, there exists a β∈ [0, 1] such that quasar convex functions are η-invex for η(y, x) = β/ν(1 - β) (y - x).
First, we use the result from Lemma 2 of <cit.> to show that for any ν≥ 0, there exists a β∈ [0, 1], such that
β∇ f(x)^ (y - x - β y/1 - β) ≤ν (f(y) - f(x))
We can simplify equation (<ref>) to write
f(y) ≥ f(x) + ∇ f(x)^β/ν (1 - β) (y - x)
This completes our proof.
We note that β can be computed efficiently using a binary-search algorithm(refer to Algorithm 2 of <cit.>).
§ PROOF OF THEOREMS AND LEMMAS
Before we begin to solve the optimization problem (<ref>), we will prove some technical results to understand the problem in a better way. First, we will show that the constraint set is indeed an η_2-invex set. We will do it in two parts.
Let ϕ:→ be an η-invex function, then ϕ(x) ≤ c is an η-invex set for any c ∈.
Let γ_x, y be the underlying curve connecting x, y ∈ corresponding to η(y, x) satisfying equation (<ref>).
Using definition <ref>, we can redefine invex functions on an invex set ⊆ as following:
Let ⊆ be an invex set. A real-valued differentiable function ϕ:→ is called invex if
ϕ(γ_x, y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈, ∀ t ∈ [0, 1]
Definitions <ref> and <ref> are connected with each other through the relationship between γ_x, y and η(y, x) in equation (<ref>).
Let = { x ∈ | ϕ(x) ≤ c }. We take x, y ∈. We will then need to show that γ_x, y(t) ∈, ∀ t ∈ [0, 1]. Using definition
<ref>
ϕ(γ_x,y(t)) ≤ (1 - t) ϕ(x) + t ϕ(y), ∀ x, y ∈, ∀ t ∈ [0, 1]
≤ (1 - t) c + t c = c
It follows that γ_x, y(t) ∈, ∀ t ∈ [0, 1].
Next, we use Lemma <ref> to show that the constraint set is an η-invex set.
Let g_i:→, ∀ i ∈{ 1, ⋯, m } be η-invex functions, then the set = ∩_i=1^m _i is η-invex where _i = { x ∈ | g_i(x) ≤ 0 }.
Let x, y ∈, then by definition x, y ∈_i, ∀ i ∈{ 1, ⋯, m }. We know from Lemma <ref>, that _i's are η-invex set. Let γ_x, y be the underlying curve connecting x, y. Then, it follows that γ_x, y(t) ∈_i, ∀ i ∈{1, ⋯, m}, ∀ t ∈ [0, 1]. Thus, γ_x, y(t) ∈, ∀ t ∈ [0, 1].
Theorem <ref>(Convergence of L-smooth functions.)
Let f be a L-smooth function and f^* = min_x ∈ f(x) ≥ B for some B > - ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then
lim_k →∞∇ f(x_k) = 0 ,
Since f is an L-smooth function. We have
f(x_k+1) ≤ f(x_k) + η(x_k+1, x_k)∇ f(x_k)_x_k + L/2η(x_k+1, x_k) ^2
Using Algorithm <ref>, we have η(x_k+1, x_k) = - α_k ∇ f(x_k). Thus,
f(x_k+1) ≤ f(x_k) - α∇ f(x_k) ∇ f(x_k)_x_k + L α^2/2∇ f(x_k) ^2
Since α∈ (0, 2/L), it follows that
α (1 - Lα/2) ∇ f(x_k) ^2 ≤ f(x_k) - f(x_k+1)
After telescoping sum and simplification, we get
∑_k=0^∞∇ f(x_k) ^2 ≤f(x_0) - B/α (1 - L α/2)
Since right-hand side of equation (<ref>) is finite, it follows that lim_k →∞∇ f(x_k) = 0.
Theorem <ref>(Convergence of invex functions.)
Let f: → be an L-smooth η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L), then f(x_k) converges to f(x^*) at the rate (1/k).
We apply Equation (<ref>) by taking x = x_k, y = x_k+1 and z = x^*.
η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b η(x_k+1, x_k) ^2 - c η(x_k+1, x_k) η(x^*, x_k)_x_k
From Algorithm <ref>, η(x_k+1, x_k) = -α∇ f(x_k).
η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α∇ f(x_k) η(x^*, x_k)_x_k
Note that since f is η-invex, we have f(x^*) ≥ f(x_k) + η(x^*, x_k)∇ f(x_k). Thus,
η(x_k+1, x^*) ^2 ≤η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α (f(x^*) - f(x_k))
c α (f(x_k) - f(x^*)) ≤η(x_k, x^*) ^2 - η(x_k+1, x^*) ^2 + b α^2 ∇ f(x_k) ^2
Summing over T terms, we get
c α∑_k=0^T (f(x_k) - f(x^*)) ≤η(x_0, x^*) ^2 - η(x_T+1, x^*) ^2 + b α^2 ∑_k=0^T ∇ f(x_k) ^2
Since α∈ (0, 2/L), we make two observations from equation (<ref>),
∑_k=0^T ∇ f(x_k) ^2 ≤ f(x_0) - f(x_T+1)/α (1 - Lα/2)
f(x_k+1) - f(x_k) ≤α (-1 + Lα/2) ∇ f(x_k) ^2 ≤ 0
By using the observations in equation (<ref>), it follows that
c α T ( f(x_T) - f(x^*) ) ≤η(x_0, x^*) ^2 + b α (f(x_0) - f(x^*)) /1 - Lα/2 .
Note that using L-smoothness condition and noticing that ∇ f(x^*) = 0, we can show that
f(x_0) - f(x^*) ≤L/2η(x_0, x^*) ^2
Thus,
f(x_T) - f(x^*) ≤1/T1/cα (1+
b α L /2 - Lα) M^2
This proves our claim.
Theorem <ref>(Convergence of strongly invex functions.)
Let f:→ be an L-smooth μ-strongly η-invex function such that satisfies Assumption <ref>. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, η(x_0, x^*) ≤ M < ∞ and η(y, x) _x^2 ≤ R η(x, y) _y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min (2/ R μ c , c/2bL)), then
η(x_k+1 , x^*) ^2 ≤ (1 - c α R μ/2)^k+1 M^2 .
We begin our proof by proving an auxiliary lemma.
If f is L-smooth then
f(x^*) - f(x) ≤ -1/2L∇ f(x) ^2, ∀ x ∈ .
We can always find a y ∈ such that η(y, x) = - 1/L∇ f(x). Using equation (<ref>) we have,
f(y) ≤ f(x) - 1/L∇ f(x)∇ f(x)_x + L/21/L∇ f(x) ^2
=f(x) - 1/2L∇ f(x) ^2
Clearly, f(x^*) ≤ f(y), thus
f(x^*) - f(x) ≤ -1/2L∇ f(x) ^2 .
This proves our claim.
Now using Assumption <ref> for x = x_k, y = x_k+1 and z = x^*, we have
η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b η(x_k+1, x_k) ^2 - c η(x_k+1, x_k) η(x^*, x_k)_x_k
Now since η(x_k+1, x_k) = -α∇ f(x_k), we have
η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α∇ f(x_k) η(x^*, x_k)_x_k
Using the strong convexity of f and setting y = x^* and x = x_k, we have
η(x_k+1, x^*) ^2 ≤ η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α (f(x^*) - f(x_k) - μ/2η(x^*, x_k) ^2)
Using the condition that η(x^*, x_k) ^2 ≤ R η(x_k, x^*) ^2, we have
η(x_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η(x_k, x^*) ^2 + b α^2 ∇ f(x_k) ^2 + c α (f(x^*) - f(x_k))
Using L-smoothness of f and Lemma <ref>, we have
η(x_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η(x_k, x^*) ^2 - α ( - 2 b α L + c )(f(x_k) - f(x^*))
Taking α≤min (2/ R μ c , c/2bL), we get
η(x_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η(x_k, x^*) ^2
We prove our result by unrolling the recurrence in equation (<ref>).
§.§ Convergence of Projected Invex Gradient Descent Method
Next, we will show that once Assumption <ref> is satisfied by the projection operator, results from Theorems <ref> and <ref> extend nicely to Algorithm <ref>.
Let f: → be an L-smooth η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞ and η_1(x_0, x^*) ≤ M < ∞. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, 2/L) and with projection operator satisfying Assumption <ref>, then f(x_k) converges to f(x^*) at the rate (1/k).
First notice that since x^* ∈, we have ρ_η_2(x^*) = x^*. We follow the same proof technique as Theorem <ref> until equation (<ref>) which becomes:
c α (f(x_k) - f(x^*)) ≤η_1(x_k, x^*) ^2 - η_1(y_k+1, x^*) ^2 + b α^2 ∇ f(x_k) ^2
Using Assumption <ref>, we know that η_1(y_k+1, x^*) ^2 ≥η_1(ρ_η_2(y_k+1), x^*) ^2 = η_1(x_k+1, x^*) ^2 and thus remaining steps of the proof follow.
We have a similar result for μ-strongly η-invex functions.
Let f:→ be an L-smooth μ-strongly η_1-invex function such that satisfies Assumption <ref>. Let ⊆ be an η_2-invex set. Furthermore, let x^* = min_x ∈ f(x) such that f(x^*) > -∞, η_1(x_0, x^*) ≤ M < ∞ and η_1(y, x) _x^2 ≤ R η_1(x, y) _y^2, ∀ x, y ∈ for some R > 0. If x_k is computed using Algorithm <ref> with α_k = α∈ (0, min (2/ R μ c , c/2bL)) and with projection operator satisfying Assumption <ref>, then
η_1(x_k+1 , x^*) ^2 ≤(1 - c α R μ/2)^k+1 M^2 .
Again, we notice that since x^* ∈, we have ρ_η_2(x^*) = x^*. We follow the same proof technique as Theorem <ref> until equation (<ref>) which becomes:
η_1(y_k+1, x^*) ^2 ≤ (1 -c α R μ/2 ) η_1(x_k, x^*) ^2
Using Assumption <ref>, we note that η_1(x_k+1, x^*) ^2 = η_1(ρ_η_2(y_k+1), x^*) ^2 ≤η_1(y_k+1, x^*) ^2 and thus remaining steps of the proof follow.
Theorem <ref>(Projection contracts for geodesically convex sets with negative curvature.)
Let ρ be the projection operator as defined in <ref> on a closed geodesically convex subset of a simply connected Riemannian manifold with sectional curvature upper bounded by κ≤ 0. Then the projection satisfies Assumption <ref>.
First note that since geodesics are constant velocity curves, there exists a parameterization such that d(y, x) = γ̇_x, y(0) = η(y, x) where d(y, x) is the length of geodesic between x and y. Thus, it only remains to show that d(y, x) contracts for geodesically convex sets (sometimes known as totally convex sets) which follows from Lemma 11.2 of <cit.>.
Lemma <ref>
The function h(W) = - log (I - W ∘ W), ∀ W ∈𝒲 is η-invex for η(U, W) = -1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W.
The invexity of h(W) is already shown by <cit.>. Here, we will verify that our proposed η satisfies equation (<ref>). Note that ∇ h(W) = 2 (I - W ∘ W)^-∘ W. Then ∀ U, W ∈𝒲,
h(U) - h(W) - η(U, W)∇ h(W) = - log (I - U ∘ U) + log (I - W ∘ W) -
-1/2 ((I - W ∘ W) (log(I - U ∘ U) - log(I - W ∘ W))) ⊘ W2 (I - W ∘ W)^-∘ W
= 0
This validates our claim.
Lemma <ref>
Assume that WU, then functions f(t, W, U) = ∑_i=1^n 1/2S_iW + U + 1/2 t_i S_iW - U, g(t, W, U) = (W) _1 and h(t, W, U) = (U) _1 are η_1-invex for
η_1((t, W, U), (t, W, U)) = [ τ∘ (t - t); W - W; U - U ] ,
where τ(W, U, W, U) ∈^n such that τ_i = S_iW - U/S_iW - U.
The invexity of the objective function in (<ref>) is already shown by <cit.>. It suffices to verify that our proposed η satisfies equation (<ref>) for f(t, W, U), g(t, W, U) and h(t, W, U). It can be trivially verified that our η_1 works for g(t, W, U) and h(t, W, U) due to their convexity. For f(t, W, U),
∂ f/∂ t_i = 1/2S_iW - U
∂ f/∂ W = ∑_i=1^n t_i + 1/2 S_i
∂ f/∂ U = ∑_i=1^n 1 - t_i/2 S_i
It is easy to verify that
f(t, W, U) - f(t, W, U) - η_1(·, ·)∇ f(t, W, U) = ∑_i=1^n 1/2S_iW + U + 1/2 t_i S_iW - U -
∑_i=1^n 1/2S_iW + U - 1/2t_i S_iW - U - ∑_i=1^n τ_i 1/2S_iW - U - W - W∑_i=1^n t_i + 1/2 S_i -
U - U∑_i=1^n 1 - t_i/2 S_i
= 0
|
http://arxiv.org/abs/2307.04028v1 | 20230708183125 | Measuring the Success of Diffusion Models at Imitating Human Artists | [
"Stephen Casper",
"Zifan Guo",
"Shreya Mogulothu",
"Zachary Marinov",
"Chinmay Deshpande",
"Rui-Jie Yew",
"Zheng Dai",
"Dylan Hadfield-Menell"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Measuring the Success of Diffusion Models at Imitating Human Artists

Stephen Casper* (MIT), Zifan Guo* (MIT), Shreya Mogulothu (MIT), Zachary Marinov (MIT), Chinmay Deshpande (Harvard University), Rui-Jie Yew (MIT, Brown University), Zheng Dai (MIT), Dylan Hadfield-Menell (MIT)

*Equal contribution. Correspondence: Stephen Casper <[email protected]>.
§ OVERVIEW
Modern diffusion models have set the state-of-the-art in AI image generation.
Their success is due, in part, to training on Internet-scale data which often includes copyrighted work. This prompts questions about the extent to which these models learn from, imitate, or copy the work of human artists.
This work suggests that questions involving copyright liability should factor in a model's capacity to imitate an artist.
Tying copyright liability to the capabilities of the model may be useful given the evolving ecosystem of generative models.
Specifically, much of the legal analysis of copyright and generative systems focuses on the use of protected data for training <cit.>.
However, generative systems are often the result of multiple training processes. As a result, the connections between data, training, and the system are often obscured.
In our approach, we consider simple image classification techniques to measure a model's ability to imitate specific artists. Specifically, we use Contrastive Language-Image Pretrained (CLIP) <cit.> encoders to classify images in a zero-shot fashion.
Our process first prompts a model to imitate a specific artist. Then, we test whether CLIP can be used to reclassify the artist (or the artist's work) from the imitation. If these tests match the imitation back to the original artist, this suggests the model can imitate that artist's expression.
Our approach is simple and quantitative. Furthermore, it uses standard techniques and does not require additional training. We demonstrate our approach with an audit of Stable Diffusion's <cit.> capacity to imitate 70 professional digital artists with copyrighted work online. When Stable Diffusion is prompted to imitate an artist from this set, we find that the artist can be identified from the imitation with an average accuracy of 81.0%. Finally, we also show that a sample of the artist's work can be matched to these imitation images with a high degree of statistical reliability. Overall, these results suggest that Stable Diffusion is broadly successful at imitating individual human artists. Code is available https://colab.research.google.com/drive/1ScHo9uMdUgId0DlSr4W4RgnMD44dLiku?usp=sharinghere.
§ BACKGROUND
Contrastive Language-Image Pretraining (CLIP): CLIP <cit.> is a technique for training AI systems that encode images and text into fixed-length vector representations.
CLIP image and text encoders are trained to produce similar encodings of image/caption pairs and dissimilar encodings of image/caption non-pairs.
The more geometrically distant two encodings of images or captions are, the less related they are according to the encoder, and vice versa.
Using this principle, <cit.> introduced a method to classify an image among a set of labels based on the distances between encodings. We use this method in our proposed test.
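The classification procedure described above can be reproduced with publicly available CLIP encoders, for example through the Hugging Face transformers wrappers around the same openai/clip-vit-base-patch32 checkpoint linked later in the paper. The following sketch is a generic zero-shot classifier in that style; it is illustrative and not the authors' exact code.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_classify(image: Image.Image, labels):
    """Encode the image and every candidate caption, then softmax the scaled
    image-text similarities to get a probability for each label."""
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
    return {label: float(p) for label, p in zip(labels, probs)}
```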
Diffusion Models: Diffusion models <cit.> such as Stable Diffusion <cit.> and Midjourney <cit.>, are capable of generating images from arbitrary, user-specified prompts.
Their success has largely been due to training on large amounts of text/image data, often including copyrighted works <cit.>.
Modern image-generation diffusion models are trained using CLIP-style encoders.
When given an encoding of a caption, a diffusion model is trained to generate an image corresponding to the caption <cit.>.
Accordingly, a diffusion model that generates images from these embeddings is trained to be the inverse of a CLIP image encoder.
Legal Motivation: In the United States, <cit.> established that copyright infringement “is measured by considering the qualitative and quantitative significance of the copied portion in relation to the plaintiff’s work as a whole”. However, the subjective nature of these determinations makes practical enforcement complicated <cit.>.
In evaluating copyright questions involving AI systems, legal analyses have focused on how copyrighted work is used in the system's training data <cit.>, but such a focus on training data does not connect liability to an AI system's ability to copy an artist.
In contrast, we show how standard image classification techniques can be used to help determine how successful AI image generators are at imitating individual human artists.
This approach is consistent, quantitative, and connected to the capabilities of the resulting AI system.
Our goal, however, is not to automate determinations of infringement but to demonstrate how tried and tested image classification techniques from machine learning can be used to analyze legal claims.
§ EXPERIMENTS
We conduct two complementary experiments to evaluate Stable Diffusion's ability to imitate human artists. First, we classify human artists from imitations of their work, and second, we match real work from human artists to imitations. Both experiments suggest that Stable Diffusion is broadly successful at imitating human artists.
§.§ Identifying Artists from Imitations
Method: We used CLIP encoders to classify artists from Stable Diffusion's imitations of them. We selected 70 artists from the LAION-aesthetics dataset <cit.>, the dataset used to train Stable Diffusion. We selected these 70 as artists who may potentially be harmed by digital imitations using several criteria: each artist is alive, has a presence on digital art platforms (Instagram, DeviantArt, and ArtStation), publishes artwork or sells their artwork (e.g., prints or digital works), and has more than 100 images in the LAION dataset.
Figure <ref> outlines our method.
We prompted Stable Diffusion v1.5 (https://huggingface.co/runwayml/stable-diffusion-v1-5) to generate images in the style of each artist, using prompts of the form “Artwork from <artist’s name>”.
Example images are in Figure <ref>.
We then used CLIP encoders (https://huggingface.co/openai/clip-vit-base-patch32) to classify each image among a set of 73 labels.
The 73 labels consisted of each of the 70 artists' prompts (“Artwork from <artist’s name>”) plus three default labels: “Artwork”, “Digital Artwork”, and “Artwork from the public domain.”
These additional labels lend insight into how confident CLIP is that an image imitates a particular artist's style instead of some more generic style.
We then classified each imitation image among these labels using the technique from <cit.>.
CLIP-based classification produces a probability of an image matching each label, and we evaluate the model on the correctness of its most-likely prediction and on its confidence in the correct artist.
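The classification step can be reproduced with standard tooling. The sketch below assumes the publicly released openai/clip-vit-base-patch32 checkpoint and uses placeholder artist names and a placeholder image path; it is a minimal illustration of scoring one imitation image against the 73 prompt labels, not the exact audit code.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder inputs: in the audit there are 70 artist names, and each image is
# a Stable Diffusion imitation generated from "Artwork from <artist's name>".
artist_names = ["<artist 1>", "<artist 2>"]
labels = [f"Artwork from {name}" for name in artist_names]
labels += ["Artwork", "Digital Artwork", "Artwork from the public domain"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("imitation.png")  # placeholder path to a generated image
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)  # one probability per label
predicted_label = labels[probs.argmax().item()]
```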
Results: We repeated the experiment with the 70 artists ten times to reduce the effect of random variation. On average, CLIP correctly classified 81.0% of the generated images as works made by artists whose names were used to generate them.
Across the ten trials, 69 of the 70 artists were correctly classified in a plurality of trials.
Overall, these results suggest that Stable Diffusion has a broad-ranging ability to imitate the styles of individual artists.
We compared these results to two baselines.
First, we implemented a random-name baseline by running the same experiment with 70 random names from a random name generator (https://randomwordgenerator.com/name.php).
Since Stable Diffusion was not trained on artists with these names (unless a random name is coincidentally the same as some artist's), this experiment serves as a proxy for how Stable Diffusion would handle artists not in its training data.
In this case, only 6 names (8.6%) were guessed correctly.
Second, a random guess would only result in a successful classification every 1 in 73 attempts (1.4%) on average.
We visualize results from our main experiment alongside the controls in Figure <ref>.
Results are Robust to Different Sets of Artists: To test whether our 70 artists were especially classifiable, we ran the original experiment but with a larger set of indiscriminately-selected artists and found similar results. We selected the 250 artists with the highest number of images in the LAION dataset and found that CLIP correctly classified 81.2% of the images.
This demonstrates that successful classification is not tied to one particular set of artists.
§.§ Matching Artwork to Imitations
Method: Our first experiment tested how easily artists could be identified from diffusion model imitations of them.
To provide a complementary perspective, we also directly study the similarity of artists' digital works to Stable Diffusion's imitations of them. For each of the 70 artists, we retrieve the top result obtained by Google Image searching “<artist's name> art.”
As before, we then use Stable Diffusion to generate 10 images for each artist with the prompt “Artwork from <artist's name>.” We then compare the real images and generated images. Distances are measured by first encoding images
using the CLIP image encoder and calculating the cosine distance between encodings.
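A minimal sketch of this distance computation is given below, again assuming the openai/clip-vit-base-patch32 checkpoint; the two image paths are placeholders for a real artwork and a generated imitation.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_image_embedding(image):
    # Encode an image and normalize so that cosine similarity is a dot product.
    inputs = processor(images=image, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

real = clip_image_embedding(Image.open("real_artwork.png"))    # placeholder path
imitation = clip_image_embedding(Image.open("imitation.png"))  # placeholder path
cosine_distance = 1.0 - (real @ imitation.T).item()
```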
Results: For each artist, we calculate whether real images from artists are more similar to imitations of that artist or other artists. The significance was calculated using a rank sum test with a Bonferroni correction factor of 70. Results are in Figure <ref>.
90% (63/70) of the experiments produce p values less than 0.05. This compares to an average of 22.8% (16/70) for a control experiment using random artist assignments of real images. These results further support that Stable Diffusion is broadly successful at imitating artists.
§ CONCLUSION
We have demonstrated how AI image classification can help to measure the success of diffusion models imitating human artists.
We argue that these methods can provide a practical way to tie questions about copyright liability to the capabilities of a model instead of its training data alone.
By matching imitation images to both artists' names and works, we find that Stable Diffusion is broadly successful at imitating human digital artists.
We hope that future work can use image classification to analyze legal claims and to test defenses against AI imitation of copyrighted work.
§ ACKNOWLEDGEMENTS
We thank Taylor Lynn Curtis and Lennart Schulze for feedback.
|
http://arxiv.org/abs/2307.07659v1 | 20230714233904 | Stabilized Isogeometric Collocation Methods for Hyperbolic Conservation Laws | [
"Ryan M. Aronson",
"John A. Evans"
] | math.NA | [
"math.NA",
"cs.NA"
] |
Stabilized Isogeometric Collocation Methods for Hyperbolic Conservation Laws
Ryan M. Aronson ([email protected]), Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA, 94305
John A. Evans ([email protected]), Ann and H.J. Smead Aerospace Engineering Sciences, University of Colorado Boulder, Boulder, CO, 80303
We introduce stabilized spline collocation schemes for the numerical solution of nonlinear, hyperbolic conservation laws. A nonlinear, residual-based viscosity stabilization is combined with a projection stabilization-inspired linear operator to stabilize the scheme in the presence of shocks and prevent the propagation of spurious, small-scale oscillations. Due to the nature of collocation schemes, these methods possess the possibility for greatly reduced computational cost of high-order discretizations. Numerical results for the linear advection, Burgers, Buckley-Leverett, and Euler equations show that the scheme is robust in the presence of shocks while maintaining high-order accuracy on smooth problems.
August 12, 2023
===================
§ INTRODUCTION
While originally introduced as an attempt to close the gap between design and analysis, a number of other interesting properties and advantages of Isogeometric Analysis (IGA) <cit.> have been discovered. The smooth spline basis functions used in IGA have been shown to have improved approximation power when compared to the C^0 polynomial basis functions used in standard Finite Element Analysis (FEA) <cit.>. Moreover, smooth basis functions enable the development of collocation methods based on the strong form of the governing partial differential equation (PDE) <cit.>. Unlike standard Galerkin methods, collocation schemes do not require computations of element integrals, which are typically evaluated using expensive Gauss quadrature schemes. Instead, the strong form of the equations is evaluated at a set of collocation points to determine the numerical solution. The potential of collocation for reducing the cost of simulations has already been demonstrated <cit.>, and these schemes are ideally suited for explicit dynamics simulations where the cost of computing the residual integrals can dominate the computational time. This is especially true for the simulation of nonlinear PDEs using high-order bases.
In the context of incompressible fluid dynamics, there was interest in using spline basis functions for analysis even before the invention of IGA <cit.>. The improved dispersion properties of spline bases when compared to standard, high-order bases make these types of schemes attractive for simulations of turbulent flows, and collocation schemes can be used to efficiently evaluate the nonlinear convection term. Recently, B-spline collocation has been used in conjunction with spectral methods for simulations of turbulent channel flows <cit.>, and the IGA community has continued to investigate both Galerkin and collocation discretization schemes based purely on splines. For example, in <cit.> we developed divergence-conforming B-spline collocation schemes and in <cit.> we described the construction of stabilized spline collocation schemes which combat both advection and pressure instabilities.
Much less common in the literature are spline-based discretization schemes developed for the simulation of compressible flow problems. In this case a nonlinear stabilization is typically required to ensure high-order discretization schemes remain well behaved in the presence of discontinuities, such as shocks. For example, in <cit.> the Algebraic Flux Correction method was adapted to spline-based Galerkin discretizations of conservation laws. Alternatively, <cit.> used a discontinuous Galerkin IGA method coupled with an artificial viscosity-based shock capturing scheme of the form presented in <cit.> for nonlinear stabilization. As both of these schemes are based on the Galerkin method, however, the integrals of the nonlinear flux terms, which must be computed at every time step, can render the methods quite expensive.
In this work, we extend the use of spline collocation discretization schemes to hyperbolic conservation laws, as a first step towards modeling compressible flows. Much like the setting of explicit, nonlinear dynamics, the use of collocation schemes for conservation laws has the potential to greatly reduce the computational cost of high-order methods, due both to the absence of numerical integration and to the tensor-product structure of the mass matrix that arises from collocation. We stabilize the collocation schemes in the presence of shocks using an artificial viscosity, like in <cit.>. Unlike that work, however, this viscosity is constructed proportionally to the residual of the governing PDE, providing stabilization in regions near shocks while adding little viscosity in smooth regions where the solution is well resolved. This is inspired by the work in <cit.>, where similar ideas have found success in creating high-order, stable schemes based on a variety of discretizations. We also introduce a novel linear stabilization technique inspired by projection stabilization <cit.>, which effectively suppresses the small-scale, high-frequency oscillations that can appear upwind of the shock even when the artificial viscosity is included. This stabilization is essentially an alternative to the Streamline-Upwind-Petrov-Galerkin-inspired stabilization developed in <cit.>, but with the advantage that it contains no unsteady term that would complicate explicit time stepping.
The rest of the work is laid out as follows: First we describe the details of the basic discretization, starting with the strong forms of the PDEs we seek to solve. In this work we examine multiple scalar conservation laws but only one system of conservation laws, namely the compressible Euler equations. We then discuss the details of the spatial discretization process, including the construction of B-spline basis functions and appropriate collocation points. The remainder of Section 2 discusses explicit time integration and the speedups that are possible from the choice of collocation in space. Section 3 then discusses the stabilization schemes, starting with the nonlinear artificial viscosity construction, followed by our novel linear stabilization technique. Finally, Section 4 summarizes the results obtained for a variety of canonical test problems and shows the promise of the method for high-order Computational Fluid Dynamics (CFD) applications.
§ ISOGEOMETRIC COLLOCATION FORMULATION OF CONSERVATION LAWS
To begin, we state the strong formulations of the partial differential equations we are interested in solving in this work. A full problem statement for an arbitrary scalar conservation law posed on the domain Ω⊂ℝ^d can be stated as
Given flux function 𝐟 : ℝ→ℝ^d, boundary operator and data 𝒢 and g, initial condition ϕ_0 : Ω→ℝ, and final time t_f ∈ℝ, find ϕ : Ω× [0, t_f] →ℝ such that:
∂ϕ/∂ t(𝐱, t) + ∇·𝐟(ϕ(𝐱, t)) = 0 in Ω,
𝒢ϕ(𝐱, t) = g(𝐱, t) on ∂Ω,
ϕ(𝐱, 0) = ϕ_0(𝐱) in Ω.
The first equation represents the conservation law itself, while the second and third specify the boundary and initial conditions, respectively. The specification of the flux function defines the conservation law being solved, and the conserved quantity ϕ can take on a different physical meaning in each case. In addition to scalar conservation laws, we also consider the compressible Euler equations, which can be expressed in conservative form as
Given boundary operator and data 𝒢 and 𝐠, initial conditions ρ_0 : Ω→ℝ, ρ𝐮_0 : Ω→ℝ^d, and E_0 : Ω→ℝ, and final time t_f ∈ℝ, find ρ : Ω× [0, t_f] →ℝ, ρ𝐮 : Ω× [0, t_f] →ℝ^d, and E : Ω× [0, t_f] →ℝ such that:
∂/∂ t[ ρ; ρ𝐮; E ](𝐱, t) + ∇·[ ρ𝐮; ρ𝐮⊗𝐮 + p 𝐈; 𝐮 (E + p) ](𝐱,t) = 0 in Ω,
𝒢[ ρ; ρ𝐮; E ](𝐱, t) = 𝐠(𝐱, t) on ∂Ω,
[ ρ; ρ𝐮; E ](𝐱, 0) = [ ρ_0; ρ𝐮_0; E_0 ](𝐱) in Ω,
p(𝐱,t) = (γ - 1) (E(𝐱,t) - 1/2 ρ𝐮(𝐱,t) ·𝐮(𝐱,t)),
T(𝐱,t) = p(𝐱,t)/ρ(𝐱,t).
Here ρ is the density of the fluid, 𝐮 is the fluid velocity, E is the total energy, and p is the pressure. The first three equations represent the conservation of mass, momentum, and energy, respectively. The system of equations is closed by assuming the fluid is an ideal gas with constant heat capacity, which yields the last 2 constitutive laws.
In order to discretize the problems above in space with a collocation scheme, we require a set of proper basis functions {N_i}_i=1^n_d and collocation points {𝐱_i}_i=1^n_d, where n_d is the total number of degrees of freedom within the discrete space. A system of ordinary differential equations in time is formed by assuming the discrete solution is represented by a linear combination of these basis functions at any instant in time, with coefficients determined by requiring the strong form of the PDE to hold at every collocation point. We discuss these steps in further detail below.
§.§ B-Spline Basis Functions
For a collocation scheme to be well-defined, the basis functions {N_i}_i=1^n_d must be sufficiently smooth so that the strong form residual can be evaluated. Note that the standard C^0 piecewise polynomial basis functions used in FEA do not satisfy this requirement. The high-order spline basis functions used in IGA, however, can easily be constructed with sufficient regularity. In this work we will only utilize B-spline basis functions, but the methods trivially extend to NURBS basis functions as well.
In 1D, the specification of the polynomial order k and the knot vector Ξ = {ξ_1, ... ξ_n+k+1} fully describes the B-spline basis functions. These functions are evaluated through the Cox-de Boor recursion, starting with the k = 0 basis functions
N_i,0(ξ) = 1 if ξ_i ≤ξ≤ξ_i+1 and 0 otherwise,
and higher-order bases defined via
N_i,k(ξ) = (ξ - ξ_i)/(ξ_i+k - ξ_i) N_i,k-1(ξ) + (ξ_i+k+1 - ξ)/(ξ_i+k+1 - ξ_i+1) N_i+1,k-1(ξ).
At any knot in the knot vector Ξ, the basis functions are C^k-ℓ-continuous, where ℓ is the multiplicity of the knot. Thus it is straightforward to define basis functions with suitable regularity for collocation. In this work we will always utilize maximal continuity splines, thus our knot vectors never contain repeated interior knots. We also utilize open knot vectors, meaning that the first and last knot values are repeated k+1 times, making the spline interpolatory at these locations.
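As an illustration, the recursion above can be implemented directly. The sketch below is a straightforward (unoptimized) evaluation of a single basis function; half-open knot spans are used so that each parametric point is counted in exactly one zero-degree interval, and 0/0 terms in the recursion are taken as zero by convention.

```python
def bspline_basis(i, k, knots, xi):
    """Evaluate N_{i,k}(xi) via the Cox-de Boor recursion."""
    if k == 0:
        return 1.0 if knots[i] <= xi < knots[i + 1] else 0.0
    left = right = 0.0
    denom_left = knots[i + k] - knots[i]
    if denom_left > 0.0:
        left = (xi - knots[i]) / denom_left * bspline_basis(i, k - 1, knots, xi)
    denom_right = knots[i + k + 1] - knots[i + 1]
    if denom_right > 0.0:
        right = (knots[i + k + 1] - xi) / denom_right * bspline_basis(i + 1, k - 1, knots, xi)
    return left + right
```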
B-spline basis functions in higher spatial dimensions are constructed by simply taking the tensor product of one dimensional B-spline bases in each direction <cit.>. In general, the degrees and knot vectors defining the basis in each parametric direction can differ, but for simplicity, most of the results presented in this work only consider tensor product bases with the same 1D basis in each direction. For notational convenience we denote a spline space defined on a domain Ω⊂ℝ^d as S^h = { N_i}_i = 1^n_d, where the functions N_i : Ω→ℝ are B-spline basis functions and n_d is the total number of functions in the discrete space.
§.§ The Greville Abcissae
With the choice of basis functions made, we must now select a set of collocation points to obtain a well-posed scheme. By well-posed, we mean that we desire a set of collocation points which produce a non-singular interpolation operator. There are several possible choices for these points in the isogeometric collocation setting, such as the Greville abscissae <cit.>, the Demko abscissae <cit.>, or the Cauchy-Galerkin points <cit.>. The Greville abscissae are the simplest choice, as they are computed in 1D simply through averages of knot values via
ξ̂_i = (ξ_i+1 + ... + ξ_i+k)/k.
In higher dimensions, the Greville points of the tensor product space are simply the tensor product of the 1D Greville abscissae. In this work we only consider collocation at Greville points, though we note that there are specific cases with large polynomial degrees of 20 or higher and certain non-uniform meshes where the scheme will no longer be well-posed <cit.>. The Demko abscissae provably produce a non-singular interpolation operator for all cases, but require an iterative approximation to compute. Finally, we note that the main advantage of the use of the Cauchy-Galerkin points is recovery of optimal L^2 convergence rates for odd basis degrees. However, in this work we only consider first-order conservation laws, and <cit.> have shown that collocating first-order PDEs results in the same optimal asymptotic convergence rates, even when the Greville points are utilized.
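For reference, the 1D Greville abscissae are computed directly from the knot vector; a small sketch is shown below, with the number of basis functions inferred from the open knot vector.

```python
import numpy as np

def greville_abscissae(knots, k):
    """Greville points: averages of k consecutive knot values, one per basis function."""
    n = len(knots) - k - 1  # number of B-spline basis functions
    return np.array([np.mean(knots[i + 1:i + k + 1]) for i in range(n)])

# Example: uniform open knot vector on [0, 1] with cubic (k = 3) B-splines.
knots = np.concatenate([np.zeros(3), np.linspace(0.0, 1.0, 6), np.ones(3)])
points = greville_abscissae(knots, 3)
```

In higher dimensions the collocation points are simply tensor products of these 1D points, as described next.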
§.§ Spatial Discretization
With a complete description of the elements required for spline collocation schemes, we move on to apply this technique for the spatial discretization of conservation laws. We consider the specific case of scalar conservation laws and assume fully Dirichlet boundary conditions for simplicity. Then the semi-discrete problem is obtained by assuming the numerical solution lies in a predefined spline space S^h with Greville abscissae {𝐱_i}_i = 1^n_d, and the strong form of the problem holds at every collocation point. Defining the space S^h_T = {ϕ^h : Ω× [0, t_f] →ℝ : ϕ^h(·, t) ∈ S^h ∀ t ∈ (0,t_f) }, the semi-discrete form can be written as
Given 𝐟 : ℝ→ℝ^d, g: ∂Ω× [0,t_f] →ℝ, initial condition ϕ_0 : Ω→ℝ, and final time t_f ∈ℝ, find ϕ^h ∈ S^h_T such that:
∂ϕ^h/∂ t(𝐱_i, t) + ∇·𝐟 (ϕ^h(𝐱_i, t)) = 0 ∀𝐱_i ∈Ω,
ϕ^h(𝐱_i, t) = g(𝐱_i, t) ∀𝐱_i ∈∂Ω,
ϕ^h(𝐱_i, 0) = ϕ_0(𝐱_i) ∀𝐱_i ∈Ω.
This semi-discrete form is simply a system of ordinary differential equations, with one equation per collocation point. Integration in time can then be done with any standard scheme, and is discussed further below. For completeness we also include the semi-discrete form of the Euler equations subject to Dirichlet boundary conditions. Denoting by 𝐒^h_T the vector-valued space wherein each component belongs to S^h_T, we arrive at
Given 𝐠 : ∂Ω× [0,t_f] →ℝ^d+2, initial conditions ρ_0 : Ω→ℝ, ρ𝐮_0 : Ω→ℝ^d, and E_0 : Ω→ℝ, and final time t_f ∈ℝ, find ρ^h ∈ S^h_T, ρ𝐮^h ∈𝐒^h_T, and E^h ∈ S^h_T such that:
∂/∂ t[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t)
+ ∇·[ ρ𝐮^h; ρ𝐮^h ⊗𝐮^h + p^h 𝐈; 𝐮^h (E^h + p^h) ](𝐱_i,t)
= 0 ∀𝐱_i∈Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t) = 𝐠(𝐱_i, t) ∀𝐱_i ∈∂Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, 0) = [ ρ_0; ρ𝐮_0; E_0 ](𝐱_i) ∀𝐱_i ∈Ω,
p^h(𝐱_i,t) = (γ - 1) (E^h(𝐱_i,t) - 1/2ρ𝐮^h(𝐱_i,t) ·𝐮^h(𝐱_i,t)),
T^h(𝐱_i,t) = p^h(𝐱_i,t)/ρ^h(𝐱_i,t).
The enforcement of initial conditions deserves slightly more attention in the case of discontinuous initial data. Simply collocating this data as stated above will yield an oscillatory solution due to the Gibbs phenomenon near the discontinuities. However, we have found in many of our studies that the inclusion of the nonlinear stabilization described in the following section effectively suppresses these initial oscillations in relatively few time steps, seemingly without adversely affecting the solution quality at later times. One can also collocate a smoothed initial condition to remove these oscillations, or use the initial data to set the control coefficients of the spline solution directly, bypassing the need for interpolation altogether. This is also how discontinuous boundary conditions were enforced in <cit.>.
§.§ Time Integration - Residual Computation
In this work we are interested in explicit time integration schemes, such as RK4, which do not require a nonlinear solution procedure at every time step in the presence of nonlinear flux functions. The first main step of explicit time integration is the construction of the spatial residual vector, and this process can be quite costly. In Galerkin methods, this involves the computation of terms of the form
∫_Ω_E (∇·𝐟) ψ^h dΩ,
over each element Ω_E, for all ψ^h in the discrete test function space. As these integrals are usually approximated using Gauss quadrature rules, the integral evaluation cost scales like O((k+1)^2d) per element.
A major cost advantage of collocation methods is the replacement of these costly integrations with strong form evaluations. For a standard spline collocation scheme the number of operations instead scales like O((k+1)^d) per element, as most elements include only O(1) collocation points.
Inspired by finite difference schemes for conservation laws, in which the conservative form of the PDE is discretized to capture correct shock speeds, we also choose to collocate the conservative form. Thus we require a method to evaluate the divergence of the flux. For our method we use the flux evaluated on the collocation points to fit a spline surface in the same discrete space as the unknowns, allowing for the evaluation of partial derivatives. This process involves the application of the inverse of the mass matrix, which can be done efficiently in the same asymptotic complexity as the residual evaluations, as described below.
§.§ Time Integration - The Mass Matrix
Like a standard Galerkin finite element scheme, isogeometric collocation schemes will lead to non-diagonal mass matrices. Thus linear solves are required at every time step, not only to fit a spline surface to the flux values but also to obtain the updates to the solution coefficients in time.
While this can limit the efficiency of timestepping, collocation schemes possess convenient properties to help mitigate this cost. As the mass matrix is not created via spatial integration, the mass matrix in two or three dimensional cases is exactly the tensor product of the 1D mass matrices in each parametric direction, even when the geometric mapping is not affine. This is not the case in Galerkin schemes <cit.>. In particular, this also means that the inverse of the mass matrix is equal to the tensor product of the 1D inverses. Thus by storing a factorization of the 1D mass matrices, we can efficiently perform the linear solves required during explicit time stepping.
In particular, the application of the full inverse can be written as O(n^d-1) applications of the 1D LU factorization, where n is the number of degrees of freedom in each direction and d is the number of spatial dimensions. The LU decomposition matrices have the same bandwidth as the original mass matrices <cit.>, meaning that for the 1D matrices each of the factors has bandwidth k+1. Using this, we can estimate that the cost of a 1D mass matrix solve is O(n(k+1)) once we have the factorization. Therefore, the cost of application of the full mass inverse requires O(n^d(k+1)) operations. For completeness we show the specific forms of these operations in the 2D and 3D settings below.
For a 3D problem, we let M^-1 = M_x^-1⊗ M_y^-1⊗ M_z^-1 be the inverse of the collocation mass matrix. If we assume for simplicity that the number of basis functions is the same in each direction and equal to n, then M_x^-1∈ℝ^n × n, M_y^-1∈ℝ^n × n, and M_z^-1∈ℝ^n × n are the corresponding 1D mass matrix inverses. Algorithm <ref> details the process of applying the full mass matrix inverse to a vector 𝐱 using the 1D inverses. Note that for 2D problems this process simplifies to reshaping 𝐱 into a matrix X, computing M_x^-1 X M_y^-T, and reshaping back to a vector.
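In two dimensions this procedure reduces to two sets of banded 1D solves, as sketched below. The routine is an illustration only: it assumes the degrees of freedom are numbered so that the collocation mass matrix is M_x ⊗ M_y under a row-major reshape of the coefficient vector, and in practice the 1D LU factorizations would be computed once and reused at every time step.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def apply_inverse_mass_2d(Mx, My, x):
    """Apply (Mx kron My)^{-1} to a vector x using only 1D factorizations."""
    nx, ny = Mx.shape[0], My.shape[0]
    lu_x, lu_y = lu_factor(Mx), lu_factor(My)   # banded 1D factorizations
    X = x.reshape(nx, ny)                       # reshape the coefficient vector
    Y = lu_solve(lu_x, X)                       # Mx^{-1} X, one solve per column
    Z = lu_solve(lu_y, Y.T).T                   # (Mx^{-1} X) My^{-T}
    return Z.reshape(-1)
```

The 3D case follows the same pattern, with the third 1D factorization applied across the remaining index as in Algorithm <ref>.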
This is faster than utilizing a naïve LU factorization of a full, 2D or 3D Galerkin mass matrix at every time step, which would cost approximately O(n^d+1 (k+1)^d-1 ) due to increased bandwidth of the matrix (n^d rows with bandwidth of n(k+1)^d-1). Note that this cost can also be improved in the Galerkin setting through the use of sum factorization <cit.>, though this is much more complex for non-affine domains than the collocation scheme. Also worth mentioning is the Spectral Element Method (SEM) <cit.>, which is obtained by using a Gauss-Legendre-Lobatto numerical integration scheme and a Lagrange polynomial basis with nodes located at the quadrature points. This too can be written as a collocation scheme with the added advantage of resulting in a diagonal mass matrix. In some ways the spline collocation technique presented in this work can be thought of as the generalization of SEM to smooth, spline bases.
Finally, one can generate schemes that do not require the solution of a mass matrix system at every time step, even for non-diagonal mass matrices, using lumping techniques <cit.>. These techniques have also been applied in isogeometric collocation schemes and result in efficient schemes that can be comparable to finite-difference-time-domain methods in terms of cost <cit.>, though we do not further pursue this in this work.
Combining the results presented in this section with the previous, the total cost of the collocation schemes approximately scales like O(n^d(k+1)^d) per time step, with the asymptotic scaling dominated by the cost of evaluating the flux and its derivatives at the collocation points, not by the application of the inverse mass matrices.
§.§ A Note on Maximum Stable Time Step Sizes
In the previous sections we have focused on the operations and cost required for one explicit time step. Perhaps equally important, however, is the largest stable time step that can be taken using the scheme. Common existing high order methods using C^0 basis functions, like SEM and discontinuous Galerkin (DG), have critical time steps which scale like h / k^2, where h is the mesh size and k is the polynomial order <cit.>. Spline-based discretizations, on the other hand, have been shown to have critical time steps that scale as h/k <cit.>. Thus spline discretizations also have potential for improved efficiency through a factor of k larger time steps when compared with standard, high-order schemes.
§ STABILIZATION
For problems with smooth solutions, the collocation scheme defined above is sufficient to obtain a high-order numerical approximation. However, the presence of shocks or other sharp features, which are ubiquitous in solutions of nonlinear conservation laws, will cause large oscillations and blow-up in the numerical solution. In this section we describe the stabilization mechanisms we have applied to the collocation schemes, starting with a nonlinear shock-capturing scheme using a residual-based viscosity, followed by a linear stabilization term which smooths small-scale, high frequency oscillations, similar to upwinding in the finite difference context.
§.§ Residual-Based Viscosity
To stabilize our discretizations in the presence of shocks and other discontinuities, we adapt a residual-based viscosity stabilization similar to those presented in <cit.> to the spline collocation setting. We start by considering an arbitrary scalar conservation law before moving on to the case of the compressible Euler equations.
§.§.§ Scalar Conservation Laws
In the scalar conservation law setting, the nonlinear stabilization is added by including the Laplacian of the conserved quantity, so the statement of the semi-discretized collocation scheme becomes
Given 𝐟 : ℝ→ℝ^d, g: ∂Ω× [0,t_f] →ℝ, initial condition ϕ_0 : Ω→ℝ, and final time t_f ∈ℝ, find ϕ^h ∈ S^h_T such that:
∂ϕ^h/∂ t(𝐱_i,t) + ∇·𝐟 (ϕ^h(𝐱_i,t)) = ν_art(𝐱_i, t) Δϕ^h (𝐱_i, t) ∀𝐱_i ∈Ω,
ϕ^h(𝐱_i, t) = g(𝐱_i, t) ∀𝐱_i ∈∂Ω,
ϕ^h(𝐱_i, 0) = ϕ_0(𝐱_i) ∀𝐱_i ∈Ω.
Note that we have omitted terms involving derivatives of the viscosity coefficient ν_art(𝐱_i, t). We see this as analogous to choosing a viscosity which is piecewise constant in each cell, commonly assumed in other high-order shock-capturing schemes <cit.>. It is perhaps true that the consistency of the scheme would be improved with the inclusion of the viscosity coefficient within the viscous flux, meaning that terms involving the derivatives of ν_art would also be included. However, this increases the complexity of the method, as one would have to include a step such as interpolation of the ν_art values, as was done in <cit.>. Moreover, we have found that the treatment of viscosity presented here has led to quality results, suggesting that these derivative terms are perhaps not necessary.
The value of the viscosity coefficient is determined via the residual of the PDE, as it is expected that the residual will be large near the sharp features, such as shocks, which cause instability. For a scalar conservation law, we define the discrete residual as
R^h(𝐱, t) = ∂ϕ^h/∂ t(𝐱, t) + ∇·𝐟 (ϕ^h(𝐱,t)).
To approximate the unsteady term in the residual, we store values of the solution at previous time steps and utilize an explicit, high-order Backward Difference Formula (BDF). We have found using the 4th-order formula sufficient for the results in this work, though in general one may want to adjust the order based on the chosen spatial and time discretization schemes. The BDF order determines how many past solutions must be stored, as well as how many initial time steps must be taken in order to generate enough history data to begin using the desired scheme. At the beginning of the simulation one can either wait to compute this residual (and thus add nonlinear stabilization) until enough history is available for the desired BDF scheme, or one can gradually increase the BDF order as data becomes available until the final order is reached. We have found that the results are not highly dependent on this choice, though the convergence rates on some smooth problems with small spatial discretization errors can be arrested by the use of low-order BDF schemes at early time steps.
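For concreteness, a sketch of the 4th-order backward difference approximation of the unsteady term is given below; it assumes a constant time step and stores the solution at the current and four previous time levels, with the variable-step generalization omitted.

```python
import numpy as np

def bdf4_time_derivative(phi_history, dt):
    """Approximate d(phi)/dt at the newest time level.
    phi_history: array of shape (5, n) holding the solution at the collocation
    points at times [t_n, t_n - dt, t_n - 2*dt, t_n - 3*dt, t_n - 4*dt]."""
    weights = np.array([25.0, -48.0, 36.0, -16.0, 3.0]) / (12.0 * dt)
    return weights @ phi_history
```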
Note that for a steady problem, the collocation solution is defined such that the discrete residual is zero at the collocation points. Thus we choose a slightly modified residual to drive our viscosity at each collocation point. In particular, we define the residual
R^h(𝐱_i, t) = max_𝐱∈ M(|R^h(𝐱, t)|),
where 𝐱_i is a collocation point and M is the set of points (which are not collocation points) at which the residual is sampled for each collocation point. These possible points are selected as the centroids of each Greville cell, and for each collocation point the set M is defined as the sampling points closest to the collocation point as shown in Figure <ref>.
This residual defines a viscosity at each collocation point via
ν_RB(𝐱_i, t) = C_RB h(𝐱_i)^2 R^h(𝐱_i, t)/m(𝐱_i, t),
where C_RB is a tunable constant, h is the mesh size defined by the distance between collocation points, and m is the normalization factor <cit.>
m(𝐱_i, t) = ϕ̄ - || ϕ(𝐱_i, t) - ϕ̄||_∞,
with ϕ̄ defined as the maximum value of the solution minus the minimum value of the solution over all of the collocation points directly adjacent to 𝐱_i at the current time, including the point itself.
At each collocation point we also construct a first order viscosity of the form
ν_FO(𝐱_i, t) = C_max h(𝐱_i) c(𝐱_i, t),
where C_max is another tunable parameter and c is an estimate of the wavespeed at each collocation point |𝐟' (ϕ(𝐱_i, t))|. For each collocation point this is computed as the maximum wavespeed over the grid of width 9 collocation points in each direction centered at the current collocation point.
The final stabilization viscosity is constructed by limiting the residual-based viscosity with the first order viscosity
ν_art(𝐱_i, t) = min(ν_RB(𝐱_i, t), ν_FO(𝐱_i, t)).
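A 1D sketch of this construction is shown below. The sampled residual, normalization factor, mesh sizes, and wavespeed estimates are assumed to have been computed as described above, and the default constants match the values used later for the advection and Burgers tests.

```python
import numpy as np

def artificial_viscosity_1d(R_tilde, m, h, wavespeed, C_RB=4.0, C_max=0.5):
    """Residual-based viscosity at the collocation points (1D sketch).
    R_tilde   : modified residual, the max of |R^h| over the nearby sampling points
    m         : normalization factor m(x_i, t) defined above
    h         : local mesh size (distance between collocation points)
    wavespeed : local wavespeed estimate |f'(phi)| at each collocation point"""
    n = len(R_tilde)
    nu = np.zeros(n)
    for i in range(n):
        nu_rb = C_RB * h[i] ** 2 * R_tilde[i] / m[i]       # residual-based viscosity
        lo, hi = max(0, i - 4), min(n, i + 5)              # 9-point wavespeed window
        nu_fo = C_max * h[i] * np.max(wavespeed[lo:hi])    # first-order viscosity
        nu[i] = min(nu_rb, nu_fo)                          # limited viscosity
    return nu
```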
Up to this point we have left all of the stabilization expressions continuous in time. Within the RK time integration schemes, we simply compute a stabilization viscosity at every collocation point using the solution information at the beginning of the time step, then use this same viscosity for all of the RK substeps.
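The sketch below illustrates this freezing of the viscosity within one RK4 step; the two callables are placeholders for the residual and viscosity routines described in this section.

```python
def rk4_step(phi, t, dt, spatial_residual, compute_viscosity):
    """One explicit RK4 step with the stabilization viscosity held fixed.
    spatial_residual(phi, t, nu) returns d(phi)/dt for given coefficients and
    viscosity; compute_viscosity(phi, t) evaluates nu_art at the collocation points."""
    nu = compute_viscosity(phi, t)  # evaluated once, reused in all substeps
    k1 = spatial_residual(phi, t, nu)
    k2 = spatial_residual(phi + 0.5 * dt * k1, t + 0.5 * dt, nu)
    k3 = spatial_residual(phi + 0.5 * dt * k2, t + 0.5 * dt, nu)
    k4 = spatial_residual(phi + dt * k3, t + dt, nu)
    return phi + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```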
§.§.§ The Euler Equations
In the case of the Euler equations, there is slightly more freedom in the design of stabilization schemes. One option is to design stabilization terms which are directly proportional to the Laplacian of the conserved quantities, like in the scalar case above. This is the approach used in <cit.>, and we have found that it does indeed sufficiently stabilize spline collocation solutions. In this case, the semi-discrete form can be written as
Given 𝐠 : ∂Ω× [0,t_f] →ℝ^d+2, initial conditions ρ_0 : Ω→ℝ, ρ𝐮_0 : Ω→ℝ^d, and E_0 : Ω→ℝ, and final time t_f ∈ℝ, find ρ^h ∈ S^h_T, ρ𝐮^h ∈𝐒^h_T, and E^h ∈ S^h_T such that:
∂/∂ t[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t)
+ ∇·[ ρ𝐮^h; ρ𝐮^h ⊗𝐮^h + p^h 𝐈; 𝐮^h (E^h + p^h) ](𝐱_i,t)
= ν_art(𝐱_i, t) ∇·[ ∇ρ; ∇ρ𝐮; ∇ E ](𝐱_i, t) ∀𝐱_i ∈Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t) = 𝐠(𝐱_i, t) ∀𝐱_i ∈∂Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, 0) = [ ρ_0; ρ𝐮_0; E_0 ](𝐱_i) ∀𝐱_i ∈Ω,
p^h(𝐱_i,t) = (γ - 1) (E^h(𝐱_i,t) - 1/2ρ𝐮^h(𝐱_i,t) ·𝐮^h(𝐱_i,t)),
T^h(𝐱_i,t) = p^h(𝐱_i,t)/ρ^h(𝐱_i,t).
For each of the mass, momentum, and energy equations, we define a residual at each collocation point in the same way as for a scalar conservation law, shown in Equation (<ref>), and a viscosity based on each is constructed according to Equation (<ref>). Then, the value of ν_art at each collocation point is chosen to be the maximum of each of these viscosities, limited by the first-order viscosity. We shall refer to this method as stabilization via Laplacian fluxes.
However, as mentioned in <cit.>, this approach is not necessarily consistent with the entropy inequality. Instead, we can also choose to regularize the system using the Guermond-Popov fluxes proposed in <cit.> and used in conjunction with a residual-based viscosity in <cit.>. Let κ_art and μ_art be artificial viscosities to be determined, then the regularized system becomes
Given 𝐠 : ∂Ω× [0,t_f] →ℝ^d+2, initial conditions ρ_0 : Ω→ℝ, ρ𝐮_0 : Ω→ℝ^d, and E_0 : Ω→ℝ, and final time t_f ∈ℝ, find ρ^h ∈ S^h_T, ρ𝐮^h ∈𝐒^h_T, and E^h ∈ S^h_T such that:
∂/∂ t[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t)
+ ∇·[ ρ𝐮^h; ρ𝐮^h ⊗𝐮^h + p^h 𝐈; 𝐮^h (E^h + p^h) ](𝐱_i,t)
= ∇·[ κ_art∇ρ; μ_artρ∇^s 𝐮 + κ_art∇ρ⊗𝐮; κ_art∇ E + 1/2𝐮·𝐮 (κ_art∇ρ) + μ_artρ∇^s 𝐮·𝐮 ](𝐱_i, t) ∀𝐱_i ∈Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t) = 𝐠(𝐱_i, t) ∀𝐱_i ∈∂Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, 0) = [ ρ_0; ρ𝐮_0; E_0 ](𝐱_i) ∀𝐱_i ∈Ω,
p^h(𝐱_i,t) = (γ - 1) (E^h(𝐱_i,t) - 1/2ρ𝐮^h(𝐱_i,t) ·𝐮^h(𝐱_i,t)),
T^h(𝐱_i,t) = p^h(𝐱_i,t)/ρ^h(𝐱_i,t).
Here we have included the viscosities κ_art and μ_art within the derivative terms for consistency with <cit.>, but we reiterate that terms involving derivatives of κ_art and μ_art are ignored. The construction of these viscosities is done similarly to the previous cases. In particular we define
μ_art(𝐱_i, t) = min(μ_RB(𝐱_i, t), μ_FO(𝐱_i, t)),
and
κ_art(𝐱_i, t) = 𝒫/C_RBμ_art(𝐱_i, t),
similar to <cit.>, with 𝒫 representing an artificial Prandtl number, and the first order viscosity defined by the maximum wavespeed estimate of c = |𝐮| + √(γ T). Unlike <cit.>, however, we use the residuals of the governing conservation laws to determine the residual-based viscosity μ_art(𝐱_i, t). Like the previous case we determine the viscosity μ_art at every collocation point as the maximum of the viscosities computed using Equation (<ref>) for each of the mass, momentum, and energy equations.
§.§ Linear Stabilization
For the stability of the method, it is possible that the nonlinear viscosity alone is sufficient to prevent blow up of the solution. Indeed in the Galerkin finite element setting, residual-based viscosity stabilization results in a method which provably converges to the entropy solution of a scalar conservation law <cit.>. However, as noted in <cit.>, if only nonlinear stabilization is applied, the solution may still contain spurious small-scale oscillations. This was also noticed when using central finite difference stencils in <cit.>. Thus it is still desirable to include some sort of linear stabilization, akin to upwinding in the finite difference and finite volume contexts.
One option for a linear stabilization scheme is the Streamline-Upwind-Petrov-Galerkin (SUPG) method, recently adapted to the spline collocation setting in <cit.>. In this work, however, we wish to develop a novel stabilization strategy inspired by projection stabilization for Galerkin methods <cit.>. Local projection stabilization adds a stabilization term to the weak formulation of the problem which depends on the difference between the discrete gradient and its projection onto a coarser finite element space, which creates a method which removes spurious oscillations. For Galerkin methods, the resulting scheme does not require the use of space-time elements for consistent time integration, and is instead more amenable to the method of lines approach pursued here. Moreover, we note that residual-based linear stabilization schemes like SUPG can exhibit instabilities when small time steps are employed <cit.>, which we anticipate due to our use of explicit Runge-Kutta discretizations in time.
Rather than project our discrete gradient onto a coarser function space, we make use of interpolation operations which we have shown previously can be performed in an efficient manner. Let the space S^h, k-1 be the B-spline space with the same number of elements as the solution space S^h, but with degree k-1 instead of k. One possible approach to stabilization is to interpolate the gradient of the solution using a function in S^h, k-1 to define a coarsened gradient used in the stabilization. However, we note that in 1D, the derivative of a B-spline function is itself a B-spline of one lower degree. Thus this coarsened gradient interpolation would be identical to the solution gradient, and the stabilization procedure would have no effect.
To circumvent this issue, we start by interpolating the gradient of the solution using S^h and its corresponding Greville points, and denote the result as ∇ϕ^h∈ S^h. We then use the values of ∇ϕ^h at the Greville points of S^h, k-1 to define another interpolation of the gradient, which we denote as Π∇ϕ^h∈ S^h, k-1. This function is used in a manner similar to the projected gradient in projection stabilization. Specifically, we evaluate its derivative at the Greville points of S^h, and another stabilization term is added to the statement of the collocation scheme of the form ν_lin(𝐱_i, t) ∇· ( ∇ϕ^h (𝐱_i, t) - Π∇ϕ^h (𝐱_i, t) ), where the viscosity coefficient ν_lin is defined at each collocation point by
ν_lin(𝐱_i, t) = C_lin h(𝐱_i) c(𝐱_i, t).
Here C_lin is another tunable parameter and c(𝐱_i) is the local wavespeed as described previously. As a final remark, we note that we have also tested other designs for this linear stabilization term, for example using a spline space with half the number of elements and the same degree as S^h as the coarse space, instead of S^h,k-1. However, we have found that many of these schemes are overly dissipative on smooth solutions and thus the errors of the stabilized collocation scheme increase compared to the unstabilized scheme in these cases.
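A 1D sketch of this construction is given below. The matrix names are illustrative, not notation from the text: they would be assembled once from the basis evaluation routine sketched earlier, and the solves exploit the banded structure of the collocation matrices.

```python
import numpy as np

def linear_stabilization_1d(dphi_grev, A_k, A_km1, B, Dk, Dkm1, h, c, C_lin=0.25):
    """Coarsened-gradient linear stabilization term at the Greville points of S^h.
    dphi_grev : values of d(phi^h)/dx at the Greville points of S^h
    A_k, A_km1: collocation matrices of S^h and S^{h,k-1} at their own Greville points
    B         : values of the S^h basis at the Greville points of S^{h,k-1}
    Dk, Dkm1  : derivatives of the S^h and S^{h,k-1} bases at the Greville points of S^h
    h, c      : local mesh size and wavespeed estimate at the Greville points of S^h"""
    c_grad = np.linalg.solve(A_k, dphi_grev)   # interpolate grad(phi^h) in S^h
    c_pi = np.linalg.solve(A_km1, B @ c_grad)  # coarsened gradient in S^{h,k-1}
    nu_lin = C_lin * h * c
    # nu_lin * d/dx( grad(phi^h) - Pi grad(phi^h) ), evaluated at the Greville points
    return nu_lin * (Dk @ c_grad - Dkm1 @ c_pi)
```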
For completeness, we conclude by stating the fully stabilized semi-discrete form of a scalar conservation law, given by
Given 𝐟 : ℝ→ℝ^d, g: ∂Ω× [0,t_f] →ℝ, initial condition ϕ_0 : Ω→ℝ, and final time t_f ∈ℝ, find ϕ^h ∈ S^h_T such that:
∂ϕ^h/∂ t(𝐱_i, t) + ∇·𝐟 (ϕ^h(𝐱_i,t)) = ν_art(𝐱_i, t) Δϕ^h (𝐱_i, t)
+ ν_lin(𝐱_i, t) ∇· ( ∇ϕ^h (𝐱_i, t) - Π∇ϕ^h (𝐱_i, t) ) ∀𝐱_i ∈Ω,
ϕ^h(𝐱_i, t) = g(𝐱_i, t) ∀𝐱_i ∈∂Ω,
ϕ^h(𝐱_i, 0) = ϕ_0(𝐱_i) ∀𝐱_i ∈Ω.
For the Euler equations, the linear stabilization method is applied to the conserved quantity corresponding to each conservation law. For brevity we only list the semi-discrete form associated with the Guermond-Popov fluxes below
Given 𝐠 : ∂Ω× [0,t_f] →ℝ^d+2, initial conditions ρ_0 : Ω→ℝ, ρ𝐮_0 : Ω→ℝ^d, and E_0 : Ω→ℝ, and final time t_f ∈ℝ, find ρ^h ∈ S^h_T, ρ𝐮^h ∈𝐒^h_T, and E^h ∈ S^h_T such that:
∂/∂ t[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t)
+ ∇·[ ρ𝐮^h; ρ𝐮^h ⊗𝐮^h + p^h 𝐈; 𝐮^h (E^h + p^h) ](𝐱_i,t)
= ∇·[ κ_art∇ρ; μ_artρ∇^s 𝐮 + κ_art∇ρ⊗𝐮; κ_art∇ E + 1/2𝐮·𝐮 (κ_art∇ρ) + μ_artρ∇^s 𝐮·𝐮 ](𝐱_i, t)
+ ν_lin(𝐱_i,t) ∇·[ ∇ρ^h - Π∇ρ^h; ∇ρ𝐮^h - Π∇ρ𝐮^h; ∇ E^h - Π∇ E^h ](𝐱_i,t) ∀𝐱_i ∈Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, t) = 𝐠(𝐱_i, t) ∀𝐱_i ∈∂Ω,
[ ρ^h; ρ𝐮^h; E^h ](𝐱_i, 0) = [ ρ_0; ρ𝐮_0; E_0 ](𝐱_i) ∀𝐱_i ∈Ω,
p^h(𝐱_i,t) = (γ - 1) (E^h(𝐱_i,t) - 1/2ρ𝐮^h(𝐱_i,t) ·𝐮^h(𝐱_i,t)),
T^h(𝐱_i,t) = p^h(𝐱_i,t)/ρ^h(𝐱_i,t).
§ RESULTS
With a full description of the stabilized collocation schemes complete, we now move on to consider the results of numerical tests on a variety of conservation laws. We start by considering the linear advection and Burgers equations, which demonstrate the effectiveness of the stabilization strategies in the presence of shocks while maintaining high-order accuracy on problems with smooth solutions. We then move on to consider solutions of the Buckley-Leverett equations, which show that accurate solutions can be obtained even for scalar conservation laws defined by non-convex fluxes. Finally, we turn to the compressible Euler equations to show the promise of the method for high-order CFD applications. In all cases we perform time integration using the standard RK4 method. In addition, results are obtained with both linear and nonlinear stabilization terms included, unless otherwise stated in the text.
§.§ Scalar Advection
The simplest scalar conservation law is the linear advection equation, given by
∂ϕ/∂ t + ∇· (𝐚ϕ) = 0.
Here 𝐚 is the advection velocity field and ϕ is the conserved quantity. In this case the solution is simply transported according to the velocity 𝐚, meaning that, while discontinuities may be present in the initial condition, shocks will not form from smooth solutions in finite time. For all of the cases in this section we define C_RB = 4, C_max = 0.5, and C_lin = 0.25.
§.§.§ Smooth Solution
To start, we consider a scenario with a smooth solution in order to assess the effect of the stabilization strategies on errors produced by the collocation schemes. We define the initial condition as ϕ(𝐱, 0) = sin(2 π x) sin(2 π y) over the domain 𝐱∈ [0,1]^2, a constant advection velocity of 𝐚 = (1,1), and use periodic boundary conditions in each direction. From this setup it is clear that the exact solution at t_f = 1 is the same as the initial condition, ϕ(𝐱, 0) = ϕ(𝐱, 1), and we compute the error in the numerical solution at this time compared to the exact solution.
Figure <ref> shows the results of this study using a time step of 1 × 10^-4, selected experimentally to be small enough such that the time discretization errors are small compared to the spatial discretization errors for all of the meshes tested. In Figure <ref> we show the L^2 errors obtained with standard, unstabilized collocation as well as with only linear stabilization included. Clearly the expected convergence rates of k+1 for odd k and k for even k are recovered using both schemes and the inclusion of linear stabilization has not noticeably increased the magnitude of the errors in this smooth case.
Figure <ref> shows the L^2 errors obtained with linear and nonlinear stabilization active compared to unstabilized collocation. As the mesh is refined, the errors obtained with the stabilized scheme do match the unstabilized results and the proper convergence rates are obtained. However, on the coarsest mesh tested we see that the inclusion of nonlinear stabilization has dramatically increased the error. On this coarse mesh the solution is treated as underresolved by the stabilization and thus the nonlinear viscosity is active everywhere and behaves like a first-order viscosity term. After time integrating for one full period the resulting damping has destroyed the structure of the numerical solution, resulting in the large errors.
§.§.§ Non-Smooth Solution
Next we consider the transport of a non-smooth solution profile. While the previous test case demonstrated the ability of the stabilization to properly vanish in smooth regions where the solution is well resolved, this test will demonstrate the effectiveness of the stabilization near shocks. The setup is the same as in the previous case, only now we define the initial condition as
ϕ(𝐱, 0) = 1 for 𝐱∈ (0.3, 0.7)^2 and 0 otherwise.
Figure <ref> shows the L^1 and L^2 errors in the solution after advancing in time for one period. As the solution in this case contains a shock, the optimal theoretical convergence rates are 1 in the L^1 norm and 1/2 in the L^2 norm. Our stabilized collocation scheme recovers close to these optimal rates with all degrees k, with a small increase in rate and decrease in error magnitude as k increases.
Moreover, we can also demonstrate the superior performance of the residual-based viscosity when compared to a first-order viscosity using this example. Figure <ref> shows the final solution using 128^2 Bézier elements and k = 5 with a first-order viscosity (obtained by overwriting the residual-based viscosity with the limiting first-order value at all locations) and our residual-based stabilization scheme. As expected, the residual-based scheme recovers a much more accurate and sharp solution profile than the first-order viscosity scheme.
Finally, this same test case also demonstrates the effect of the linear stabilization technique when coupled with the residual-based viscosity. Figure <ref> shows contour plots of the solutions obtained with and without linear stabilization using 128 elements in each direction and k=5, as well as the corresponding residual-based viscosity fields. These plots of the solution include 30 equispaced contours in the range [-0.01, 0.01] and 30 equispaced contours in [0.99, 1.01]. While the collocation scheme is certainly stable without linear stabilization, we see that there are many small oscillations that travel away from the shock locations. Moreover we see that the residual-based viscosity becomes active in the oscillatory regions, but it is not effective at removing them. With linear stabilization included, however, we see that these spurious oscillations are effectively removed and the residual-based viscosity is much more focused in the shock regions.
§.§ Burgers Equation
Next we consider the inviscid Burgers equation, written in conservative form as
∂ϕ/∂ t + ∇·(ϕ^2/2𝐯) = 0,
with 𝐯 = 1 in 1D and 𝐯 = (1,1) in 2D. Here the unknown ϕ represents the advection speed in the 𝐯 direction, making the problem nonlinear and creating the possibility of shock formation in finite time. Similar to linear advection, we define the constants C_RB = 4, C_max = 0.5, and C_lin = 0.25 for all of the examples in this section.
§.§.§ Smooth Solution
As in the linear advection case we start by considering a problem with a smooth analytical solution to verify convergence of the stabilized methods. We consider a 1D problem on the interval x ∈ [0,1] with initial condition given by ϕ(x, 0) = e^x - 1. We specify the exact solution as a Dirichlet boundary condition at x = 0 and do not specify anything at the outflow, x = 1. We advance in time using a time step of size 5 × 10^-5 until a final time of t = 0.01, where the exact solution is computed via the method of characteristics.
Figure <ref> details the L^2 errors obtained with unstabilized collocation, collocation with only linear stabilization, and collocation with linear and nonlinear stabilization. Like the scalar advection case we recover the expected convergence rates with the stabilized schemes and the errors are the same as those obtained without stabilization. We also see the same coarse mesh effect where the nonlinear stabilization becomes active and increases the error, but again this is limited to very coarse resolution simulations.
§.§.§ 1D Riemann Problem
We now move on to solving a 1D Riemann problem, namely a moving shock. The initial condition is defined over the interval x ∈ [0,1] as
ϕ(x, 0) =
1 x < 1/3
0 .
This shock travels to the right at a constant speed, and at the final time of t_f = 0.2 is located at x = 13/30. For this example we select a time step of 1 × 10^-5.
Figure <ref> shows the convergence in the L^2 and L^1 norms for the stabilized collocation schemes. Once again, they recover close to the optimal rates of 0.5 and 1, respectively. We also see some interesting behavior where, especially in the L^2 error, as the mesh is refined the rate of convergence alternates between a higher and lower value. Moreover, it seems that the odd k results are similar to one another, while the even k are also similar but shifted by one refinement level compared to the odd results. The errors are comparable in magnitude to those obtained by residual-based viscosity SBP finite difference methods <cit.>.
For a more qualitative analysis we can also plot the solutions and viscosity fields for a variety of meshes and polynomial degrees. Figure <ref> shows the solution and residual-based viscosity fields obtained using k=5 and a various numbers of elements, while Figure <ref> details the results obtained using 256 elements and a variety of polynomial degrees. The solutions behave as expected, with the shocks in simulations run with fewer elements being spread over a larger region, and variations in polynomial order having little effect. Figure <ref> shows a zoomed view of these solutions at the top of the shock location, illustrating the level of monotonicity achieved by the solutions. Increasing the number of elements results in the oscillations being confined to a region closer to the shock, while degree elevation again produces little change in the solution. The location and magnitude of the pre-shock oscillations are very similar to those seen in <cit.> using upwind SBP finite difference schemes for the same problem.
§.§.§ 2D Riemann Problem
Another common test case for Burgers equation is the two-dimensional Riemann problem defined on 𝐱∈ [0,1]^2 with the initial condition
ϕ(𝐱, 0) = 0.5 for x < 1/2, y < 1/2; -0.2 for x < 1/2, y > 1/2; 0.8 for x > 1/2, y < 1/2; and -1 for x > 1/2, y > 1/2.
For our simulations we extend the domain to 𝐱∈ [0,2]^2, symmetrically extend the initial data about x = 1 and y = 1, and use periodic boundary conditions. The simulations are run using a time step of 2 × 10^-4 to a final time of t_f = 0.5, where the solution is compared against the exact solution found in <cit.>.
Figure <ref> shows a sample solution at the final time computed using 128^2 elements and k = 6 in the domain 𝐱∈ [0,1]^2 and the corresponding residual-based viscosity field. Figure <ref> shows the convergence of the L^2 and L^1 errors as the mesh is refined. The numerical solution and resulting errors match well with those produced in <cit.> using a variety of residual-based viscosity stabilized discretizations, and the residual-based viscosity is properly focused in the shock region.
§.§ Buckley-Leverett Equations
In contrast to the previous examples, conservation laws defined by non-convex flux functions can challenge numerical schemes due to the composite wave nature of their solutions, resulting in connected shocks and rarefactions. Many standard finite volume techniques can fail to converge to the correct entropy solution in these cases <cit.>. The Buckley-Leverett equations can lead to one such type of conservation law. For a full description of these equations, we refer to <cit.>. These equations describe the flow of two immiscible fluids (historically oil and water) within a porous media, and are given by an elliptic equation describing the pore pressure, Darcy's law relating pressure to fluid velocity, and a hyperbolic equation relating the saturation of each phase to the fluid velocity.
A common technique to solve these equations is known as Implicit Pressure Explicit Saturation (IMPES) <cit.>, where the pressure (and thus velocity) is computed via an implicit solver and the saturation is updated via an explicit scheme. If we assume there are no sources in the domain and the relevant material properties are all set to 1 as in <cit.>, the 1D conservation law to be solved for saturation is given by
∂ϕ/∂ t + ∂/∂ x( ϕ^2/(ϕ^2 + (1-ϕ)^2) ) = 0,
where ϕ is the water saturation field (or the saturation field of the equivalent phase).
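For reference, the non-convex flux and the corresponding wavespeed estimate used by the first-order viscosity can be written compactly; the analytic derivative below follows from the quotient rule and is included only as a convenience for this sketch.

```python
import numpy as np

def bl_flux(phi):
    """1D Buckley-Leverett flux with the unit material parameters assumed above."""
    return phi**2 / (phi**2 + (1.0 - phi)**2)

def bl_wavespeed(phi):
    """|f'(phi)| = |2 phi (1 - phi)| / (phi^2 + (1 - phi)^2)^2."""
    return np.abs(2.0 * phi * (1.0 - phi)) / (phi**2 + (1.0 - phi)**2) ** 2
```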
An even more complex conservation law results when gravitational forces are included in the formulation. For a 2D problem with gravity acting in the negative y direction and the same set of parameters as <cit.>, the conservation law defining saturation is given by
∂ϕ/∂ t + ∇·𝐟 = 0,
where 𝐟 = ( ϕ^2/(ϕ^2 + (1-ϕ)^2), ϕ^2 (1-5(1-ϕ)^2)/(ϕ^2 + (1-ϕ)^2) ). For all of the examples in this section we set C_RB = 4, C_max = 0.25, and C_lin = 0.25. While the results were stable with C_max = 0.5, we found a reduction necessary to achieve optimal convergence rates for the 1D Riemann problem described next.
§.§.§ 1D Riemann Problem
For this conservation law we focus on results for non-smooth problems only, starting with a standard 1D Riemann problem. The domain is defined as x ∈ [-1, 1] with initial data
ϕ(x, 0) = 1 for x < 0 and 0 otherwise.
The solution in this case exhibits a compound structure of both a rarefaction and shock moving to the right. We simulate until a final time of t_f = 0.25 using a time step of 5 × 10^-5 and compare the results with the exact solution computed using the method of Osher <cit.>.
Figure <ref> details the convergence rates of the L^2 and L^1 errors of the numerical solution at the final time. Once again rates that are close to optimal are recovered in both norms. We also begin to observe some split behavior for the odd and even polynomial degrees as the mesh is refined, like in the previous tests. Figure <ref> shows the solutions obtained using k = 5 and varying numbers of elements, as well as the corresponding residual-based viscosities. The rarefaction-shock structure is appropriately captured and the viscosity is focused on the shock, though there are some small oscillations that appear where the shock and rarefaction meet.
§.§.§ 2D Problem with Gravity
Next we consider the 2D problem with gravity with initial condition defined by
ϕ(𝐱, 0) = 1 for x^2 + y^2 < 1/2 and 0 otherwise,
on the domain 𝐱∈ [-1.5, 1.5]^2. There is no exact solution to this problem, but we show a sample numerical solution computed at t_f = 0.5 with 256^2 elements, k=5, and a time step of 1 × 10^-4 in Figure <ref>. The solution agrees well with the ones presented in <cit.> using a limited finite volume technique and entropy viscosity stabilized finite elements. The 2D composite wave structure is well captured by the residual-based viscosity and correctly depicts the variation in strength of the shock. We do note, however, that there is still some oscillatory content in the solution, particularly near the strongest shock location at the top of the high ϕ region. This can be removed by increasing C_RB or C_max to further smooth the solution.
§.§ Euler Equations
Our final set of results is obtained for the Euler equations, in which we also show the different effects produced by the Laplacian and Guermond-Popov flux regularizations. For the Laplacian fluxes, we set C_RB = 4, C_max = 0.1, and C_lin = 0.25. When using the Guermond-Popov fluxes, we have found a little more variation necessary to achieve optimal results. For the 1D Euler equation examples we set C_RB = 4, C_max = 0.2, and 𝒫 = 0.5 while for the 2D case we have found setting C_RB = 4, C_max = 0.1, and 𝒫 = 1 produces a slightly higher quality result. A similar change was also made in <cit.> for this 2D test case. In both one and two dimensions we use C_lin = 0.25.
§.§.§ Isentropic Flow
We begin by considering a problem with a smooth solution for one final verification that the stabilization terms do not adversely affect the accuracy of the scheme. The computational domain is x ∈ [-1, 1] with initial conditions
ρ(x, 0) = 1 + 0.9sin(π x)
ρ u(x, 0) = 0
E(x, 0) = ρ^γ/(γ-1).
We set γ = 3 and enforce periodic boundary conditions. We simulate to a final time of t_f = 0.1 using a time step of size 5 × 10^-5, and the resulting flow should be isentropic. Thus we should recover the expected convergence rates for smooth problems with the stabilized collocation schemes.
For brevity, we only present results obtained using the Guermond-Popov fluxes; the results obtained with the Laplacian regularization are very similar. Figure <ref> details the L^2 errors obtained for each of the conserved quantities with unstabilized collocation, collocation with only linear stabilization, and collocation with linear and nonlinear stabilization. These results behave much like the corresponding linear advection and Burgers results. For all three conserved quantities the expected rates of k and k+1 for even and odd degrees are recovered, and only on the coarsest meshes does the residual-based viscosity noticeably increase the magnitude of the errors.
§.§.§ Sod Shock Tube
Next we consider the standard Sod shock tube Riemann problem, with initial conditions given by
ρ(x, 0) =
 1 if x < 1/2,
 0.125 otherwise,
ρ u(x, 0) = 0,
E(x, 0) =
 1/(γ-1) if x < 1/2,
 0.1/(γ-1) otherwise,
over the domain x ∈ [0,1] with γ = 1.4. We integrate in time until t_f = 0.25 using a time step of size 1 × 10^-4, during which the initial discontinuity separates into a rarefaction, contact wave, and shock according to the characteristics of the system.
The L^2 and L^1 errors in the conserved quantities compared to the exact solution at the final time are shown in Figure <ref> for Laplacian flux regularization and Figure <ref> for Guermond-Popov regularization. In all cases rates close to the optimal values of 0.5 and 1 are obtained. We also see lower errors for even values of k in all of the cases, especially in the L^2 errors of the conserved quantities. Finally, the magnitude of the errors matches that obtained in <cit.> using finite elements stabilized with an entropy viscosity and the Guermond-Popov fluxes, as well as the results obtained in <cit.> using RBF-FD methods stabilized with a residual-based viscosity and the Laplacian fluxes.
For a more qualitative comparison, Figure <ref> shows the density, velocity, pressure, and residual-based viscosity solutions for k = 5 and varying numbers of elements using the Laplacian fluxes, while Figure <ref> details the same results obtained using the Guermond-Popov regularization. The results match the exact solutions well in both cases, with the Laplacian flux results perhaps being slightly more diffuse near the contact. We also note that the residual-based viscosity is not active at the contact discontinuity, only at the shock location.
§.§.§ Shu-Osher Shock Tube
Next we consider the Shu-Osher shock tube, which models the interaction of a Mach 3 shock with a sinusoidal density field <cit.>. The computational domain is x ∈ [0,10] and the initial condition is
ρ(x, 0) =
 3.857 if x < 1,
 1+0.2sin(5x) otherwise,
u(x, 0) =
 2.629 if x < 1,
 0 otherwise,
p(x, 0) =
 10.333 if x < 1,
 1 otherwise.
The solution to this problem is characterized by the creation of high frequency density waves as the initial waves interact with the shock. This makes it a challenging test problem for numerical schemes, as the scheme must be dissipative enough to capture the shock without adding so much dissipation that the structure of the solution is destroyed. We simulate using a time step of 2 × 10^-5 to a final time of t_f = 1.8.
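For reference, the following sketch (ours) builds this initial state and converts it to the conserved variables (ρ, ρu, E) evolved by the scheme, assuming γ = 1.4 as in the Sod case.

import numpy as np

GAMMA = 1.4

def shu_osher_init(x):
    # Primitive initial data on x in [0, 10].
    rho = np.where(x < 1.0, 3.857, 1.0 + 0.2 * np.sin(5.0 * x))
    u   = np.where(x < 1.0, 2.629, 0.0)
    p   = np.where(x < 1.0, 10.333, 1.0)
    return rho, u, p

def primitive_to_conserved(rho, u, p):
    # Map (rho, u, p) to (rho, rho*u, E) with E = p/(gamma-1) + rho*u^2/2.
    E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
    return rho, rho * u, E

x = np.linspace(0.0, 10.0, 401)
U = primitive_to_conserved(*shu_osher_init(x))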
Figures <ref> and <ref> show the density fields obtained using each of the regularizations, with k = 4 and 2 different resolutions, as well as a reference solution obtained using a fifth order WENO-JS scheme on a 2000 cell grid. In this case there are much more pronounced differences between the Laplacian and Guermond-Popov results. The density field obtained using the Laplacian fluxes is much more damped than the Guermond-Popov results at the same resolutions. The Laplacian results look similar to the standard WENO results in <cit.>, which are known to be very diffusive. The Laplacian results can be slightly improved by weakening the nonlinear stabilization, however we have found that the improvement is minimal. Meanwhile, weakening the stabilization can reveal strong over- and undershoots at the shock location.
The high frequency density oscillations in the 400 element, Guermond-Popov solution are almost identical to the reference solution. In this region the 200 element solution performs about as well as state of the art TENO schemes designed to minimize numerical dissipation in cases such as this one with the same number of elements <cit.>. We do note, however, that there are small spurious oscillations which develop near the shocklets further upstream. In these regions the residual-based viscosity is still small, which suggests that further improvements to this stabilization mechanism may be warranted.
This test case also produces interesting results about the role of the linear stabilization in the collocation scheme. Figure <ref> shows the zoomed-in density field as well as the full residual-based viscosity field for the Guermond-Popov scheme with a variety of linear stabilization strengths, including when it is removed completely. We can see the high frequency oscillations superimposed onto the density solution upwind of the shock, especially where the solution should be constant. We also see the residual-based viscosity activated over most of the domain, though it is ineffective at removing these oscillations, like in the linear advection case. Because of this, in the region of large density oscillations just after the shock we see that the results obtained without linear stabilization are actually slightly more dissipative (in terms of the larger structures) than results obtained with linear stabilization. Finally, we note that the results obtained with C_lin = 0.1 and C_lin = 0.25 are almost identical to one another.
§.§.§ 2D Riemann Problem: Case 12
As a final test case, we consider the 2D Riemann problem titled Case 12 from <cit.>. The domain of interest is 𝐱∈ [0,1]^2, with initial condition given by
ρ(𝐱, 0) =
 4/5 if x < 1/2 and y < 1/2,
 1 if x < 1/2 and y ≥ 1/2,
 1 if x ≥ 1/2 and y < 1/2,
 17/32 otherwise,
𝐮(𝐱, 0) =
 (0,0) if x < 1/2 and y < 1/2,
 (3/√(17),0) if x < 1/2 and y ≥ 1/2,
 (0,3/√(17)) if x ≥ 1/2 and y < 1/2,
 (0,0) otherwise,
p(𝐱, 0) =
 1 if x < 1/2 and y < 1/2,
 1 if x < 1/2 and y ≥ 1/2,
 1 if x ≥ 1/2 and y < 1/2,
 2/5 otherwise.
Like the 2D Burgers example, we extend the domain to 𝐱∈ [0,2]^2, reflect the initial conditions such that the problem is symmetric, and use periodic boundary conditions. The simulation is advanced using a time step of 2 × 10^-4 to a final time of t_f = 0.25 so that we may compare the final solution against a number of published results.
The left of Figures <ref> and <ref> show the solutions obtained with 400^2 elements over the domain 𝐱∈ [0,1]^2 and k = 5, where the contours represent density, the arrows represent the velocity field, and the colormap represents the pressure, as in <cit.>. Note that this yields roughly the same number of degrees of freedom as the finite difference simulations in <cit.>, the finite element solution in <cit.>, and the RBF-FD solution in <cit.>. The right of Figures <ref> and <ref> show the resulting residual-based viscosity fields. Both solutions agree very well with the finite difference solutions of <cit.>. Moreover, the density solutions are much less noisy and oscillatory than the solutions presented in <cit.>, where the former was computed using an RBF-FD discretization and the Laplacian fluxes, while the latter was computed using standard finite elements and the Guermond-Popov regularization.
The density peaks obtained with the Guermond-Popov fluxes are slightly sharper than those from the Laplacian solution, and these match well with those presented in <cit.> generated using TENO schemes on a refined mesh of 1024^2 cells. Consistent with the notion that the Guermond-Popov scheme is less dissipative, we do see some oscillations in the density field, especially near the stationary contact lines. These are much weaker in the Laplacian results. Within the Guermond-Popov scheme, these oscillations could be removed by increasing the viscosity, or perhaps using a definition of viscosity which activates more strongly in contacts, like the one used in <cit.>.
§ CONCLUSIONS
In this work we have shown a method to construct spline collocation methods suitable for the simulation of hyperbolic conservation laws. By adding a residual-based shock capturing scheme along with a linear stabilization inspired by projection stabilization in FEA, our schemes are robust in the presence of shocks while maintaining the high-order approximation power of spline-based methods in the absence of discontinuities. Due to the nature of collocation schemes, these methods also have the potential for extremely efficient explicit time integration, since no numerical spatial integration is required. Results obtained for a variety of conservation laws show the promise of the method as a simulation tool for compressible CFD.
We believe this work opens up many interesting avenues for future research. One such interesting topic is the assessment of the effectiveness of different types of nonlinear stabilization techniques. While we have presented preliminary techniques in this work which seem to be effective, we do not claim that this is the only way one could achieve stable collocation schemes. Indeed we have seen in this work that different regularizations of the Euler equations can lead to different results. Different sensors which drive the stabilizing viscosity field could also be constructed, for example a WENO reconstruction sensor could be used <cit.>. In addition, adapting other schemes such as positivity preserving limiters <cit.> to the collocation setting is also a very attractive direction. We are also interested in the application of this method to the compressible Navier-Stokes equations, where the scheme's effectiveness for efficient, scale-resolving simulations of turbulence could be studied in more detail. The stabilized scheme may also be able to function as an implicit Large Eddy Simulation technique, similar to <cit.>. Finally, while we have focused on simple domains in this work, results on more complicated domains could be obtained using, for example, NURBS basis functions and the isoparametric concept, or immersed collocation techniques which have recently been developed <cit.>.
|
http://arxiv.org/abs/2307.05653v1 | 20230711141445 | A Quasi Time-Reversible scheme based on density matrix extrapolation on the Grassmann manifold for Born-Oppenheimer Molecular Dynamics | [
"Federica Pes",
"Ètienne Polack",
"Patrizia Mazzeo",
"Geneviève Dusson",
"Benjamin Stamm",
"Filippo Lipparini"
] | physics.chem-ph | [
"physics.chem-ph",
"cond-mat.soft"
] |
Dipartimento di Chimica e Chimica Industriale, Università di Pisa,
Via G. Moruzzi 13, 56124 Pisa, Italy
CERMICS, École des Ponts and Inria Paris, 6 & 8 avenue Blaise Pascal, 77455 Marne-la-Vallé, France
Dipartimento di Chimica e Chimica Industriale, Università di Pisa,
Via G. Moruzzi 13, 56124 Pisa, Italy
Laboratoire de Mathématiques de Besançon, UMR CNRS 6623,
Université de Franche-Comté, 16 route de Gray, 25030 Besançon, France
Institute of Applied Analysis and Numerical Simulation, University of Stuttgart, 70569 Stuttgart, Germany
[email protected]
Dipartimento di Chimica e Chimica Industriale, Università di Pisa,
Via G. Moruzzi 13, 56124 Pisa, Italy
This article proposes a so-called Quasi Time-Reversible (QTR G-Ext) scheme based on Grassmann extrapolation of density matrices for an accurate calculation of initial guesses in Born-Oppenheimer Molecular Dynamics simulations.
The method shows excellent results on four large molecular systems, ranging from 21 to 94 atoms treated with Kohn-Sham density functional theory and surrounded by a classical environment of 6k to 16k atoms.
Namely, it clearly reduces the number of self-consistent field iterations, while keeping a similar energy drift as in the extended Lagrangian Born-Oppenheimer method.
A Quasi Time-Reversible scheme based on density matrix extrapolation on the Grassmann manifold for Born-Oppenheimer Molecular Dynamics
Filippo Lipparini
August 12, 2023
======================================================================================================================================
Ab-initio, Born-Oppenheimer molecular dynamics (BOMD) is a very powerful and versatile tool to simulate molecular processes where the quantum nature of the system is not negligible.
Unfortunately, this comes at a high computational price, which stems from the necessity of solving the quantum mechanical (QM) equations, typically Kohn-Sham Density Functional Theory (KS-DFT) equations, to compute the energy and forces at every time-step.
Such equations are nonlinear and are solved using a fixed-point iterative method known as the Self-Consistent Field <cit.> (SCF). BOMD simulations, which require one to perform tens of thousands of SCF calculations, thus rely heavily on extrapolation techniques <cit.> that, by using converged solutions from previous iterations, compute an accurate guess for the SCF, limiting the number of iterations required to achieve convergence.
A significant contribution to this field was given by Niklasson and co-workers in 2006, with their work on the time-reversible extrapolation for Born-Oppenheimer Molecular Dynamics (BOMD) <cit.>.
The core concept involves generating a guess density matrix by combining the density matrices from previous steps in a symmetric and time-reversible manner. However, numerical applications showed that enforcing an exact time-reversibility can lead to errors accumulating in long-time simulations,
thus spoiling the convergence properties of the algorithm in the long run. This led to the development of the Extended Lagrangian Born-Oppenheimer approach (XLBO) in 2008 <cit.>.
In this particular case, the time-reversible extrapolation is augmented by the inclusion of a dissipative term, which serves to reduce numerical fluctuations.
In the last few years, the XLBO method has also been proposed in a SCF-less formulation, <cit.> where the density computed using the XLBO procedure is used directly without further SCF iterations.
In Niklasson's XLBO scheme, the guess density is propagated in time subject to a potential that forces it to be close to the converged density. However, the guess density obtained with XLBO is not exactly idempotent <cit.>, which is in practice a problem that can be either ignored, or easily addressed by, for instance, McWeeny purification <cit.>.
Recently, we proposed a different strategy to compute a guess density using linear extrapolation. This is non-trivial, because in general a linear combination of density matrices does not preserve idempotency or, in other words, density matrices belong to a differentiable manifold called Grassmann manifold and not to a vector space.
Our approach uses tools from differential geometry to map the Grassmann manifold onto its tangent space, which is a vector space. It then performs a linear extrapolation on the tangent space, and then maps back the extrapolated density to the manifold. We named such a method Grassmann extrapolation (G-Ext) <cit.>.
G-Ext has been successfully adopted in the Pisa-group for both ground- and excited-state SCF-based BOMD simulations in a polarizable multiscale framework <cit.>.
While G-Ext is very effective, indeed outperforming XLBO in terms of the average number of SCF iterations required to achieve convergence along an MD trajectory <cit.>, numerical experiments have shown that the extrapolation introduces a bias causing a drift in the total energy for NVE simulations.
Such an energy drift is modest (a few kcal/mol in 10 ps), but non-negligible, at least for not-too-tightly-converged calculations <cit.>. In this contribution, we address such a limitation by introducing a new strategy to perform the extrapolation, which we name the Quasi Time-Reversible Grassmann extrapolation method (QTR G-Ext).
This approach leverages the principles of differential geometry, similarly to the previous method, but offers enhanced accuracy and speed in extrapolating the density matrix during a BOMD simulation, as well as excellent energy conservation properties.
Given an 𝒩-dimensional basis, the SCF solves the following nonlinear eigenvalue problem, which consists in finding a matrix C and a diagonal matrix E such that
F(D)C=SCE
C^TSC=I_N
D=CC^T,
where C∈ℝ^𝒩× N contains the 𝒩 coefficients of the N occupied molecular orbitals, D∈ℝ^𝒩×𝒩 is the density matrix, E∈ℝ^N× N is a diagonal matrix whose entries are the energy levels, F denotes the DFT operator, S∈ℝ^𝒩×𝒩 is the overlap matrix, and I_N denotes the identity matrix of order N.
We assume that the density matrix is orthonormalized. In any case, it can be transformed into such a matrix by considering the Löwdin factorization of the overlap matrix S and consequently the modified coefficient matrix C̃ = S^1/2C. Then the normalized density matrix D̃ = C̃C̃^T = S^1/2DS^1/2 belongs to the manifold
Gr(N,𝒩) = { D∈ℝ^𝒩×𝒩 | D^2=D=D^T, tr(D)=N },
which is isomorphic to the so-called “Grassmann manifold”; we therefore refer to it by this name.
From now on, we assume that the density matrix has been orthonormalized and we denote it by D.
Since Gr(N,𝒩) is a differentiable manifold, given a point D_0∈Gr(N,𝒩), there exists a tangent space 𝒯_D_0⊂ℝ^𝒩× N, such that tangent vectors Γ(D)∈𝒯_D_0 can be associated with nearby points D∈Gr(N,𝒩).
In MD, t→R(t) represents the trajectory of the nuclei.
The transformation of the electronic structure can be interpreted as a trajectory denoted by t→ D_R(t) on the manifold.
In order not to burden the notation, we simply indicate D in place of D_R(t).
The objective is to determine a suitable approximation for the density matrix at the next step of the molecular dynamics trajectory by extrapolating the densities from previous steps.
Since the tangent space 𝒯_D_0 is a vector space, we approximate the density matrix on 𝒯_D_0.
In order to solve the extrapolation problem, we decompose the mapping R→ D as a composition of several maps
ℝ^3M ⟶𝒟⟶𝒯_D_0⟶Gr(N,𝒩)
R ⟼ d ⟼Γ ⟼ D,
where
the first function R↦ d is a map from atomic positions to molecular descriptors. Here, as a descriptor, we use the Coulomb matrix <cit.> d ∈ℝ^N_ QM× N_ QM,
(d)_ij =
 0.5 z_i^2.4 if i=j,
 z_i z_j/‖R_i - R_j‖ if i≠ j,
where N_ QM is the number of atoms treated quantum mechanically, z_i denotes the nuclear charge of the ith atom, and R_i its position. Note that other descriptors can also be considered.
We will detail the crucial mapping d ↦Γ below.
The mapping Γ↦Exp_D_0(Γ)=D is the so-called Grassmann exponential, which maps tangent vectors in 𝒯_D_0 to Gr(N,𝒩), and it is a locally bijective function in a neighborhood of D_0. Its inverse D↦Log_D_0(D)=Γ(D) is the Grassmann logarithm. These mappings are computed by means of the singular value decomposition (SVD). For mathematical details, the interested reader is referred to <cit.>.
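A minimal NumPy sketch of these two maps is given below, in the coefficient representation (an 𝒩 × N matrix C with orthonormal columns, so that D = CC^T). It uses the standard SVD-based formulas for the Grassmann exponential and logarithm; the function names are ours and the actual implementation of the G-Ext codes may differ in details such as the choice of subspace representative.

import numpy as np

def grassmann_log(C0, C):
    # Gamma = Log_{D0}(D): tangent vector at span(C0) pointing towards span(C).
    # C0 and C have orthonormal columns; C0^T C must be invertible (nearby subspaces).
    L = C @ np.linalg.inv(C0.T @ C) - C0
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt

def grassmann_exp(C0, Gamma):
    # C = Exp_{D0}(Gamma): map a tangent vector back to an orthonormal basis.
    U, s, Vt = np.linalg.svd(Gamma, full_matrices=False)
    return C0 @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt

# Round trip: Exp_{C0}(Log_{C0}(C)) spans the same subspace as C,
# i.e. it reproduces the same density matrix D = C C^T (within the local neighborhood).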
In our method, during the MD, we use a fixed reference point D_0 to construct the tangent space 𝒯_D_0.
Let n be the current time step of the MD.
Given the previous q snapshots Γ_n-i=Log_D_0(D_n-i), for i=1,…,q, the approximation of the density matrix on the tangent space is written as
Γ̃_n = -Γ_n-q +
∑_i=1^q̃α_i ( Γ_n-i+Γ_n-q+i),
where q̃ = q/2 if q is even, while q̃ = (q-1)/2 if q is odd.
We remark that if, in Eq. (<ref>), the converged term Γ_n-q is substituted by the previously extrapolated one, Γ̃_n-q, a “fully” time-reversible approach (instead of a quasi time-reversible one) is obtained. Numerical experiments with the fully time-reversible approach, which are reported in the Supporting Information (SI), showed good behavior for total energy conservation, but unfortunately a strong increase in the number of performed SCF iterations.
The descriptors are involved in the computation of the coefficients α=[α_1,…,α_q̃]^T appearing in Eq. (<ref>).
Indeed, they are computed by solving the least-squares problem with Tikhonov regularization
min_α∈ℝ^q̃{‖ d_n + d_n-q -
∑_i=1^q̃α_i ( d_n-i+d_n-q+i)
‖^2 + ε^2 ‖α‖^2 },
where ‖·‖ denotes the ℓ^2-norm and ε>0 is the regularization parameter. Since the Coulomb matrix (<ref>) is symmetric, in the above formula d_j represents the vectorized Coulomb matrix considering the lower triangle. In matrix form, it corresponds to solving the following least-squares problem
min_α∈ℝ^q̃‖[ d̂; 0 ] -
[ D; ε I_q̃ ]α‖^2,
where the vector d̂ = d_n + d_n-q is padded with q̃ zeroes,
D∈ℝ^N_d×q̃ is the matrix whose columns are defined as D_·,i=d_n-i+d_n-q+i, and I_q̃ is the identity matrix of order q̃.
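Putting the extrapolation formula and the regularized least-squares problem together, a compact sketch of the coefficient determination and of the guess assembly could look as follows (ours; the array conventions, with index i storing the quantity at step n-i, are an assumption):

import numpy as np

def qtr_coefficients(d_hist, q, eps):
    # d_hist[i] is the vectorized descriptor at step n - i (i = 0..q),
    # so d_hist[0] = d_n and d_hist[q] = d_{n-q}.
    q_t = q // 2 if q % 2 == 0 else (q - 1) // 2      # q-tilde
    rhs = d_hist[0] + d_hist[q]                        # d_n + d_{n-q}
    D = np.column_stack([d_hist[i] + d_hist[q - i] for i in range(1, q_t + 1)])
    A = np.vstack([D, eps * np.eye(q_t)])
    b = np.concatenate([rhs, np.zeros(q_t)])
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha

def qtr_guess(Gamma_hist, alpha, q):
    # Gamma_hist[i] = Log_{D0}(D_{n-i}) for i = 1..q (entry 0 is unused).
    q_t = len(alpha)
    guess = -Gamma_hist[q]                             # converged Gamma_{n-q}
    for i in range(1, q_t + 1):
        guess = guess + alpha[i - 1] * (Gamma_hist[i] + Gamma_hist[q - i])
    return guess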
Then the initial guess for the density matrix is obtained as the composition of the three maps in (<ref>), where the second map d ↦Γ is given by (<ref>). Note that if this second map, denoted by f, were linear, then the guess would be close to exact, namely
Γ_n = f(d_n) ≈
f( - d_n-q +
∑_i=1^q̃α_i ( d_n-i+d_n-q+i) )
= - f(d_n-q) +
∑_i=1^q̃α_i [ f(d_n-i) + f(d_n-q+i) ]
= -Γ_n-q +
∑_i=1^q̃α_i ( Γ_n-i+Γ_n-q+i) = Γ̃_n.
The number q of density matrices taken at previous steps and the value of the regularization parameter ε are chosen in a heuristic manner: we computed the error ‖Γ̃_n - Γ_n‖ for different values of q and ε, specifically q=3,4,…,20 and ε = 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, and we selected the combination (q,ε) corresponding to the minimal error.
When the SCF convergence threshold is 10^-5, we found that good values are q=5 and ε=0.005, while if it is fixed to 10^-7, we found q=4 and ε=0.001,0.002. Additional details on the selection of q and ε values can be found in Section S1 of the SI.
The QTR G-Ext approach is tested on four different systems.
The first system is dimethylaminobenzonitrile (DMABN) in methanol. The second system is 3-hydroxyflavone (3HF) in acetonitrile. The last two systems (OCP and AppA) are chromophores embedded in a biological matrix: namely, a carotenoid in the orange carotenoid protein (OCP) and a flavin in the AppA Blue-Light Using Flavin photoreceptor <cit.>.
Some information on the systems is reported in Table <ref>.
KS-DFT has been adopted to describe the QM subsystem, with the B3LYP hybrid functional <cit.> and the 6-31G(d) Pople's basis set <cit.>. This is coupled with a polarizable description of the environment, using the AMOEBA forcefield <cit.>.
For each system, we performed a QM/AMOEBA geometry optimization until a root-mean-square norm on the forces of 4 kcal/mol/Å is found and finally a 2 ps QM/AMOEBA NVT equilibration to obtain the starting point of the simulations presented in this work.
All simulations have been performed using the Gaussian-Tinker interface <cit.>.
We implemented the QTR G-Ext extrapolation approach in Tinker <cit.>.
To assess the quality of the guess density obtained by the QTR G-Ext extrapolation, we performed 10 ps BOMD simulations, with 0.5 fs time step, in the NVE ensemble, using the velocity Verlet integrator <cit.>. All systems were tested with an SCF convergence threshold fixed to 10^-5 and 10^-7 with respect to the RMS variation of density.
We compare our approach, in terms of energy stability and number of iterations required to reach convergence, with two other extrapolation schemes: the G-Ext scheme <cit.>
Γ̃_n =
∑_i=1^qα_i Γ_n-i, q=6,
where the α_i are computed by solving
min_α∈ℝ^q{‖ d_n - ∑_i=1^qα_i d_n-i‖^2 + ε^2 ‖α‖^2 }, ε=0.01,
and XLBO without McWeeny purification <cit.>
D̃_n = 2 D̃_n-1 - D̃_n-2 +
κ( D_n-1 - D̃_n-1) +
c ∑_i=1^8α_i D̃_n-i,
with fixed parameters κ=1.86, c=0.0016, and α=(-36, 99, -88, 11, 32, -25, 8, -1), where D̃ denotes a guess density and D the corresponding SCF-converged one.
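For comparison, a sketch of this XLBO guess is given below (ours; it assumes, as in Niklasson's dissipative scheme, that the Verlet-like propagation and the dissipative sum act on the previous guess densities, while the κ term pulls towards the last converged SCF density).

import numpy as np

KAPPA, C_DISS = 1.86, 0.0016
ALPHA = np.array([-36.0, 99.0, -88.0, 11.0, 32.0, -25.0, 8.0, -1.0])

def xlbo_guess(P_hist, D_prev):
    # P_hist[i] is the guess density at step n-1-i (i = 0..7), most recent first;
    # D_prev is the SCF-converged density at step n-1.
    P = 2.0 * P_hist[0] - P_hist[1] + KAPPA * (D_prev - P_hist[0])
    P = P + C_DISS * sum(a * Pk for a, Pk in zip(ALPHA, P_hist))
    return P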
Figure <ref> provides the plot of the total energy along the DMABN simulation, with a 10^-5 SCF convergence threshold. Despite the non-fully time-reversible formulation of our newly implemented approach, we observe a great improvement with respect to the G-Ext scheme.
In particular, the QTR G-Ext method resembles the fully time-reversible scheme XLBO.
This difference becomes almost imperceptible when the SCF convergence is set to 10^-7 (Figure <ref>): the accumulation of errors that generates the energy drift when G-Ext is used is smaller, so all the extrapolation schemes display the same trend. Analogous figures are reported in Section S2 of the SI for all tested systems.
To better evaluate the energy stability, we consider the average short-time fluctuation (STF) of the energy, computed by taking the RMS of the energy fluctuation every 50 fs and averaging over the trajectory,
and the long-time drift (LTD) for the long-time analysis, that is, the slope of the linear regression line of the energy. Tables <ref> and <ref> report the STF and LTD for convergence thresholds 10^-5 and 10^-7, respectively.
QTR G-Ext, G-Ext, and XLBO show comparable STF, which is specific for the system and is related to the time step for the integration. On the other hand, the absolute value of LTD is in general higher for 10^-5 simulations, in particular for G-Ext. We can state that the QTR G-Ext method solves the energy-drift issue of G-Ext, showing an LTD that is always similar to the XLBO one, suggesting again a good time-reversible behaviour.
The gain of our new methodology is not only in terms of accuracy (energy stability), but also in terms of the computational time of the simulation. Tables <ref> and <ref> report the average number of SCF iterations required to achieve convergence, as well as the standard deviation, for the 10^-5 and 10^-7 SCF thresholds, respectively. We remark that each strategy requires q previous density matrices; before these are available, a standard SCF is performed. Therefore, for the computation of the average and standard deviation, we discard the first q points.
The two tables show that for all the tested systems, the QTR G-Ext method requires the lowest number of SCF iterations, for both convergence thresholds. Moving averages of SCF iteration numbers during the simulations for all systems and with both SCF convergence thresholds are reported in Section S3 of the SI.
In conclusion, we presented the novel Quasi Time-Reversible Grassmann extrapolation scheme aimed at preserving the energy conservation of Newton's equations and, at the same time, at keeping low the number of SCF iterations. This scheme is based on the same properties of differential geometry of our previous extrapolation approach, ensuring that our guess density matrices retain all the mathematical properties of a density matrix. The innovation of this contribution lies in the symmetric combination of vectors in the tangent space, which proved to effectively preserve the stability of the total energy during the simulation. To validate its effectiveness, we conducted tests on systems of different sizes, and we obtained excellent results for all of them.
This work was supported by the Italian Ministry of University and Research under grant 2020HTSXMA_002 (PSI-MOVIE) and by the French ‘Investissements d’Avenir’ program, project Agence Nationale de la Recherche (ISITE-BFC) (contract ANR-15-IDEX-0003). ÉP also acknowledges support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No 810367–project EMC2) as well as from the Simons Targeted Grant Award No. 896630.
BS acknowledges funding by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2075 – 390740016. FP is member of the GNCS group of INdAM.
|
http://arxiv.org/abs/2307.03951v1 | 20230708105908 | High precision tests of QCD without scale or scheme ambiguities | [
"Leonardo Di Giustino",
"Stanley J. Brodsky",
"Philip G. Ratcliffe",
"Xing-Gang Wu",
"Sheng-Quan Wang"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
|
http://arxiv.org/abs/2307.04847v1 | 20230710183759 | Emerging jet probes of strongly interacting dark sectors | [
"Juliana Carrasco",
"Jose Zurita"
] | hep-ph | [
"hep-ph"
] |
IFIC 23-24
Juliana Carrasco and Jose Zurita, Instituto de Física Corpuscular, CSIC-Universitat de València,
Catedrático José Beltrán 2, E-46980, Paterna, Spain
[email protected]@ific.uv.es
A strongly interacting dark sector can give rise to a class of signatures dubbed dark showers, where in analogy to the strong sector in the Standard Model, the dark sector undergoes its own showering and hadronization, before decaying into Standard Model final states. When the typical decay lengths of the dark sector mesons are larger than a few centimeters (and no larger than a few meters) they give rise to the striking signature of emerging jets, characterized by a large multiplicity of displaced vertices.
In this article we consider the general reinterpretation of the CMS search for emerging jets plus prompt jets into arbitrary new physics scenarios giving rise to emerging jets. More concretely, we consider the cases where the SM Higgs mediates between the dark sector and the SM, for several benchmark decay scenarios. Our procedure is validated employing the same model as the CMS emerging jet search. We find that emerging jets can be the leading probe in regions of parameter space, in particular when considering the so-called gluon portal and dark photon portal decay benchmarks. With the current 16.1 fb^-1 of luminosity this search can exclude exotic branching ratios of the SM Higgs down to O (20) %, but a naive extrapolation to the 139 fb^-1 luminosity employed in the current model-independent, indirect bound of 16 % would probe exotic branching ratios into dark quarks down to below 10 %. Further extrapolating these results to the HL-LHC, we find that one can pin down exotic branching ratio values of 1%, which is below the HL-LHC expectations of 2.5-4 %. We make our recasting code publicly available, as part of the LLP Recasting Repository.
Emerging jet probes of strongly interacting dark sectors
[
August 12, 2023
=========================================================
§ INTRODUCTION
Extensions of the Standard Model with a strongly coupled, non-Abelian dark sector have received considerable attention in recent years. The phenomenological possibilities are varied, due to the large number of parameters, such as the gauge group dimension (number of dark colors), the matter field content (number of dark flavors), the mass hierarchies in the dark sector, and the coupling strengths between the dark sector and the Standard Model, as well as the internal dark sector couplings. Considering a collider operating at a center of mass energy √(s), the subclass of models which confine at a scale Λ_D, and where the dark sector masses satisfy m_D ≲Λ_D << √(s), gives rise to a class of signatures generically dubbed dark showers, in analogy to the familiar parton shower in the strong sector of the Standard Model; see <cit.> for a comprehensive review of the current phenomenological, experimental and theoretical status. Dark showers give rise to uncommon, exotic collider objects, such as trackless jets <cit.>, emerging jets <cit.>, semi-visible jets <cit.> and SUEPs <cit.>. An experimental program aiming at dark shower signatures at the LHC is already underway <cit.>.
It is customary <cit.> to dissect the collider phenomenology of dark showers into three pieces: i) production, ii) showering (which actually includes hadronization) and iii) decay. The production part consists of the parton-level production of dark quarks (customarily through 2 → 2 processes), the showering phase includes both the emission of dark gluons and dark quarks and the formation of bound states (dark hadrons), akin to the known behaviour of the strong sector of the Standard model. Indeed, the showering process yields a distinctive signal, as, unlike in normal searches for new phenomena, one is not targeting one or two new particles, but in principle many of them. Hence, the large multiplicity inherent to the dark parton showering is what makes them stand out from other Beyond Standard Model (BSM) searches. The decay modes of these dark hadrons (including SM particles such as leptons, quarks, gluons, or potentially not decaying at all contributing to the overall dark matter abundance) combined with the lifetime spectra of these dark hadrons span a large number of phenomenologically distinct scenarios. Decay benchmarks to guide the experimental exploration have been recently put forward in <cit.>.
If the dark sector contains particles that are long-lived (another research direction that has received considerable attention in the recent past, see e.g. <cit.> for reviews), with lifetimes in the few-mm to few-meter range, and which decay into SM particles, they give rise to emerging jets (EJ). In this work we present a simple and flexible reinterpretation of the CMS search for emerging jets <cit.>, exploiting all publicly available information. We validate our procedure by reproducing the CMS results for their benchmark model, proposed in <cit.>. The software developed for the reinterpretation procedure has been uploaded to the LLP Recasting Repository <cit.>, which we expect to be useful for those interested in a straightforward reinterpretation of this study. We later apply our procedure to the concrete case of exotic Higgs decays, namely we obtain bounds on the branching ratio of the SM-like 125 GeV Higgs boson into a pair of dark quarks. We also show that the bounds arising from the reinterpretation of the emerging jet search, albeit not designed to target this scenario, can nonetheless provide leading constraints in large portions of the parameter space.
This article is organized as follows. In section <ref> we briefly review the phenomenology of emerging jets and the t-channel models from <cit.> which give rise to them. In section <ref>, we show our validation of the CMS emerging jet search. In section <ref> we apply our recasting procedure to a series of BSM decay benchmarks devised in <cit.>. We reserve section <ref> for conclusions. Technical details about the validation of the CMS Emerging Jet is described in Appendix <ref>.
§ EMERGING JET PHENOMENOLOGY
When considering a strongly interacting dark sector, the known behaviour of QCD provides guidance for the relevant phenomenological features of the model. A QCD-like sector with gauge group SU(N_C_D) and N_f_D degenerate Dirac fermions (dark quarks, q_D) would then exhibit asymptotic freedom, and confine at a scale Λ_D, where m_q_D≲Λ_D. When these particles are produced at a collider with a center-of-mass energy √(s)≳ 30 Λ_D[If Λ_D were closer to the available energy √(s), the dark quarks would have limited phase space to shower. The formed dark mesons can then be probed by searches for resonant bound states, see e.g. <cit.>.], the dark quarks hadronize into dark hadrons (π_D, ρ_D, ω_D, ...) which are clustered into collimated dark jets. The resulting signatures will then depend on the dark hadron decay, more specifically on the lifetime spectrum c τ_D, but the main underlying theme is that the shower process can lead to a large multiplicity of objects, while in traditional searches only one or two new particles are targeted.
Following <cit.>, the dark shower can be decomposed into the three parts described above: production, hadronization (in the dark sector) and dark hadron decays. For the production at a collider, it is necessary to connect the Standard Model to the dark sector, which is done through a portal. For emerging jets, the focus of this work, the model proposed in <cit.> employs
a bi-fundamental scalar X (namely, charged under both QCD and the dark sector) which interacts with a SM down-type quark and a dark quark via the following Lagrangian
L⊃ - κ_ijq̅_R i q_D_j X .
While in principle κ is a 3 × N_f_D matrix, we consider here the case where there is one single universal coupling to the right-handed down type quark, to avoid bounds from flavour physics (FCNCs, neutral meson-mixing, rare decays). In this model, X is pair produced with a sizable rate through gauge interactions, and the subsequent X → d_R q_D decay (which happens with 100 % branching ratio) leads us to expect two light jets and two emerging jets. Since the mediator particle X appears in t-channel exchanges, the model of equation <ref> is colloquially referred to as a t-channel model.
In this paper we will also study the production via an s-channel SM Higgs boson h, which falls in the category of exotic Higgs decays <cit.>. Measurements of the Standard Model Higgs properties set Br(h → exotic) < 0.16 at 95 % C.L. from both ATLAS <cit.> and CMS <cit.>, with a total integrated luminosity of 139 and 138 fb^-1 respectively. These bounds are not completely model independent, as they assume that the Higgs boson couples to the electroweak gauge bosons with a strength equal to or smaller than the SM one, which can be violated in certain BSM scenarios. The combination of ATLAS and CMS with 3000 fb^-1 is expected to set a limit of 2.5%, under the assumption that the current systematic uncertainties would be halved <cit.>, or 4% with the current systematic uncertainties <cit.>. This leaves ample room for h → q_D q_D decays to occur with sizable rates. We note that the identification of a resonant, long-lived q_D q_D topology using Machine Learning techniques has been recently studied in reference <cit.> for the SM Higgs and in reference <cit.> for a TeV scale Z'.
The showering and hadronization in the dark sector are conducted through the Hidden Valley module <cit.> within Pythia8 <cit.>. The non-perturbative nature of the dark QCD-like theories prevents one from consistently connecting UV and IR parameters based on a perturbative approach, and hence it is customary to consider a dark sector consisting of spin-1 dark vector mesons ρ_D and of spin-0 pseudoscalar dark pions π_D. As the π_D arise from the breaking of a chiral dark symmetry, they are parametrically lighter than the other mesons in the theory, which decay into dark pions if kinematically allowed. Hence, the phenomenology is dictated by the dark pion properties, in particular their lifetime c τ_π_D. We distinguish three possible regimes depending on the dark pion lifetime. If the dark pions decay promptly (c τ_π_D≲ 1 mm), they end up giving multi-jet signals[If some dark hadrons remain stable, one would have an invisible component within a jet, giving rise to semi-visible jets <cit.>. ]. If, on the contrary, the dark pions are stable in the detector (c τ_π_D≳ 1 m), then they appear as missing energy, and can be targeted by the suite of missing energy signatures that are customarily searched for in the dark matter program at the LHC <cit.>. For c τ_π_D∈ [0.001 - 1 ] m, the dark pions decay inside the detector volume with different decay lengths, depending on their boost and on the fact that their actual decay position is sampled from an exponential distribution.
The decay patterns of the dark pions can be quite varied. As mentioned before, in reference <cit.> to avoid dealing with non-trivial bounds from flavour processes, a 100 % decay rate to right-handed down quarks was assumed. Yet, the possibilities for the decay are quite numerous, and in that light reference <cit.> proposed five decay benchmark models, dubbed decay portals, based on a minimal set of theoretical priors. These decay portals describe how the pseudoscalar and vector dark mesons decay into Standard Model particles [In reference <cit.> the pseudoscalar and vector mesons are noted as η̃ and ω̃. For a unified notation in this paper, we replace them by π_D and ρ_D, respectively. ]. If π_D decays into gluons (photons) through a dimension 5 operator we have the gluon (photon) portal. If π_D instead couples to the Standard Model Higgs via the H^† H operator, then we have the Higgs portal, where the decays to the SM quarks follow the Yukawa hierarchy of a SM-like Higgs with m_H = m_π_D. The other two decay portals have the π_D decaying either through its mixing with the photon (akin to the γ-ρ mixing in the Standard Model), or through the chiral anomaly into a pair of dark photons A^', inspired by the π_0 →γγ SM process. The corresponding Pythia configuration cards for each of these portals can be generated through the public python script <cit.>.
The number of free parameters in both our scenarios is still quite large, and here we follow some additional choices made in the literature. Regarding the t-channel EJ model, N_C_D=3 and N_f_D=7 are inspired by the study of <cit.>, and the dark sector mass parameters are chosen in proportion Λ_D: m_q_D: m_ρ_D: being 2:2:4:1. This choice ensures that the vector meson always decays into two dark pions, and was followed by the CMS collaboration in their emerging jets search <cit.> [The inclusion of a non-trivial flavour structure, where Λ_D > m_q_D is assumed, was studied in <cit.>.].
The free parameters for the analysis are then m_X, m_π_D and cτ_π_D. Regarding the s-channel SM Higgs production with its different decay portals, we assume N_C_D=3 and N_f_D=1, and Λ_D: m_q_D: m_ρ_D: m_π_D is now 2.5:0.4:2.5:1. Since the mediator mass is known, the only free parameters of the model are m_π_D, cτ_π_D and the exotic branching ratio BR(h → q_D q_D).
A few details about our simulations are in order. Within the Pythia Hidden Valley module, we set the parameter to 1.1 Λ, and the flag is set to 0.318, following the considerations discussed in Appendix A of <cit.>. In section <ref> we employ Pythia version 8.212, used in the CMS study, as our aim is to reproduce the published limits. For section <ref> we employ instead Pythia 8.307, because (as explained in <cit.>) this version has corrected a previous flaw in the code, that tended to overproduce hidden hadrons at very low p_T.
§ VALIDATION OF THE CMS EMERGING JET SEARCH
In this section we discuss in detail the validation of the CMS search for emerging jets using a total integrated luminosity of 16.1 fb^-1 <cit.>.
The CMS collaboration targets the dark QCD model from Equation <ref>, via p p → X X followed by X → q q_D. Hence naively one expects to find two emerging jets and two SM jets. Events are selected by passing the H_T > 900 GeV trigger, where H_T is the scalar sum of the transverse momenta of all hadronic jets in the event, clustered with R=0.4 using the anti-k_T algorithm <cit.> applied to all tracks with p_T > 1 GeV. These events are required to have at least four jets within |η| < 2.0, and they undergo a further selection using kinematical variables to tag these jets as “emerging” and to define signal regions (called sets in the CMS paper). The explicit requirements are collected in Appendix <ref>, together with the 95 % C.L. limit on the number of signal events in each selection set, S_95^i.
The total number of signal events in each set can be computed as
N_S^i (m_X, m_π_D, cτ_π_D) = σ (p p → X X) × ( BR(X → q q_D))^2 × A_i (m_X, m_π_D, cτ_π_D) × L
where L is the total integrated luminosity, A_i is the acceptance for the i-th set number [In this article, we will use the terms “efficiency” and “acceptance” interchangeably to denote the A_i function.] and the production rate σ ( p p → X X → q q_D q q_D) has been decomposed into the pair production cross section for X pairs (which proceeds through gauge couplings and hence is independent of m_π_D and cτ_π_D) times the branching ratio of X → q q_D, which we set to unity throughout this work [We have explicitly verified that the narrow width approximation is fulfilled in our model points, which allows us to factorize the total rate into a cross section and a branching ratio.]. To benchmark the search, CMS considered the following parameters:
* m_X [GeV]={400, 600, 800, 1000, 1250, 1500, 2000}.
* m_π_D [GeV]={1, 2, 5, 10}.
* cτ_π_D [mm]={1, 2, 5, 25, 45, 60, 100, 150, 225, 300, 500, 1000}.
For the m_π_D=5 GeV case, CMS has provided the acceptances A_i in the m_X - cτ_π_D plane, indicating which of the seven selection sets is the most sensitive one. In other words, for each m_X - cτ_π_D point scanned, with m_π_D=5 GeV, only one of the possible seven A_i functions is given.
In what follows, we define “exclusion” by requiring that the ratio of our predicted number of events from equation <ref> over the excluded one,
R_95 = max_i{N_S^i/S_95^i} ,
is equal to unity, which is a common practice when performing reinterpretations <cit.>.
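In practice, this criterion amounts to the following computation (a sketch with our own naming; cross sections in pb are converted to fb to match the luminosity, and the S_95^i are the per-set limits collected in the appendix):

import numpy as np

def r95(sigma_pb, br_Xqqd, acceptances, s95, lumi_fb=16.1):
    # R_95 = max_i N_S^i / S_95^i for one (m_X, m_piD, ctau_piD) point.
    n_sig = sigma_pb * 1.0e3 * br_Xqqd**2 * np.asarray(acceptances) * lumi_fb
    return np.max(n_sig / np.asarray(s95))

# A parameter point is excluded at 95% CL whenever r95(...) >= 1.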
§.§ Exclusion using published efficiencies
We start by comparing the published exclusion limit with the one that can be derived using the A_i map from <cit.>. We present our results in figure <ref>, where the published CMS exclusion is shown in solid black. For our results, we need to provide a production cross section for the p p → X X process. First, we use the cross section reported during the run of Pythia 8, which is a leading-order (LO) result in the strong QCD coupling α_S, shown in green. Second, we employ the cross sections used by CMS from <cit.>, corresponding to down-type squark pair production, computed at next-to-leading order in perturbation theory and including next-to-leading-logarithmic corrections from soft gluon resummation <cit.>, which is displayed in blue.
We conclude that the provided A_i values are self-consistent, and we also verify that the limits were derived using NLO cross sections.
§.§ Kinematic distributions
As a second step of our validation, we will employ the published kinematic distributions. To tag the jets passing the selection as emerging, the following track-based variables <cit.> are considered:
* ⟨IP_2D⟩: the median of the unsigned transverse impact parameters of the tracks in the jet.
* PU_dz: the distance between the z position of the primary vertex (PV), z_PV, and the z position of the track at its closest approach to the PV.
* D_N: the 3-D distance between the track and the primary vertex, weighted by the inverse resolution,
D_N^2 = ( (z_PV - z_trk)/0.01 cm ) ^2 + ( (r_trk-r_PV)/σ_r)^2 .
* α_3D: the ratio between the scalar p_T sum of all tracks with D_N below a given threshold and the scalar p_T sum of all tracks, hence 0 ≤α_3D≤ 1.
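A minimal sketch of how these per-jet quantities can be built from track-level input is given below (our own helper functions, not the experimental code; lengths are in cm and the D_N threshold entering α_3D is one of the working points used in the selection sets):

import numpy as np

def ip2d_median(d0):
    # <IP_2D>: median unsigned transverse impact parameter of the jet tracks.
    return np.median(np.abs(d0))

def track_DN(z_trk, z_pv, r_trk, r_pv, sigma_r):
    # D_N of a track: 3D distance to the PV weighted by the inverse resolution (cm).
    return np.sqrt(((z_pv - z_trk) / 0.01) ** 2 + ((r_trk - r_pv) / sigma_r) ** 2)

def alpha_3d(track_pt, track_DN_values, DN_max):
    # alpha_3D: scalar-pT fraction of the jet carried by tracks with D_N < DN_max.
    pt = np.asarray(track_pt)
    return pt[np.asarray(track_DN_values) < DN_max].sum() / pt.sum()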
We illustrate the ⟨IP_2D⟩ variable in figure <ref>. From the primary vertex located at (z,r) = (0,0) we consider that only a SM jet (which tags the vertex) and a long-lived dark pion emerge. The dark pion decays at (z_tr, r_tr) into three tracks 1, 2, 3, which for illustration purposes we consider as giving rise to only one jet. The track trajectories meet at the decay vertex: the tracks are drawn in black, and their prolongations in grey. We indicate with d_i the closest distance between the i-th track and the primary vertex, hence the r(z) component gives the transverse (longitudinal) impact parameter. In the figure, the median of the (d_i)_r corresponds to (d_2)_r, hence the jet originating from the dark pion has ⟨IP_2D⟩ = |(d_2)_r|.
Of the above variables, CMS presents results for ⟨IP_2D⟩ and α_3D, before the selection cuts (which we describe below). Regarding the additional two variables, D_N should be small for tracks originating from prompt particles and large for displaced tracks, while PU_dz is used for pile-up rejection. We note that both D_N and PU_dz enter these distributions through the definition of α_3D, which depends on D_N. Since CMS has not made explicit the D_N threshold employed in their figure 3, we have considered the three values employed to define the signal regions: 4, 10 and 20. Out of them, we have verified that the agreement is maximized for D_N=10, and hence D_N < 10 has been employed in the calculation shown below. As these last two variables are defined at the track level instead of at the jet level, we understand that kinematic distributions are not provided, which nonetheless would have provided an additional validation check for the proper reinterpretation of the search.
Two important effects ought to be included for a realistic attempt at the reproduction of the CMS results: the tracking reconstruction efficiency ϵ_ trk and the smearing of the impact parameters.
CMS reports the tracking efficiency dependence in terms of the p_T, η, and transverse vertex position (r) of the track <cit.>. From figure 8 of this article, one can see that for p_T > 1 GeV the tracking efficiency can be considered independent of p_T and, to a lesser extent, of η. Regarding r, the efficiency diminishes with the displacement distance, as can be seen from figure 12 of <cit.>, which is obtained from a t t̅ sample at √(s)=7 TeV. The figure shows the cumulative efficiency for each of the iterations (0-5) of the tracking algorithm. While this effect is less relevant for lifetimes of a few millimeters, it has an impact for the benchmark point with c τ_D = 25 mm and for larger lifetimes.
Nonetheless, since the tracking efficiency is a very complicated function that can only be reliably obtained with access to the full detector simulation (and detector information), we pursue four different parametrizations for ϵ_ trk(r)
* Use the reported value of Iteration 5 from figure 12 of <cit.>. [It_5]
* Use the reported value of Iteration 4 from figure 12 of <cit.>. [It_4]
* Consider that tracks with at least one hit in the inner detector are reconstructed with 100 % efficiency, and with 0 % efficiency otherwise: ϵ_ trk(r) = Θ(102 mm - r). [R]
* Consider ϵ_ trk(r) = 1, to illustrate the typical deviation obtained when no efficiency is considered. [A]
Regarding the impact parameter smearing, we note that for jets originating from SM quarks one would expect ⟨IP_2D⟩=0, if the majority of the tracks of the light jet are prompt. However, the value of zero is obviously fictitious once the transverse impact parameter has been smeared to account for reconstruction effects. While the smearing functions have a non-trivial dependence on the η and p_T of the corresponding track (the resolution σ_r, which we have taken from figures 14a and 15a of <cit.>), the typical resolution is about 50 μm, and hence the ⟨IP_2D⟩ distribution would peak around this value for SM light-quark jets.
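The two effects can be emulated with a few lines (a sketch of our implementation choices, not the experimental tracking code; the 102 mm threshold corresponds to the requirement of at least one hit in the inner tracker, and 50 μm is the typical d_0 resolution quoted above):

import numpy as np
rng = np.random.default_rng(0)

def eps_trk_R(r_mm):
    # 'R' criterion: keep a track only if its production vertex lies inside the
    # pixel volume (r < 102 mm), so that it can leave at least one hit.
    return np.where(np.asarray(r_mm) < 102.0, 1.0, 0.0)

def smear_d0(d0_mm, sigma_mm=0.05):
    # Gaussian smearing of the transverse impact parameter (~50 micron resolution).
    d0_mm = np.asarray(d0_mm, dtype=float)
    return d0_mm + rng.normal(0.0, sigma_mm, size=d0_mm.shape)

# Prompt tracks (d0 = 0) acquire |d0| of order the resolution after smearing,
# which is why <IP_2D> for light-quark jets peaks near 50 microns rather than at zero.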
We show our results for the ⟨IP_2D⟩ and α_3D variables in Figure <ref>, where we present the results for A as dashed red, and for It_5, It_4 and R in solid blue, orange and green. From the left panel we see that the naive A approach does not describe the ⟨IP_2D⟩ distribution as well as any of the other criteria, while the three other curves fit the signal distribution with reasonable accuracy. Moreover, we also see that the proper inclusion of the transverse impact parameter smearing is necessary to explain the ⟨IP_2D⟩ distribution for the QCD jets from the signal, which is displayed as dashed purple.
From the right panel we see that the α_3D distribution for the signal does not change much with the different criteria It_5, It_4, R and A. Since on this variable one only applies an α_3D < 0.25 cut (see Appendix <ref>), our attention is only on the proper reproduction of the first bins, and the mismatch at the tails is not relevant for us. Hence we delay the final judgement of which parametrization of the tracking efficiency to use to the next step of our validation: to reproduce the published A_i efficiencies.
The analysis defined eight different jet identification criteria on the four relevant variables to consider a jet as emerging. These criteria are supplemented by the requirement of a minimum of two EJs, or one EJ plus large missing transverse energy (MET), and by additional cuts on H_T and on the p_T of the four hardest jets. The combination of the EJ criteria and the additional cuts defines seven selection sets. The explicit requirements are collected in Appendix <ref>, together with the 95 % C.L. limit on the number of signal events in each selection set, S_95^i.
§.§ Reproducing efficiencies and exclusion limits
If our interest were to perform a reinterpretation of the emerging jets results in the context of the same model used by the collaboration (or one with a similar topology), then we could employ the reported acceptances A_i to derive the published limits, as we did in Section <ref>. We stress that our goal is to perform a flexible reinterpretation of this search, namely to employ it to derive limits on a model that the search has not considered. Hence, what we need is to fully validate our pipeline to compute the acceptance of the selection sets for the benchmark model used in the CMS study. We show in Figure <ref> the ratio of our computed acceptances over the published CMS results in the left panels (the color bar indicates the A_i value from CMS) and the obtained exclusion limits in the right panels, where we have employed the It_5
(upper row), It_4 (middle row) and R (lower row) parametrizations of the tracking efficiencies. We can see that the best agreement is obtained with the R parametrization, while the other two tend to overestimate the efficiencies. With the R parametrization we find agreement within 20-30 % for large masses, which degrades for lower masses and also for extreme lifetime values, where the overall acceptances are nonetheless at the per-mille level or lower. We also note that the R parametrization gives an acceptable exclusion limit, and hence we decide to adopt it for the rest of the article. We note that with more examples provided by CMS (or simply by providing the efficiencies in all signal regions) one could attempt a more complex parametrization of the efficiency.
We hence consider this search validated, and will proceed in the next section to derive bounds on the parameter space of exotic Higgs decays. Our analysis code that allows us to derive the exclusions has been uploaded to the LLP Recasting Repository <cit.>, making it publicly available to facilitate the reinterpretation of the emerging jets search for arbitrary models. Further instructions and the relevant documentation to run the code can be found in the Repository.
§ REINTERPRETATION FOR HIGGS MEDIATED DARK SHOWERS
When the SM Higgs h couples to the dark quarks the expected number of signal events reads
N_S^i (m_π_D, cτ_π_D) = σ^ proc_ SM× BR (h → q_D q_D) × A_i (m_H, m_π_D, cτ_π_D) × L ,
where now the only free physical parameters are the dark pion mass and its lifetime, and the exotic Higgs branching ratio into dark quarks.
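With the acceptances at hand, the 95% C.L. excluded branching ratio at a given (m_π_D, cτ_π_D) point follows by inverting this relation and summing over the Higgs production modes (a sketch with our own naming; cross sections are assumed to be given in fb):

import numpy as np

def br_excluded(sigma_fb, acc, s95, lumi_fb=16.1):
    # sigma_fb[mode]: Higgs production cross section; acc[i][mode]: acceptance of
    # selection set i for that mode; s95[i]: 95% CL limit on the signal yield.
    best = np.inf
    for i, s95_i in enumerate(s95):
        n_per_br = lumi_fb * sum(sigma_fb[m] * acc[i][m] for m in sigma_fb)
        if n_per_br > 0.0:
            best = min(best, s95_i / n_per_br)
    return min(best, 1.0)   # branching ratios above this value are excluded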
To further define our framework, we need to select a decay portal for our dark mesons. We follow here the proposal of reference <cit.> and consider the gluon, vector, Higgs and dark-photon portals.[It is obvious that the photon portal, where π_D →γγ, does not pass the emerging jet selection cuts. Hence this decay portal is not further considered here.] We have verified our implementation of these decay portal benchmarks by reproducing the dark meson multiplicities from reference <cit.>.[We are indebted to Simon Knapen for useful communication during the validation phase.] We start by analyzing the acceptance A_i as a function of the dark pion lifetime and mass, for the gluon decay portal and for all five considered Higgs production mechanisms, which we show in figure <ref>. It is worth mentioning here that we do expect the EJ search not to be optimal for Higgs decays into dark quarks, as it originally targets two quarks and two dark quarks, while the Higgs decay would only give two dark quarks at parton level. However, since we are keeping the ρ_D →π_D π_D channel open, and since there is additional radiation from the initial state gluons and from the decay portals themselves, we still obtain acceptances in the 10^-4 range, which can suffice to obtain an exclusion given that, with 16.1 fb^-1, O (10^6 ) Higgs bosons would be produced at the 13 TeV LHC via gluon fusion.
From the figure we can see that the dependence on cτ_π_D is non-trivial, exhibiting a maximum around 10 mm, while the dependence on m_π_D is quite flat, except for the heavier masses of 20-30 GeV: those dark pions receive a reduced boost from the Higgs compared to lighter ones. It is intriguing to see that, owing to the additional radiation, ttH production has a higher acceptance, about an order of magnitude larger than gluon fusion, and about a factor of five larger than associated production with a vector boson. We note that vector boson fusion has the lowest acceptance, and this is due to the fact that the additional radiation in VBF goes in the forward direction, while the EJ analysis focuses on central jets. We stress that while we only show the gluon portal decay benchmark here, all the other portal decay models show an analogous behaviour.
The picture changes slightly once the production cross sections for each mechanism are considered, as shown in figure <ref>. Here we multiply the maximum acceptance by the production cross section of each mechanism and by the total luminosity of the emerging jet search (16.1 fb^-1). Hence the y-axis directly displays the expected number of events for each production mode. We have added the overall number of events obtained by summing over all production modes as a dashed-brown line. We see now that, owing to the larger cross section of the GF mechanism (two orders of magnitude over ttH, factors of 15-25 over the modes involving gauge bosons), its overall number of events, A σ L, is larger by an order of magnitude than for the other modes. We also see that including all production modes instead of only gluon fusion adds about 20 % to the total number of events. In view of our findings we will from now on focus only on the dependence of our results on the lifetime for a dark pion mass of 5 GeV, and we will include all Higgs production modes in our study.
We now study the sensitivity for the different decay portals considered in Ref. <cit.>. To that end we present in figure <ref> the efficiencies as a function of the dark pion lifetime, for a dark pion mass of 5 GeV. In order to obtain reliable estimates for these acceptances, we have simulated 10^7 Monte Carlo events per parameter space point.
Of the possible decay portals, we find that the sensitivity is larger (and similar) for the dark photon and gluon portals, and lower (and similar) for the vector and Higgs portals. In what follows we will therefore select the gluon (G) and Higgs (H) decay portals, as they correspond to the extreme values of the efficiencies among the four portal scenarios considered. These two decay portals correspond to the following operators
π_D G^μνG̃_μν (G), π_D H^† H (H) .
In the gluon portal one expects a shower enriched in SM hadrons produced from the gluons, while in the Higgs portal the decays follow a Yukawa-like structure, and one can expect a shower enriched in heavy-flavour quarks.
Using the acceptance from figure <ref>, we show in figure <ref> the excluded exotic Higgs branching ratio as a function of the lifetime, for a dark pion mass of 5 GeV. The solid line uses the existing dataset of the EJ search (16.1 fb^-1). For comparison we show the ATLAS limit of 0.21, which was obtained with an 8 times larger dataset (139 fb^-1), shown in dashed red. For a fair comparison we rescale our EJ limit to this luminosity (dashed lines), assuming that the uncertainty is dominated by the statistical error, which, given the event counts in the different signal regions and the reported systematic errors, is a good approximation. We also include constraints from prompt searches using CheckMATE2 for prompt <cit.> and long-lived searches <cit.>, shown in dashed grey (green) for the gluon (Higgs) portal. These constraints come from prompt ATLAS searches including missing energy, more concretely from <cit.>, using 139 fb^-1 of data: as the lifetime of the dark pions becomes large, many of them appear as missing transverse energy. We note that our benchmark choice of a 5 GeV dark pion mass corresponds to a challenging phase space due to the softness of the decay products; for heavier dark pion masses constraints from other searches would clearly apply. One prominent example would be the
Zh production, with h decaying through light scalars or pseudoscalars into displaced jets <cit.>, which report meaningful bounds on the exotic branching fraction for pseudoscalar masses as light as 15 GeV. Finally, we include an estimate of the HL-LHC reach, both for the BSM Higgs branching ratio (taken from <cit.>) and for the statistics-dominated approximation of the EJ search. It is worth stressing that the HL-LHC run will not be similar to the current LHC setup in terms of capabilities to deal with long-lived particles, significantly improving in many aspects, hence these limits must be taken with a grain of salt (and the assumption of statistical dominance might not be fully justified).
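For concreteness, a minimal sketch of the statistics-dominated rescaling assumed above: when the limit is dominated by the statistical uncertainty, the branching-ratio limit scales as sqrt(L_0/L_1) when extrapolating from luminosity L_0 to L_1. The numbers below are illustrative, not the actual limits of the analysis.

# Naive statistics-dominated rescaling of a branching-ratio limit.
import math

def rescale_br_limit(br_limit_L0, L0, L1):
    # BR_limit ~ sqrt(L0 / L1) under statistical dominance
    return br_limit_L0 * math.sqrt(L0 / L1)

print(rescale_br_limit(0.30, 16.1, 139.0))    # extrapolation to the full Run-2 dataset
print(rescale_br_limit(0.30, 16.1, 3000.0))   # extrapolation to an HL-LHC-like dataset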
From the figure we see that the EJ search can obtain better bounds than the model-independent BSM branching fraction for lifetimes in the 5-60 (7-50) mm range when using the combined production (gluon-fusion production only) for the Higgs portal decay benchmark. For the sake of illustration we now describe the bounds on this model, but the behaviour is similar for the other decay portals. Prompt searches with missing energy can constrain large lifetimes (c τ≳ 400 mm). For clarity we have refrained from showing HL-LHC extrapolations from CheckMATE, but they would only be more sensitive than the BSM Higgs study for lifetimes in the 300-700 mm range, with the exact value depending on the final HL-LHC limit. Nonetheless, the phenomenological picture is similar to the one with the current dataset: for low lifetimes the BSM Higgs limit dominates, in an intermediate regime the EJ reinterpretation takes over, and for longer lifetimes the BSM Higgs searches become more sensitive again, with missing energy searches becoming relevant for long lifetimes. As stressed before, exotic Higgs decays are not a target of the EJ analysis, and hence it would be interesting to consider the use of emerging jet taggers in other production modes. We leave this option for future work.
§ CONCLUSIONS AND OUTLOOK
In this work we have performed detailed studies focused on the reinterpretation of the CMS emerging jet search. This signature belongs to the class of signatures collectively dubbed “dark showers”, which stem from a strongly-interacting dark (secluded) sector. In this dark sector new matter (and gauge) fields are added, which are assumed to hadronize, as in the SM strong sector. In particular, emerging jets correspond to the case where the dark sector mesons have macroscopically appreciable decay lengths, which also places these final states in the class of exotic phenomena dubbed “long-lived particles” (LLPs).
Our reinterpretation procedure has been validated by carefully following the CMS study. We have obtained good agreement with the published distributions of the relevant discriminating variables, and have also reproduced the publicly available efficiencies for the benchmark model employed in the search. We have reproduced the published exclusion limits through two different routes, one employing directly the CMS published efficiencies and another one computing the efficiencies ourselves through our own Monte Carlo simulation. Here there is a large uncertainty in the exact parametrization of the tracking efficiency. We have attempted a few different parametrizations, and employed the one that, while possibly over-simplified, reproduces the published efficiencies (and exclusion limits) with reasonable accuracy.
We would like to stress that while the relevant information of the CMS study was publicly available and clearly explained, getting in contact with the authors of the experimental study was nonetheless needed in order to comprehend a few crucial details. Their response has been instrumental in understanding details concerning the track efficiency and the impact parameter smearing used in the study. Since it would be desirable that a reinterpretation of an experimental study can be done without this contact (as the main authors of a given analysis might not always remain part of the collaboration), we also took the opportunity to comment in the text on which aspects required clarification, and which additional material would have helped us to carry out the reinterpretation.
Using our validated pipeline, we have focused on the exploration of a SM Higgs boson decaying into two dark quarks (fermions charged solely under the new strong sector, akin to the SM quarks). To that end, we have considered the inclusive production of the Standard Model Higgs from gluon fusion, Higgs-strahlung, vector-boson fusion and associated production with a t t̅ pair, and analyzed four decay benchmark portal models proposed in <cit.>, dubbed gluon, dark photon, Higgs and vector portals. We have found that, while the efficiencies for Higgs production lie in the 10^-3-10^-5 range, owing to the large production cross section we can obtain meaningful bounds in the relevant parameter space, which are competitive with the current exclusion on the undetected Higgs branching ratio of 16 %, set by the ATLAS and CMS collaborations. We have checked, with the help of CheckMATE, that the existing prompt searches can bring meaningful bounds only in the large lifetime regime, c τ≳ O (100 mm). We have also considered the existing HL-LHC extrapolations for the undetected Higgs branching ratio, and compared them with a similar naive extrapolation of the emerging jet search sensitivity (assuming only statistical uncertainties are present). Yet, it is expected that the HL-LHC will have a number of improvements to detect long-lived particles, which could render the final projections better than our naive extrapolations.
As a byproduct of our analysis, we have made our Pythia 8 analysis code publicly available in the LLP Recasting Repository <cit.>, which can be used to compute the experimental acceptance (and the exclusion limits) for arbitrary BSM models, provided they are implemented in Pythia 8.
We would like to stress that the exotic Higgs decay exclusion from <cit.> is an indirect bound, based on a global fit to the observed Higgs properties. Hence, if a signal is detected, its characterization would require an independent study. In contrast, if the emerging jet search starts seeing an excess, one can already infer that a new long-lived object is being produced from a Higgs boson decay, information that is crucial for the proper characterization of a putative BSM signal.
We end by noting that the EJ requirement of having four hard jets does not precisely target the exotic decays of a SM Higgs boson. In spite of the analysis not being optimal, we see that we can exclude exotic branching ratios of 30 % in the gluon and dark photon decay portals, which can go down to the percent level at the HL-LHC. Therefore, it might be worthwhile to explore EJ searches that focus on dark quark decays from a SM Higgs boson (or from a new scalar), which could have higher sensitivity than the model-independent search for undetected Higgs branching ratios.
§.§ Acknowledgements
We would like to thank Juliette Alimena, Nishita Desai, Alberto Escalante del Valle, Simon Knapen, Emmanuel Francois Perez and Pedro Schawaller for useful discussions, and Baibhab Pattnaik for a careful reading of the manuscript. We are indebted to the authors of the CMS emerging jet analysis: Alberto Belloni, Yi-Mu Chen, Sarah Eno and Long Wang for their patience to answer our questions about technical details in their study.
JC and JZ are supported by the Generalitat Valenciana (Spain) through the plan GenT program (CIDEGENT/2019/068), by the Spanish Government (Agencia Estatal de Investigación) and ERDF funds from the European Commission (MCIN/AEI/10.13039/501100011033, Grant No. PID2020-114473GB-I00).
§ CMS EMERGING JETS
In this Appendix we include additional material from our CMS validation described in <ref>.
The CMS collaboration employs four variables to identify emerging jets in both signal and background regions, which have been defined in section <ref>. For the six signal regions, the specific requirements on each of these variables are shown in Table <ref>.
Based on these requirements, CMS further defines signal regions (called “sets” in the CMS paper), where a given EMJ criterion is accompanied by a set of cuts on the jets, requiring either two emerging jets, or one emerging jet plus large missing transverse energy. The event yield excluded in each signal region at the 95 % C.L. by CMS is shown in the rightmost column, S_95.
The information from these tables has been included in the companion code uploaded to the LLP Recasting Repository <cit.>. We have also collected there the details on the different tracking efficiency parametrization employed in this work.
|
http://arxiv.org/abs/2307.04962v2 | 20230711015208 | Intrinsically motivated graph exploration using network theories of human curiosity | [
"Shubhankar P. Patankar",
"Mathieu Ouellet",
"Juan Cervino",
"Alejandro Ribeiro",
"Kieran A. Murphy",
"Dani S. Bassett"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.SI"
] |
Intrinsically motivated exploration has proven useful for reinforcement learning, even without additional extrinsic rewards.
When the environment is naturally represented as a graph, how to guide exploration best remains an open question.
In this work, we propose a novel approach for exploring graph-structured data motivated by two theories of human curiosity: the information gap theory and the compression progress theory.
The theories view curiosity as an intrinsic motivation to optimize for topological features of subgraphs induced by the visited nodes in the environment.
We use these proposed features as rewards for graph neural-network-based reinforcement learning.
On multiple classes of synthetically generated graphs, we find that trained agents generalize to larger environments and to longer exploratory walks than are seen during training.
Our method computes more efficiently than the greedy evaluation of the relevant topological properties.
The proposed intrinsic motivations bear particular relevance for recommender systems.
We demonstrate that curiosity-based recommendations are more predictive of human behavior than PageRank centrality for several real-world graph datasets, including MovieLens, Amazon Books, and Wikispeedia.
*Authors contributed equally.
§ INTRODUCTION
Providing a task-agnostic incentive for exploration as an intrinsic reward has proven useful in a variety of reinforcement learning settings, even in the absence of any task-specific (extrinsic) rewards <cit.>.
Termed curiosity in reference to the analogous drive in humans, prior formulations are based on different means of quantifying the novelty or surprisal of states encountered by an agent <cit.>.
If states are represented as graphs, the task-agnostic motivation to explore can additionally be content-agnostic, depending only on the topological properties of the visited state subgraph.
Leading theories of curiosity in humans are similarly content-agnostic, based on structural properties of a relational graph that connects atoms of knowledge without regard to their actual content <cit.>.
Theories of curiosity attempt to describe the intrinsic motivations that underlie human decision-making when acquiring information through exploration.
The information gap theory (IGT) argues that curiosity collects knowledge that regulates gaps in our understanding of the world <cit.>.
Exposure to a small amount of novel information pushes an individual's uncertainty about the environment past an acceptable threshold, creating an information gap.
Curious agents are driven to resolve the discrepancy by acquiring information to close the gap <cit.>.
An alternative account, the compression progress theory (CPT), posits that information-seeking behavior is motivated to build increasingly compressible state representations <cit.>.
Compression enables abstraction and improved generalization by emphasizing the essential latent structures of knowledge <cit.>.
Information gap theory and compression progress theory provide optimization objectives for the human exploration of graph-structured environments.
In this work, we demonstrate that network theoretic measurements of information gaps and compression progress can be meaningful exploration incentives for reinforcement learning (RL).
We train agents that use graph neural networks (GNN) to explore graph-structured data while optimizing for gap creation and improved compression (Figure <ref>).
Once trained, the agents navigate network structures to optimize certain topological features without regard to the content of the network.
The agents can be used to modify statistics that are based on random walk processes on graphs.
As an example, we use data of humans traversing spaces with natural graph structure—books and movies to review or Wikipedia pages to visit—to compute node centrality measures that best align with human navigation data.
Our primary contributions are the following:
* We adapt intrinsic motivations for human curiosity as reward functions for reinforcement learning.
* We replace expensive reward computations with graph neural networks. Our method is computationally efficient and generalizes to shorter and longer exploratory walks and to smaller and larger environments than are seen during training.
* We demonstrate that modifying measures of node centrality with curiosity-trained agents increases alignment with human behavior in real-world graph datasets without using any domain-specific feature information.
§ RELATED WORK
Human curiosity as graph exploration. Curiosity in humans is commonly conceptualized as the intrinsic motivation to gather information from the environment <cit.>.
Humans acquire information even when it is expensive to obtain <cit.> and may have no immediate tangible utility <cit.>, suggesting that exploration is inherently valuable.
Recent work has expanded the acquisitional framing of curiosity with a more general connectional account.
This perspective defines curiosity as an exploratory walk on a graph.
Here, curiosity entails building a growing knowledge network by acquiring informational units as nodes and their relationships as edges <cit.>.
The state of an individual's knowledge is viewed as the subgraph of the environment induced by the visited nodes <cit.>.
Under this formulation, humans explore Wikipedia via trajectories with fewer information gaps and greater network compressibility than relevant null models <cit.>.
Intrinsic motivations in reinforcement learning.
The need for improved exploration has led reinforcement learning to incorporate curiosity-like intrinsic motivations into its algorithmic framework <cit.>.
Exploration rewards in RL take several forms.
At the core of all approaches is an inducement for the learning agent to seek novelty.
Count-based approaches encourage visits to unfamiliar or infrequently visited states <cit.>.
When the state space is large, enumerating the frequencies of visits to all possible states is prohibitively expensive.
To overcome this challenge, neural density models derive uncertainty-based pseudo-counts <cit.>.
A complementary perspective emphasizes model building and formulates curiosity rewards in terms of learning progress and surprisal <cit.>.
For instance, in the prediction error approach—alongside an extrinsic task—the agent attempts to learn a model of the environment's dynamics.
Curiosity rewards are proportional to the model error when predicting transitions between states.
Memory-based methods assign rewards based on how different a newly visited state is from those stored in memory <cit.>.
Instead of a prescriptive approach, parametric methods attempt to explicitly learn an intrinsic reward function <cit.>.
In general, improved exploration is a means to an end, with intrinsic rewards supplementing extrinsic task-specific rewards.
Graph combinatorial optimization and reinforcement learning. Combinatorial optimization entails selecting elements from a finite set of options such that the chosen subset satisfies an objective function <cit.>.
Graph analyses often involve combinatorial optimization, with graph structure imposing constraints on the solution space.
Recent work combines graph neural networks and reinforcement learning to construct solutions by incrementally adding nodes to a partial set <cit.>.
First, a GNN constructs an embedding for the candidate solution; second, an agent, for instance, a deep Q-network (DQN), trained via RL, selects an action to expand the solution <cit.>.
The two networks can be trained end-to-end with an optimization objective driving gradients for learning.
This approach solves various graph combinatorial tasks, such as the traveling salesperson problem <cit.>, finding the maximum independent set <cit.>, or the minimum vertex cover <cit.>, and identifying isomorphic subgraphs <cit.>.
Instead of uncovering nodes, GNNs can also sequentially collapse nodes into each other with implications for matrix multiplication <cit.>.
GNNs, in combination with RL, have also been used to build and rewire graphs such that they possess high values of specific features of interest <cit.>.
§ METHODS
Our goal is to train an agent to explore the environment while optimizing for a structural property of the visited subgraph.
Consider a graph-structured environment 𝒢 = (𝒱, ℰ) with node set 𝒱 and edge set ℰ⊆𝒱×𝒱.
Let 𝒱_T = {v_1, v_2, ⋯, v_T}⊆𝒱 be an ordered set of explored nodes at time T.
The corresponding subgraph trajectory is the sequence 𝒮_1⊂𝒮_2⊂⋯⊂𝒮_T, wherein the t-th subgraph 𝒮_t is induced by the first t visited nodes.
Specifically, given the graph 𝒢, the number of nodes to visit T, a graph feature function ℱ: 2^𝒢→ℝ, and a discount factor γ∈ [0,1], we seek an ordered set 𝒱^*_T such that ∑_t=1^Tγ^t-1ℱ(𝒮_t) is maximal.
The feature function acts as an intrinsic reward to encourage exploration.
The discounting parameter determines the extent to which the future values of ℱ factor into the decision-making at every step.
Drawing inspiration from human curiosity, we adopt information gap theory and compression progress theory to design two functions, ℱ_IGT and ℱ_CPT.
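To make the objective concrete, here is a minimal Python sketch (using networkx, with the feature function ℱ left abstract) of the discounted exploration return and of a one-step greedy explorer used later as a baseline; it is an illustration of the objective, not the training procedure itself.

# Sketch: discounted exploration objective and a one-step greedy baseline.
# `feature_fn` stands for F (e.g. the IGT or CPT reward defined below).
import networkx as nx

def discounted_return(G, visited, feature_fn, gamma=0.99):
    total = 0.0
    for t in range(1, len(visited) + 1):
        total += gamma ** (t - 1) * feature_fn(G.subgraph(visited[:t]))
    return total

def greedy_explore(G, start, T, feature_fn):
    visited = [start]
    for _ in range(T - 1):
        frontier = set(G.neighbors(visited[-1])) - set(visited)
        if not frontier:   # fall back to all neighbours of the explored subgraph
            frontier = {u for v in visited for u in G.neighbors(v)} - set(visited)
        visited.append(max(frontier,
                           key=lambda u: feature_fn(G.subgraph(visited + [u]))))
    return visited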
§.§ Network theories of curiosity
Information gap theory views curiosity as an intrinsic motivation to regulate gaps in knowledge.
For humans, new information pushes the level of uncertainty about the environment past an acceptable threshold, creating an uncertainty gap.
Curiosity seeks to find information units to close this gap.
By modeling the state of knowledge as a graph, we can characterize information gaps as topological cavities.
In a graph, cavities can take several forms: dimension 0 cavities represent disconnected network components, whereas those of dimension 1 represent non-triangular loops of edges (Figure <ref>A).
In order to identify and count topological cavities, a graph is first converted into a higher-order relational object known as a simplicial complex <cit.>.
A simplicial complex is comprised of simplices.
Geometrically, a d-simplex is a shape with flat sides formed by connecting d+1 points.
For 0 ≤ d ≤ 2, by definition a node is a 0-simplex, an edge is a 1-simplex, and a filled triangle is a 2-simplex.
We can construct a simplicial complex by assigning a d-simplex to each (d+1)-clique in a binary graph.
In a simplicial complex, a d-dimensional topological cavity is identified as an enclosure formed by d-simplices that a higher-dimensional simplex cannot fill.
We refer the reader to Refs. <cit.> for a more comprehensive treatment of algebraic topology.
Given a simplicial complex, the d-th Betti number β_d counts the number of topological gaps of dimension d.
Prior work examining human knowledge-network-building finds compelling evidence in support of information gap theory when gaps are conceptualized as 1-dimensional cavities <cit.>.
In this work, at each time step t with a visited subgraph 𝒮_t, we assign rewards equal to β_1,
ℱ_IGT = β_1(𝒮_t).
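A minimal sketch of this reward — the first Betti number of the clique complex of the visited subgraph, computed with real coefficients from the ranks of the first two boundary maps — is given below; it is written for clarity rather than efficiency.

# Sketch: beta_1 of the clique complex of the visited subgraph (dims <= 2),
# used as the information-gap (IGT) reward. Real coefficients.
import numpy as np
from itertools import combinations

def igt_reward(S):
    nodes = list(S.nodes()); idx = {v: i for i, v in enumerate(nodes)}
    edges = [tuple(sorted((idx[u], idx[v]))) for u, v in S.edges()]
    if not edges:
        return 0
    eidx = {e: i for i, e in enumerate(edges)}
    triangles = [t for t in combinations(range(len(nodes)), 3)
                 if all(tuple(sorted(p)) in eidx for p in combinations(t, 2))]
    d1 = np.zeros((len(nodes), len(edges)))        # boundary map edges -> nodes
    for j, (u, v) in enumerate(edges):
        d1[u, j], d1[v, j] = -1.0, 1.0
    d2 = np.zeros((len(edges), max(len(triangles), 1)))   # triangles -> edges
    for k, (a, b, c) in enumerate(triangles):
        d2[eidx[(a, b)], k], d2[eidx[(a, c)], k], d2[eidx[(b, c)], k] = 1.0, -1.0, 1.0
    r1 = np.linalg.matrix_rank(d1)
    r2 = np.linalg.matrix_rank(d2) if triangles else 0
    return len(edges) - r1 - r2    # beta_1 = dim ker d1 - rank d2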
Compression progress theory posits that curiosity is a drive to compress the state of knowledge <cit.>.
During graph exploration, at each step t in the trajectory, the compression reward can be assigned as network compressibility <cit.>.
Consider a subgraph 𝒮_t with t nodes and q edges, represented by a symmetric adjacency matrix M ∈^t × t.
Information about the subgraph's structure can be encoded in the form of a random walk x = (x_1, x_2, … ).
The walk sequence is generated by randomly transitioning from a node to one of its neighbors.
Thus, for a random walk on 𝒮_t, the probability of transitioning from node i to node j is P_ij = M_ij/∑_jM_ij.
Since the walk is Markovian, its information content (or its entropy) is given by H = -∑_iπ_i∑_j P_i jlog P_i j.
Here, π_i is the stationary distribution representing the long-term probability that the walk arrives at node i, given by π_i = ∑_jM_ij/2q.
Assigning nodes to clusters leads to a coarse-grained sequence y = (y_1, y_2, … ).
The number of clusters n can be used to define a scale of the network's description s = 1 - (n-1)/t.
For example, when n = t, the network is described at a fine-grained scale s = 1/t; at the other extreme, when n = 1 the network is described at the coarsest scale s = 1.
At every description scale in between, it is possible to identify a clustering of nodes that minimizes the information rate (Figure <ref>B).
After computing these optimal clusterings across all scales, we arrive at a rate-distortion curve R(s), representing a bound on the information rate as a function of the scale s.
The compressibility C of the network is then given as the average reduction in the information rate across all scales <cit.>, C = H - 1/t∑_s R(s).
Therefore, the compression reward is
ℱ_CPT = C(𝒮_t),
where C(𝒮_t) denotes the compressibility of subgraph 𝒮_t.
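The full compressibility computation requires finding the optimal clustering at every description scale; the sketch below only implements the random-walk entropy H and the stationary distribution entering the definition, and treats the rate-distortion curve R(s) as an assumed helper supplied from elsewhere.

# Sketch: random-walk entropy of the visited subgraph, the quantity against
# which the rate-distortion curve R(s) is measured in the compressibility.
# Assumes the visited subgraph has no isolated nodes (all degrees > 0).
import numpy as np
import networkx as nx

def walk_entropy(S):
    M = nx.to_numpy_array(S)
    deg = M.sum(axis=1)
    P = M / deg[:, None]             # transition matrix P_ij
    pi = deg / deg.sum()             # stationary distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log2(P), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

def cpt_reward(S, rate_curve):
    # rate_curve: assumed helper returning R(s) at the t sampled scales;
    # the clustering optimization producing it is omitted here.
    return walk_entropy(S) - float(np.mean(rate_curve(S)))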
§.§ Reinforcement learning for graph exploration
We formulate the graph exploration problem as a Markov decision process (MDP) <cit.>:
* States: The state is defined as the subgraph induced by the visited nodes at time t, 𝒮_t = 𝒢[𝒱_t]. We specify the initial state 𝒮_1 by randomly selecting a starting node v_1 ∈𝒱. Each state represents a partial solution to the broader sequential exploration task.
* Actions: The agent can transition to any neighbor of the most recently visited node.
We denote the neighborhood of a node v as 𝒩(v) = {u ∈𝒱|(v, u) ∈ℰ}. Therefore, given the state at time t, the set of available next nodes is 𝒜(𝒮_t) = 𝒩(v_t)\𝒱_t.
If no nodes are available in the immediate neighborhood, we expand the action set to include all neighbors of the explored subgraph.
* Transitions: Given the pair 𝒮_t and v ∈𝒜(𝒮_t), the transition to state 𝒮_t+1 is deterministic with P(S_t+1| S_t, v) = 1.
* Rewards: The reward at time t is defined as R_t = ℱ(𝒮_t).
We train RL agents using either ℱ_IGT or ℱ_CPT as the reward function.
The policy π(v |𝒮_t) maps states to actions, fully describing the agent's behavior in the environment.
At each step, the agent makes decisions using a value function Q(𝒮_t, v), which evaluates candidate nodes v ∈𝒜(𝒮_t) in the context of the currently explored subgraph.
The function measures the total (discounted) reward that is expected to accumulate if the agent selects action v in state 𝒮_t and thereafter follows policy π.
In turn, the policy can be viewed as behaving greedily with respect to the value function, π = max _v ∈𝒜(𝒮_t) Q(𝒮_t, v).
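A minimal sketch of this MDP as an environment object; the candidate-set fallback follows the rule stated above, and the reward is the feature value of the enlarged subgraph.

# Sketch of the exploration MDP: the state is the visited subgraph, actions are
# unvisited neighbours of the last node, the reward is F of the new subgraph.
import networkx as nx

class GraphExplorationEnv:
    def __init__(self, G, feature_fn):
        self.G, self.F = G, feature_fn

    def reset(self, start):
        self.visited = [start]
        return self.G.subgraph(self.visited)

    def actions(self):
        frontier = set(self.G.neighbors(self.visited[-1])) - set(self.visited)
        if not frontier:   # expand to all neighbours of the explored subgraph
            frontier = {u for v in self.visited
                        for u in self.G.neighbors(v)} - set(self.visited)
        return frontier

    def step(self, v):
        self.visited.append(v)
        S = self.G.subgraph(self.visited)
        return S, self.F(S)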
Solving an MDP entails finding an optimal policy that maximizes the expected discounted sum of rewards.
We parameterize the value function Q using a GNN Φ(·):𝒢→ℝ.
GNNs build vector embeddings for nodes by iteratively aggregating their features with those from their local neighborhoods <cit.>.
Each aggregation step is typically followed by a fully connected layer and a non-linear activation function.
Depending on the number of rounds of aggregation, features from more distant locations in the graph can inform the embedding for each node.
Specifically, we use the GraphSAGE architecture <cit.>, where at the l-th round of feature aggregation, the embedding for node u is given as,
h_u^(l)=f^(l)(h_u^(l-1), h_𝒩(u)^(l-1))=g[θ_C^(l) h_u^(l-1)+θ_A^(l)Ã(h_𝒩(u)^(l-1))],
where à represents the aggregation operator, g[.] is the activation function, and θ_C and θ_A are parameters for combination and aggregation, respectively <cit.>.
We use the local degree profile (LDP) of each node as the initial set of features <cit.>.
LDP comprises various features of a node's neighborhood, including its own degree, the minimum and maximum degrees of its neighbors, and the average and standard deviation of the degrees of its neighbors.
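A small sketch of such a value network in PyTorch Geometric — not the authors' exact architecture — using the LocalDegreeProfile transform for the initial features and two SAGEConv layers, with a linear read-out producing one Q-value per node.

# Sketch (illustrative architecture, not the one used in the paper).
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv
from torch_geometric.transforms import LocalDegreeProfile  # appends 5 LDP features

class QNet(torch.nn.Module):
    def __init__(self, in_dim=5, hidden=64):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden)
        self.conv2 = SAGEConv(hidden, hidden)
        self.score = torch.nn.Linear(hidden, 1)

    def forward(self, data):
        x = F.relu(self.conv1(data.x, data.edge_index))
        x = F.relu(self.conv2(x, data.edge_index))
        return self.score(x).squeeze(-1)   # one Q-value per node

# Assumed usage: data = LocalDegreeProfile()(data) on a Data object with
# edge_index and num_nodes set, so that data.x holds the 5 LDP features.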
We train GNNs for exploration using the DQN algorithm, with a replay buffer for experience sampling, a target network, and a decaying ϵ-greedy exploration rate <cit.>.
Details of the full neural network architecture and the training process are included in the Supplementary Materials.
§.§ Curiosity-biased node centrality
Several graph theoretic quantities can be defined in terms of random walk processes on a graph.
We can use agents trained to explore graphs to bias random walk processes and, by extension, the corresponding quantities.
PageRank is a widely recognized algorithm that assigns node centrality scores to graph data <cit.>.
The per-node score η can be interpreted as the stationary distribution of a random walk process on a network.
With probability α, a random walker moves along an edge from node v_i to one of its neighbors.
The probability of reaching a connected node v_j is P_ij.
Alternatively, with probability 1-α, the walker jumps, or teleports, to a random node in the network.
The probability of jumping to node v_k is q_k.
Under conditions of irreducibility and aperiodicity <cit.>, the stationary distribution is given as
∑_i (I-α P_ij^t)η_i = (1-α)q_j.
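A minimal power-iteration sketch of this personalized PageRank equation, η = α P^T η + (1-α) q, with P a row-stochastic transition matrix and q the personalization (jump) vector; this is a generic implementation, not the exact one used for the experiments below.

# Sketch: personalized PageRank by power iteration.
import numpy as np

def pagerank(P, q, alpha=0.85, tol=1e-10, max_iter=1000):
    eta = np.ones(len(q)) / len(q)
    for _ in range(max_iter):
        new = alpha * P.T @ eta + (1.0 - alpha) * q
        if np.abs(new - eta).sum() < tol:
            break
        eta = new
    return eta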
The PageRank algorithm follows a random walk that is entirely Markovian.
Typically, the probability P_ij depends solely on the out-degree of v_i and, in the case of node-weighting, on the personalization vector q.
Personalized PageRank biases the random walk process using q_k by taking into account nodes that are already visited in the network <cit.>.
We can integrate agents trained to optimize for the exploration objectives described earlier into the PageRank algorithm.
Specifically, given an already visited subgraph, we propose to modify transition probabilities using Q-values assigned to candidate nodes.
Consider a non-Markovian random walker sitting at node v_l with a path history V_l= { v_1, ⋯,v_l-1,v_l}.
The visited nodes in the path induce a corresponding subgraph 𝒮_l.
Paths are built starting from the most recent initialization or teleportation event.
We use a Q-value function trained to optimize for an objective ℱ to bias the walker.
The transition probability from node v_l to node v_m can be re-defined as,
P^ℱ_lm(𝒮_l) ≡ (1-p_g) p_g^(rank(Q(𝒮_l, v_m))-1) / (1-p_g^|𝒜(𝒮_l)|) for v_m ∈𝒜(𝒮_l), and 0 otherwise,
where rank(Q(𝒮_l, v_m)) is the rank of the Q-value for action v_m and p_g∈ [ 0,1] is a parameter that controls how likely the walker is to select actions greedily.
To compute biased per-node PageRank values, we simulate a walker using P^ℱ_ij(𝒮_i) until probabilities converge.
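A sketch of the rank-biased transition probabilities defined above, taking the Q-values of the candidate nodes and returning a normalized distribution; rank 1 is the largest Q-value, and 0 ≤ p_g < 1 is assumed.

# Sketch: rank-biased transition probabilities over the candidate nodes.
import numpy as np

def biased_transitions(q_values, p_g=0.5):
    order = np.argsort(-np.asarray(q_values, dtype=float))   # descending Q-values
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(order) + 1)              # rank 1 = best action
    probs = (1.0 - p_g) * p_g ** (ranks - 1) / (1.0 - p_g ** len(ranks))
    return probs                                             # sums to 1 over candidates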
§ EXPERIMENTS
§.§ Exploration in synthetically generated networks
We train a curiosity-based GNN agent to explore synthetically generated graph environments.
Each environment is constructed to have N = 50 nodes.
Each episode lasts for 10 steps and, therefore, consists of visits to 10 distinct nodes.
We examine four synthetic graph models that exhibit a broad range of degree profiles and topologies <cit.>:
* Erdös-Rényi (ER): The ER model produces random graphs by adding edges between nodes with probability p. We set p = 0.2.
* Barabási-Albert (BA): Starting with a randomly connected skeleton of m nodes, the BA model, also known as the preferential attachment model, adds nodes sequentially.
Each new node is connected to m existing nodes with a probability proportional to node degree.
This “rich-gets-richer” growth scheme results in graphs with heavy-tailed degree distributions.
We set m = 4.
* Random geometric: Graph-structured environments, such as transportation networks or power grids, are embedded in physical space. Random geometric graphs model such environments by placing nodes within a unit cube of specified dimensionality.
The model places nodes uniformly at random inside the cube.
An edge connects a pair of nodes if the distance between the nodes is less than or equal to a radius value.
For a 2-dimensional space, we set the radius value to 0.25.
* Watts-Strogatz (WS): Many real-world networks possess a “small-world” topology, whereby distant nodes can be reached by a small number of hops from any node in the graph.
The WS model creates graphs with a small-world topology by creating a ring graph and adding edges from each node to its k nearest neighbors.
Each edge is then rewired at random with probability p.
We set k = 4 and p = 0.1.
For each of the four graph models, we build 100 training, 10 validation, and 10 testing environments.
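The four graph families with the parameters quoted above can be generated directly with networkx; the sketch below builds the training ensembles, with the seed handling being illustrative rather than the exact splitting used in the experiments.

# Sketch: synthetic graph environments with the stated parameters.
import networkx as nx

def make_env(kind, n=50, seed=0):
    if kind == "er":
        return nx.erdos_renyi_graph(n, p=0.2, seed=seed)
    if kind == "ba":
        return nx.barabasi_albert_graph(n, m=4, seed=seed)
    if kind == "rgg":
        return nx.random_geometric_graph(n, radius=0.25, dim=2, seed=seed)
    if kind == "ws":
        return nx.watts_strogatz_graph(n, k=4, p=0.1, seed=seed)
    raise ValueError(kind)

train_envs = {k: [make_env(k, seed=s) for s in range(100)]
              for k in ("er", "ba", "rgg", "ws")}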
After training, we evaluate the GNN agent in the testing environments against four baseline approaches:
* Random: Select a candidate node at random.
* Greedy: For each candidate node, build a candidate state subgraph. Evaluate the reward function for each subgraph and select the node that results in the biggest one-step improvement.
* Max Degree: Select the candidate node with the largest degree.
* Min Degree: Select the candidate node with the smallest degree.
The total average reward gathered by the different agents is presented in Table <ref>.
For the IGT reward, in all graph models except for ER, the GNN outperforms the greedy agent.
By contrast, the one-step-ahead greedy agent consistently performs best for CPT, with the GNN a close second.
Baseline approaches broadly perform better relative to the GNN for CPT than they do for IGT.
When exploring a graph with the IGT objective, adding a single node can close several topological gaps simultaneously, requiring careful consideration of options.
By contrast, compressibility is less sensitive to the choice of node at each step due to its strong correlation with the clustering coefficient <cit.>.
If exploring inside a cluster, neighbors of a node are likely to be neighbors of each other, lowering the likelihood that a single choice will significantly alter compressibility.
For instance, the max degree baseline performs well for the CPT objective in random geometric graphs because high-degree nodes are centrally placed and surrounded by dense, highly clustered neighborhoods <cit.>.
Barabási-Albert graphs, similarly, have highly clustered cores due to preferential attachment in their generative process <cit.>.
Watts-Strogatz networks have high clustering when the edge rewiring probability is low.
As a result, even random exploration in such topologies tends to occur inside clusters leading to greater compressibility.
In support of this view, the minimum degree baseline, which is likely to select a node outside of a cluster, typically falls further behind the GNN than the other baselines.
§.§.§ Trajectory length and environment size generalization
After training the GNN agent to explore 10 nodes in random geometric graph environments of 50 nodes, we evaluate generalization performance for shorter and longer trajectories and smaller and larger environments.
We test trajectory length generalization while holding environment size fixed at 50 nodes.
For walks shorter and longer than 10 steps, the GNN performs comparably to the greedy agent for both IGT and CPT (Figure <ref>).
We test environment size generalization by taking 10 steps on graphs that are smaller or larger than 50 nodes.
The GNN agent outperforms the greedy agent in smaller environments.
In larger environments, the GNN is superior to the greedy agent for IGT and exhibits comparable performance for CPT.
In summary, the performance of trained GNNs does not degrade for settings outside the training regime.
These results indicate that we can train GNNs for graph exploration in regimes where reward computations are relatively inexpensive due to the smaller size of subgraphs and expect them to scale to longer walks and larger networks.
We also report generalization results for the other graph models in the Supplementary Materials.
§.§.§ Time complexity
Using graphs of different sizes, we evaluate the computational efficiency of our approach by comparing the wall time for a forward pass through the GNN with that for a greedy evaluation of the rewards.
Figure <ref> displays results for random geometric synthetic graphs.
The time for greedy evaluation of the topological features for both IGT and CPT grows quickly with subgraph size, whereas the GNN offers a faster alternative.
Comparing the rewards for the two theories of curiosity, the information gap reward is significantly cheaper to evaluate compared to network compressibility.
Therefore, in addition to approximating human intrinsic motivations for exploration, we find that the GNN offers a route to efficient computation of meaningful topological features of graphs.
§.§ Alignment with human navigation of graph data
Next, we evaluate the utility of curiosity-trained agents in predicting human behavior in graph-structured environments.
To gather path-based information for our analyses, we use two types of real-world graph datasets.
Reviews enable us to approximate consumer paths on a similarity graph of available content.
We create two separate graphs, one comprising movies from the MovieLens dataset <cit.> and the other comprising books from the Amazon Product Reviews dataset <cit.>.
We also examine a second type of dataset consisting of user paths on Wikipedia in the Wikispeedia game environment <cit.>.
The three datasets are:
* MovieLens: The MovieLens dataset consists of movie reviews <cit.>.
We use IMDB user summaries and Word2Vec to construct vector embeddings for each movie.
We build a graph environment by treating each movie as a node.
For each movie, we use cosine similarity to add edges to the 20 most similar movies.
* Amazon Books: The Amazon Product Reviews dataset encompasses reviews for diverse products <cit.>.
To narrow our focus, we specifically extract and retain reviews associated with books. We filter out books with fewer than 150 reviews and limit our analysis to reviewers with at least 5 reviews.
To represent each book as a distinct entity, we use Word2Vec-based vector embeddings.
For each book, we add edges by identifying the top 20 most similar books based on their embeddings.
* Wikispeedia: The Wikispeedia dataset consists of paths collected for a navigation game on Wikipedia <cit.>. In the game, users are presented with a starting article and a destination article and are tasked with reaching the destination article using hyperlinks within Wikipedia. Here, the underlying hyperlink structure of Wikipedia acts as the graph environment.
We train GNNs for graph exploration in each of the three real-world environments for both information gap theory and compression progress theory.
To incorporate person-specific data, the PageRank hop vector q is modified to be zero for all nodes except a user's n_burn-in most recently visited nodes <cit.>.
We assign a uniform jump probability to the n_burn-in nodes, with q_k = 1/n_burn-in.
Each graph feature function ℱ yields a PageRank vector η^ℱ_i.
We combine these vectors linearly to obtain a final PageRank vector, denoted as η' such that η'(α, β̃, γ̃, δ̃) ≡β̃η_PR(α) + γ̃η_IGT(α) + δ̃η_CPT(α) where β̃^2 +γ̃^2 +δ̃^2 = 1 and η_PR is the score vector obtained using standard PageRank.
To evaluate this approach, we optimize the set of variables α, β̃, γ̃, δ̃ using a training set of transitions.
We then compare performance against unbiased PageRank, where only α is optimized.
Formally, we generate two sets of human transitions denoted as 𝐒_test and 𝐒_train.
These sets consist of portions of human trajectories with a length of n_burn-in+1.
Next, we perform Bayesian optimization to compute parameters â and â_bias for the two sets,
â ≡ arg max_α ∑_S∈𝐒_train rank_v_burn-in(η_PR(α))
â_bias ≡ arg max_α, β̃, γ̃, δ̃ ∑_S∈𝐒_train rank_v_burn-in(η'(α, β̃, γ̃, δ̃)).
To evaluate our method, we calculate the ratio of improvement on the test set, given as
r_𝐒_test ≡ ∑_S∈𝐒_test rank_v_burn-in(η'(â_bias)) / ∑_S∈𝐒_test rank_v_burn-in(η_PR(â)).
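A sketch of the score combination and of a percentile-rank evaluation of observed transitions; the exact candidate sets and the rank convention used in the paper are assumptions here, introduced only for illustration.

# Sketch: combined PageRank scores and percentile rank of observed transitions.
import numpy as np

def combined_scores(eta_pr, eta_igt, eta_cpt, beta, gamma, delta):
    w = np.array([beta, gamma, delta], dtype=float)
    w = w / np.linalg.norm(w)                  # enforce the unit-norm constraint
    return w[0] * eta_pr + w[1] * eta_igt + w[2] * eta_cpt

def mean_percentile_rank(eta, transitions):
    # transitions: list of (candidate_node_indices, chosen_node_index) pairs
    ranks = []
    for candidates, chosen in transitions:
        scores = eta[np.asarray(candidates)]
        ranks.append(float((scores <= eta[chosen]).mean()))
    return float(np.mean(ranks))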
Table <ref> displays r_𝐒_test in percentage terms for the three datasets when considering curiosity theories alone or in combination.
Across all combinations, improvement ranges from 2.9% to 32.2%, indicating that incorporating curiosity for the biasing of walks is useful.
Depending on the dataset, either the IGT- or the CPT-trained agent performs better, with similar improvement values.
In the Wikispeedia data, however, CPT leads to improvement that is nearly four times higher than IGT.
The books and movie datasets exhibit similarities since the selection mechanism in both is not directed towards a goal.
By contrast, the Wikispeedia dataset involves goal-directed navigation.
Figure <ref>B shows the improvement in predicting the transitions made by humans in the Wikispeedia dataset.
We compare percentile ranks for each transition made by the human when making predictions with and without biasing the random walk process.
We find that biased curiosity assigns higher percentile ranks to actual transitions than standard PageRank.
We also analyze the distance from the initial node with respect to time for individual random walk trajectories (Figure <ref>C).
In general, observed differences between the biased walkers are small and fall within the standard deviation of the walk process.
These observations suggest that the differences observed in the biased PageRank algorithm are not solely attributable to changes in the diffusion properties of the random walks.
§ LIMITATIONS
In our implementation, when computing an embedding for the state subgraph, the GNN does not distinguish candidate nodes from those already visited.
Appending a one-hot vector to differentiate candidates could potentially lead to improved performance.
This approach would allow the network to recognize and, therefore, prioritize candidate nodes during the decision-making process.
The PageRank algorithm includes various hyperparameters that can be further fine-tuned; for instance, p_g or refining the distribution P^ℱ_ij(𝒮_i) that is used to select nodes for the walker.
§ DISCUSSION
We can use intrinsic motivations that underpin human curiosity to train neural networks to explore graph-structured environments with diverse topological structures.
Our approach generalizes to longer exploratory walks and larger environments than are seen during training.
Importantly, relying only on the structure of the visited subgraph and without any domain-specific node features, we find that our method is more predictive of human behavior than PageRank centrality for several real-world graph datasets.
|
http://arxiv.org/abs/2307.05474v1 | 20230711175835 | Fractonic Higher-Order Topological Phases in Open Quantum Systems | [
"Jian-Hao Zhang",
"Ke Ding",
"Shuo Yang",
"Zhen Bi"
] | cond-mat.str-el | [
"cond-mat.str-el",
"math-ph",
"math.MP",
"quant-ph"
] |
These authors contributed equally
Department of Physics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
These authors contributed equally
State Key Laboratory of Low-Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China
[email protected]
State Key Laboratory of Low-Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China
Frontier Science Center for Quantum Information, Beijing, China
[email protected]
Department of Physics, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
In this work, we study the generalization of decohered average symmetry-protected topological (ASPT) phases to open quantum systems with a combination of subsystem symmetries and global symmetries. In particular, we provide examples of two types of intrinsic average higher-order topological phases with average subsystem symmetries. A classification scheme for these phases based on generalized anomaly cancellation criteria of average symmetry is also discussed.
Fractonic Higher-Order Topological Phases in Open Quantum Systems
Zhen Bi
§ INTRODUCTION
The rapid advancement of quantum simulators has sparked interdisciplinary research on the creation and manipulation of entangled quantum states within noisy intermediate-scale quantum (NISQ) platforms. This development has attracted significant attention from both the condensed matter and quantum information communities <cit.>. Symmetry-protected topological (SPT) phases <cit.> serve as a class of quantum states with non-trivial quantum entanglement and anomalous boundary states that have great potential to be realized in quantum devices, and they provide resource states for measurement-based quantum computation and for the preparation of other highly entangled quantum states <cit.>. In particular, symmetry-protected topological phases with subsystem symmetries, a novel type of symmetry whose conserved charges are localized on rigid submanifolds of the whole system, have been shown to have practical advantages for realizing measurement-based quantum computation in certain schemes <cit.>.
From the perspective of condensed matter physics, subsystem symmetries present a fascinating opportunity to explore new forms of matter characterized by fractonic dynamics of their excitations <cit.>. These symmetries also offer a valuable platform for studying strongly interacting topological phases, as the usual single particle hoppings are typically prohibited. Currently, there is active research into the classification and physical properties of subsystem symmetry-protected topological (SSPT) phases <cit.>. Recently, a novel class of SSPT phases called fractonic higher-order topological phases has been introduced <cit.>. These topological phases exhibit symmetry-protected gapless modes that manifest only on specific lower-dimensional subspaces of the boundary, while the rest of the boundary remains gapped in a manner consistent with the symmetry. They serve as analogs to higher-order topological phases found in systems with crystalline symmetries <cit.>. However, these phases are inherently strongly interacting due to the presence of subsystem symmetries.
Although discussing SSPT phases in the context of the ground state is fascinating, for practical purposes, it is crucial to consider the impact of decoherence and/or dissipation on the quantum entanglement of these topological phases, as systems are inevitably coupled to environments <cit.>. Understanding whether symmetry-protected topological phases remain stable under such conditions is an intriguing and significant question. Specifically, decoherence and/or dissipation can break exact symmetries in closed systems, resulting in an average symmetry in open systems <cit.>. Consequently, investigating average SPT (ASPT) phases becomes crucial, particularly in NISQ platforms, where quantum dynamics is not solely governed by Hamiltonians. Recent findings indicate that a wide range of SPT persists in mixed-state settings <cit.>, and notably, there are numerous non-trivial SPTs whose existence requires the assistance of quantum decoherence, referred to as intrinsic ASPT phases <cit.>.
In this paper, we examine the higher-order subsystem symmetry-protected topological (SSPT) phases in strongly correlated open systems subjected to quantum decoherence. For concreteness, we will mostly consider (3+1)d systems with 2-foliated subsystem symmetries. After some introductory remarks in this section, we will introduce examples of higher-order subsystem average SPT (SASPT) phases, and in particular, we will discuss two types of intrinsic SASPT phases with examples. Then we will present a general scheme for the classification of the higher-order SASPT phases in Sec. <ref> using the idea of anomaly cancellation for average symmetries. In addition to exploring the systematic classification and construction scheme of SASPT states, we also study examples of dynamical phase transitions of the hinge modes due to the effects of decoherence or dissipation in Sec. <ref>. We conclude and discuss some future outlooks in Sec. <ref>. In the appendix, some detailed peripheral discussion about SASPT is provided.
§.§ Higher-order SSPT in clean systems
For higher-order SSPT in clean systems <cit.>, there has been a systematic way to classify and construct these phases in all dimensions for bosonic systems <cit.> which is based on the idea of anomaly cancellation. We will take a (3+1)d system with 2-foliated subsystem symmetry as an example to illustrate the idea. Suppose that we have a system that has a finite extension in the x and y directions while being infinite in the remaining direction. We assume subsystem symmetries along every xz and yz plane of the system. By definition, the boundary of this system is trivially gapped except for the four hinges. These hinge modes are protected by the subsystem symmetries (and possibly some global symmetry as well). In other words, each individual hinge mode carries anomalies of these symmetries. However, when we view this whole system as a quasi-one-dimensional system, with the subsystem symmetries now becoming on-site symmetry groups, clearly, as a physical one-dimensional system, the symmetry actions must be free of any 't Hooft anomalies. This is the consistency condition that we need to impose to classify the SSPT phases.
Note that this picture also automatically gives a “coupled-wire" construction for the SPT phase. In each unit cell of the coupled-wire construction, we put the four anomalous hinge modes together, which we refer to as four building blocks throughout the paper. Within a unit cell, the four building blocks together are anomaly free and can be realized in purely 1d. Then we consider some interacting Hamiltonian between the unit cells that preserve the subsystem symmetries. The anomaly cancelation condition is equivalent to the statement that the bulk of the system can be symmetrically gapped out by turning on symmetric interactions between the neighboring four unit cells. The hinge of the system, however, is left gapless because there are no modes to pair up with the assumptions of locality and symmetry constraints. A schematic illustration for the above physical picture is shown in Fig. <ref>.
More formally, the bosonic anomaly for the group G in d+1 space-time dimensions is characterized by a cocycle in d+2 dimension, namely [ν]∈^̋d+2(G, ) where ν is a representative group cocycle. Therefore, each anomalous hinge mode for 2-foliated higher-order SSPT in d-spatial dimensions is labeled by a non-trivial cocycle in ^̋d(G_s, ) where G_s is the subsystem symmetry group. Note that for each individual hinge, only a limited set of subsystem symmetry is involved. The anomaly-free condition is when we take the cocycles on the four hinges together and consider the full subsystem symmetry groups the system should carry no anomaly. It turns out this condition is equivalent to saying that the image of the following map between ^̋d[G_s, ] and ^̋d[G_s× G_s, ],
f_2(ν) ({g,g'})=ν({g})ν({g'})/ν({gg'}),
must be the trivial class in ^̋d[G_s× G_s, ].
We note that the above program based on anomaly cancellation is also applicable to fermionic systems. The anomalies of fermion SPT phases are described by generalized group cohomology <cit.>, and we only need to substitute the cocycle ν in Eq. (<ref>) by generalized cocycle describing the fermionic SPT anomaly for fermionic systems.
These conditions can be easily generalized to n-foliated structures in general dimensions and to incorporate additional global symmetries. For interested readers, we refer to more details in Ref. <cit.>. We will generalize these ideas into the classification of SASPT phases with decoherence in Sec. <ref>.
§.§ Coupled-wire construction
As mentioned above, the coupled-wire construction is a natural physical picture for SSPT phases in (3+1)d with 2-foliated subsystem symmetries. Here, we rephrase the anomaly-free condition for constructing SSPT phases in this setting. As illustrated in Fig. <ref>, we have 4 building blocks per unit cell, and each block is an anomalous (1+1)d edge state of a certain (2+1)d SPT state classified by ν∈ H^3[G_s, U(1)]. The arrangement of the 4 anomalous modes in one unit cell is ν,ν^-1,ν,ν^-1. The total anomaly of the 4 building blocks in each unit cell is automatically canceled; therefore, the unit cell can in principle be realized in (1+1) dimensions. The anomaly cancellation condition described in the last section means physically that one can obtain a symmetric gapped bulk state by turning on symmetric interactions between 4 neighboring unit cells. This condition can be checked using techniques from the framework of (1+1)d multi-channel Luttinger liquids <cit.>.
Considering the 4 modes in the intersection of 4 neighboring unit cells, i.e. the modes within a blue plaquette in Fig. <ref>, the free part of Lagrangian has the following form
ℒ_0 = ∂_tΦ^T (K/4π) ∂_xΦ + ∂_xΦ^T (V/4π) ∂_xΦ,
where K is the K-matrix encoding the topological term, while the non-universal V matrix encodes the dynamical term.
Ł_ Higgs=∑_kcos(l_k^T KΦ),
as backscattering of the boson fields in this set of modes that respects all relevant subsystem symmetries.
For a complete set of Higgs terms, we first require all terms in the set to be mutually commuting. Thus {l_k} should satisfy the “null-vector" condition <cit.>
l_i^T K l_j=0, ∀ i,j.
Also, we demand that the number of Higgs terms be the same as the number of helical modes in Eq. <ref> in order to gap all the gapless modes. In the end, we also need to make sure that there is no spontaneous symmetry breaking in the strong coupling limit of these Higgs terms by requiring the minors of the matrix formed by the l vectors to be 1 <cit.>. If we can find such a set of Higgs terms, then the bulk of the system can be fully gap out and the anomaly-free condition is checked. One can easily spot the remaining gapless mode on the hinge of the system, which is exactly the second-order gapless hinge mode of the SSPT state.
§.§ Topological phases with average symmetries
For an open quantum system, we are generally concerned with mixed density matrices. The generalization of short-range-entangled (SRE) pure states to mixed states are density matrices that can be prepared from a pure product state using a finite-depth local quantum channel, namely
ρ=ℰ(|0⟩⟨0|),
where the quantum channel can be formulated as
ℰ[ρ]=∑_jK_jρ K_j^†, ∑_jK_j^† K_j=1,
where K_j's are the local symmetric Kraus operators. An SRE mixed state generically has short-range correlations for all local operators. We would like to discuss symmetry-protect topological phases in such SRE density matrices.
For the density matrix, two types of symmetries can arise, namely the exact and average symmetries <cit.>. We label the exact symmetry group by K and the average symmetry group by G.
The exact symmetry K is defined such that for a symmetry operator U_k (k∈ K), the density matrix ρ is invariant by acting U_k individually on the left or right, say U_kρ= e^iαρ and ρ U_k^†=e^-iαρ. The average symmetry G is defined that for a symmetry operator U_g (g∈ G), the density matrix ρ is generally not invariant when acting U_g on the left or right individually, but invariant when acting U_g on the left and right simultaneously, say U_gρ U_g^†=ρ. Generically, the total symmetry group G̃ is an extension of G symmetry by K, which can be characterized by certain short exact sequence
1→ K→G̃→ G→1.
When encountering a subsystem symmetry, we will add a subscript s to denote it.
The concept of average symmetry-protected topological (ASPT) phases is based on the equivalence classes of density matrices under symmetric finite-depth local quantum channels. Specifically, two ASPT density matrices, ρ_1 and ρ_2, are considered equivalent if there exist symmetric finite-depth local quantum channels, ℰ_12 and ℰ_21, such that ℰ_12(ρ_1)=ρ_2 and ℰ_21(ρ_2)=ρ_1. The classification of ASPT states is achieved through the use of generalized group cohomology theory, which provides explicit classifications based on decorated domain wall constructions <cit.>. Roughly speaking, a nontrivial ASPT density matrix can be constructed as a classical collection of wavefunctions with G-symmetry defects decorated with exact K-symmetry SPTs. Different decoration patterns give different ASPT phases, as one cannot change the decorated K-symmetry SPT without using a deep quantum channel.
Noticeably, a new class of topological phases that only exists in mixed states, dubbed intrinsic ASPT, was discovered in Ref. <cit.>. The existence of these new phases is due to the modified consistency relations of the generalized cohomology theory. The basic idea is that the Berry phase consistency condition in the cohomology theory for pure-state SPTs is no longer required in a mixed state, as the Berry phase is not well defined there. Thus, there can be decorated domain wall configurations that make no sense in an SRE quantum wavefunction but can exist in a mixed density matrix. The general classification of ASPT phases with global exact and average symmetry is fleshed out in Ref. <cit.>. We describe some of the detailed mathematical structures in Appendix <ref>. In the following, we will generalize the idea of ASPT to systems with subsystem symmetries and possibly some global symmetry as well.
§ SUBSYSTEM AVERAGE SPT PHASES
In this section, we present the basic idea of SASPT phases through a coupled wire construction, similar to what was done in the context of (3+1)d clean systems with 2-foliated subsystem symmetry <cit.>. In parallel to the construction of SSPT in clean systems, we arrange four “average" anomalous (1+1)d modes within each unit cell as depicted in Fig. <ref>, and aim to achieve a short-range entangled (SRE) bulk state through symmetric interactions and decoherence. The average anomalous (1+1)d modes can be viewed as the boundary of certain (2+1)d ASPT states and may or may not have a purification into an SRE state. Analogous to the clean systems, we will present the average anomaly cancelation condition for SASPT phases, which is equivalent to the emergence of an SRE bulk state through symmetric interactions and decoherence between different unit cells.
§.§ SASPTs with clean limits
First, we illustrate an example of SASPT with a clean limit. In this type of SASPT, since the bulk is already gapped in the clean limit, all we need to show essentially is that the hinge mode of the clean SSPT is stable against decoherence that turns part of the symmetry from exact to average. Consider a (3+1)d bosonic system possessing a 2-foliated subsystem symmetry ℤ_2 and a global time-reversal symmetry ℤ_2^T. We will first show that, in the clean limit, there is a nontrivial SSPT via wire construction. Then we will argue that, with decoherence that breaks the subsystem symmetry to average while keeping the exact ℤ_2^T symmetry, the non-trivial hinge modes cannot be turned into an SRE mixed state. Therefore, this SSPT phase is stable in open systems.
The wire construction is shown in Fig. <ref>. Each blue circle represents an anomalous theory carrying the 't Hooft anomaly of a (2+1)d ℤ_2×ℤ_2^T SPT state in clean systems. To show that an SSPT exists, the first objective is to introduce symmetric interactions between different unit cells such that the bulk of the system can be gapped out without breaking any symmetry – which is the essence of the anomaly cancelation condition. Then one needs to check if there are any non-trivial hinge modes. Finally, if we allow any local quantum channels that break the subsystem ℤ_2 symmetry to an average subsystem ℤ_2 symmetry, we will show that the hinge modes remain nontrivial, and hence the system is a SASPT state.
To consider the gapping problem in the bulk, we need to write down the Lagrangian for each plaquette in Fig. <ref>. This can be conveniently presented as 8-component bosonic fields Φ=(Φ_1,⋯,Φ_4), where Φ_i=(ϕ_i1,ϕ_i2) denotes the bosonic degrees of freedom of the quantum wires in each plaquette. The kinetic part of the Luttinger liquid reads <cit.>
ℒ_0 = ∂_tΦ^TK/4π∂_xΦ+∂_xΦ^TV/4π∂_xΦ,
where K=(σ^x)^⊕ 4 is the K-matrix. Each block of σ^x is supposed to be the edge theory of a (2+1)d ℤ_2×ℤ_2^T bosonic SPT, which gives the following symmetry transformations
g:Φ→ (-1)^s_1(g)WΦ+δΦ.
Here s_1(g) characterizes if g is anti-unitary, namely
s_1(g)=0 if g is unitary, and s_1(g)=1 if g is anti-unitary.
In the plaquette, there are 4 independent ℤ_2 subsystem symmetries. Their symmetry transformation rules are given by the following
W^ℤ_2 = (σ^0)^⊕4
δΦ^ℤ_2^(1) = π(1,0,1,0,0,0,0,0)^T
δΦ^ℤ_2^(2) = π(0,0,0,0,1,0,1,0)^T
δΦ^ℤ_2^(3) = π(1,0,0,0,0,0,1,0)^T
δΦ^ℤ_2^(4) = π(0,0,1,0,1,0,0,0)^T
.
For the global time-reversal symmetry 𝒯∈ℤ_2^T
W^𝒯 = (-σ^z)^⊕4
δΦ^𝒯 = π(0,1,0,1,0,1,0,1)^T
.
To get an SSPT state, we should fully gap each plaquette in the bulk with symmetric interactions. To that end, we need to include four linearly independent symmetric Higgs terms,
ℒ_ Higgs=∑_k=1^4λ_kcos(l_k^T KΦ),
that satisfy the null-vector conditions (<ref>). One can easily check that the following vectors satisfy all these conditions
l_1 = (1,0,1,0,1,0,1,0)^T
l_2 = (0,1,0,-1,0,-1,0,1)^T
l_3 = (1,0,0,-1,0,1,-1,0)^T
l_4 = (0,1,-1,0,-1,0,0,1)^T
.
Moving on, we will inspect the properties of the boundary of the system. On a smooth boundary, there exist four dangling bosonic modes. It is obvious that one can introduce on-site mass terms that allow us to fully gap out the boundary site. This will not be true for a corner site, i.e., there is a dangling hinge mode at each corner. By design, each hinge mode is described by a Lagrangian that is precisely the anomalous boundary of the (2+1)d SPT with ℤ_2×ℤ_2^T symmetry. Hence, the whole system comprises an SSPT state.
Finally, we consider the effect of decoherence, which breaks the ℤ_2 subsystem symmetry down to an average symmetry, on the hinge modes. Since the hinge mode can be viewed as the edge of a (2+1)d SPT with ℤ_2×ℤ_2^T symmetry, the question is equivalent to asking whether the (2+1)d SPT is stable after breaking the exact ℤ_2 symmetry down to an average symmetry. The (2+1)d SPT used in the construction belongs to the nontrivial decoration class where a ℤ_2 domain wall is decorated with a (1+1)d SPT with ℤ_2^T symmetry. According to the general classification derived in Ref. <cit.>, this SPT is still a non-trivial ASPT when the ℤ_2 symmetry is average and ℤ_2^T is kept exact. Therefore, its boundary, carrying an average anomaly, cannot be turned into an SRE mixed state.
While the argument above is generally valid, one can see this more explicitly with the following microscopic example. In clean systems, the hinge mode would be a (1+1)d Luttinger liquid in the form of Eq. (<ref>), with the K-matrix K=σ^x and the ℤ_2×ℤ_2^T symmetry properties
W^ℤ_2=1_2×2, δϕ^ℤ_2=π(1,0)^T
W^𝒯=-σ^z, δϕ^𝒯=π(0,1)^T.
Then we consider a local quantum channel ℰ(x) of decoherence that breaks ℤ_2 to an average symmetry, namely
ℰ(x)[ρ_hinge]=(1-p)ρ_hinge+p K(x)ρ_hinge K(x)^†,
where K(x)∼cosϕ_1(x) is the lowest-order Kraus operator in terms of the ϕ fields that breaks ℤ_2 down to an average symmetry while keeping the exact ℤ_2^T symmetry. The quantum channel has no effect on the correlation functions of the cosϕ_1 operators, which remain power-law correlated in the decohered system. Therefore, after applying the quantum channel to break ℤ_2 to an average symmetry, we obtain a (1+1)d power-law-correlated mixed hinge state, which is consistent with the conclusion that the system is a nontrivial SASPT.
§.§ Intrinsic SASPT phases
In the case of on-site symmetry, it has been proposed that a significant class of ASPT is only well-defined in mixed states, lacking any counterpart in clean (closed) systems <cit.>. In this subsection, we demonstrate that a large class of SASPT phases also does not have clean limits and can only exist in mixed ensembles. We refer to these SASPT phases as intrinsic SASPT phases. Based on the wire construction picture, we find two types of intrinsic SASPT phases depending on the properties of the building blocks. In the following, we primarily discuss the cases of 3-dimensional systems with 2-foliated subsystem symmetries for clarity, although general cases are easy to construct.
* Type-I intrinsic SASPT refers to the following situation. We consider coupled-wire models where each unit cell is composed of four building blocks consisting of anomalous modes, which can be viewed as the (1+1)d boundaries of certain (2+1)d clean SPTs. The modes are arranged such that within each unit cell they can be symmetrically gapped out in the clean limit, meaning the system admits a trivial “atomic" insulating phase in the clean limit. However, we want to consider the situation where it is not possible to gap out the modes from four neighboring unit cells in the clean limit using only symmetric interactions, namely a clean SSPT does not exist. In such a situation, if symmetric local decoherence can lead to an SRE bulk mixed state and leave the hinge of the system nontrivial, we call such a system a Type-I intrinsic SASPT.
* Type-II intrinsic SASPT differs from Type-I in that the building blocks in each unit cell do not have a clean limit. In particular, the building blocks are density matrices corresponding to the (1+1)d anomalous boundary of a (2+1)d intrinsic ASPT state <cit.>. The four building blocks are arranged such that the average anomaly cancels within the unit cell, meaning that the unit cell can be an SRE mixed state with average symmetries. Under such conditions, if symmetric interactions and decoherence can turn the four building blocks from four neighboring unit cells into an SRE mixed state and leave a non-trivial hinge, then we refer to such systems as Type-II intrinsic SASPTs.
We note that in the type-I intrinsic SASPT, the system still admits a clean trivial insulator state, but a nontrivial SPT state exists only in a mixed ensemble. However, in a type-II system, the existence of a trivial insulator already requires the system to be an open quantum system.
§.§.§ Type-I intrinsic SASPT from decoherence-assisted anomaly cancellation
In this subsection, we focus on intrinsic SASPT states whose anomaly cancellation conditions can only be fulfilled in open quantum systems.
Consider the coupled-wire model discussed in Section <ref>. Each unit cell consists of 4 building blocks (depicted in Fig. <ref>), and each building block serves as an anomalous edge state of a clean SPT state. In the discussion of clean SSPT, one requires the anomaly cancellation condition in Eq. (<ref>), which guarantees a symmetric gapped bulk via inter-unit-cell interactions. In the case of type-I intrinsic SASPT, however, the anomaly cancellation condition (<ref>) does not hold in clean systems. Specifically, the type-I intrinsic SASPT states correspond to elements in H^p[G_s, h^q(K_s)] whose image under the f_2 map in Eq. (<ref>) is an element in H^p+q[G_s^2, U(1)]. However, when the subsystem symmetries are broken down to average symmetries by a local decoherence channel, such an anomaly vanishes. Hence, an SRE bulk is possible in open quantum systems.
Now we present an example of a 3-dimensional system with a 2-foliated subsystem symmetry G_s=ℤ_2 and global fermion parity conservation G_g=ℤ_2^f. Each building block in the coupled-wire model (Fig. <ref>) corresponds to the edge theory of two copies of the p± ip superconductor (or the edge of the ℤ_8-classified fermionic Levin-Gu state in (2+1)d with topological index ν=2). The building block can be described by a Luttinger liquid with K-matrix K=σ^z and the following ℤ_2 and ℤ_2^f symmetry actions
ℤ_2: W^ℤ_2=1_2×2, δϕ^ℤ_2=π(0,1)^T
ℤ_2^f: W^ℤ_2^f=1_2×2, δϕ^ℤ_2^f=π(1,1)^T.
In each unit cell, there are 4 building blocks with an alternating anomaly pattern such that the total anomaly is trivial. Therefore, each unit cell can emerge in a clean system as an SRE pure state.
Now, in order to get a nontrivial state, we want to obtain an SRE bulk by symmetric interaction and/or decoherence between the building blocks within one inter-unit-cell plaquette. The Luttinger liquid theory describing the modes inside the inter-unit-cell plaquette in Fig. <ref> has the K-matrix K=(σ^z)^⊕4, and there are four ℤ_2 subsystem symmetries acting on these modes with the following nontrivial actions
ℤ_2^1: W^g_1=1_8×8
δϕ^g_1=π(1,0,0,1,0,0,0,0)^T
ℤ_2^2: W^g_2=1_8×8
δϕ^g_2=π(0,0,0,0,0,1,1,0)^T
ℤ_2^3: W^g_3=1_8×8
δϕ^g_3=π(1,0,0,0,0,1,0,0)^T
ℤ_2^4: W^g_4=1_8×8
δϕ^g_4=π(0,0,0,1,0,0,1,0)^T.
We also demand a global fermion parity conservation
ℤ_2^f: W^ℤ_2^f=1_8×8
δϕ^ℤ_2^f=π(1,1,1,1,1,1,1,1)^T.
We can first check that these modes cannot be gapped out by symmetric interactions in the clean limit by calculating the anomaly of these symmetries. An anomaly indicator to detect the ℤ_2 symmetry anomaly is demonstrated in Ref. <cit.>: for a Luttinger liquid with ℤ_2 symmetries (unitary or anti-unitary), one can always put the K-matrix and symmetry actions into the following canonical forms
K=(
[ A 0 B -B; 0 C D D; B^T D^T E F; -B^T D^T F^T E ]),
W=(
[ -1_n_–m 0 0 0; 0 1_n_+-m 0 0; 0 0 0 1_m; 0 0 1_m 0 ]),
δϕ=(
[ 0; χ_2; 0; 0 ]),
where 1_m is an m× m identity matrix, n_-, m, and n_+ are non-negative integers satisfying n_++n_-=N and m≤ n_±. An auxiliary vector χ_+ is further defined as
χ_+=(
[ 0; χ_2+2a; diag(E+F)/2+b; diag(E+F)/2+b ]), ∀ a, b∈ℤ_2.
The anomaly indicator ν is defined by
ν≡1/2χ_+^TK^-1χ_++1/4sig(K(1-W)) (mod 2),
where “sig” denotes the signature of the matrix. The anomaly-free criterion of (K, W, δϕ) based on the anomaly indicator ν is
ν=0 (mod 2).
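For numerical exploration, the anomaly indicator of Eq. (<ref>) can be evaluated directly; the sketch below is our own illustration and assumes the data (K, W, χ_+) have already been brought to the canonical form above (the matrix K(1-W) is symmetrized before taking its signature purely for numerical convenience).

```python
import numpy as np

def signature(M, tol=1e-9):
    """Signature = (# positive eigenvalues) - (# negative eigenvalues)."""
    evals = np.linalg.eigvalsh((M + M.T) / 2)   # symmetrize for numerical safety
    return int(np.sum(evals > tol) - np.sum(evals < -tol))

def anomaly_indicator(K, W, chi_plus):
    """nu = 1/2 chi_+^T K^{-1} chi_+ + 1/4 sig(K(1 - W))   (mod 2)."""
    K, W = np.asarray(K, float), np.asarray(W, float)
    chi = np.asarray(chi_plus, float)
    nu = 0.5 * chi @ np.linalg.inv(K) @ chi \
         + 0.25 * signature(K @ (np.eye(len(K)) - W))
    return nu % 2

# Trivial sanity check: K = sigma^x, W = identity, chi_+ = 0  ->  nu = 0 (anomaly-free).
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
print(anomaly_indicator(sigma_x, np.eye(2), [0, 0]))   # 0.0
```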
For the ℤ_2 subsystem symmetries in Eq. (<ref>), one can show that the anomaly indicators of the operations g_1g_3, g_2g_4, g_1g_4, and g_2g_3 are non-vanishing:
ν_g_1g_3=ν_g_2g_4=1(mod 2)
ν_g_1g_4=ν_g_2g_3=1(mod 2)
.
This implies that the modes in each plaquette exhibit a subsystem symmetry anomaly, indicating that the coupled-wire construction described above is obstructed from having a gapped bulk in the clean limit via symmetric interactions, due to the failure of the anomaly cancellation condition. Eq. (<ref>) also indicates that the anomaly is bosonic <cit.>. In other words, by introducing suitable symmetric interactions in each plaquette, the 8-component Luttinger liquid can be transformed into a 2-component boson field with K-matrix K=σ^x and ℤ_2 symmetry transformation W^ℤ_2=1_2×2 and δϕ^ℤ_2=π(1,1), which is precisely the boundary of the (2+1)d bosonic ℤ_2 SPT, i.e., the Levin-Gu state <cit.>.
Next, we consider the system subject to decoherence, which breaks the ℤ_2 subsystem symmetry down to an average ℤ_2 subsystem symmetry. In this situation, the anomalous bosonic modes in each plaquette can actually be turned into an SRE mixed state, because the anomaly becomes trivial when the ℤ_2 symmetry is broken down to average. This is equivalent to the statement that, with only an average ℤ_2 symmetry, there is no nontrivial bosonic ASPT in (2+1)d. In particular, we discuss in more detail in Sec. <ref> how both decoherence and dissipation can drive the Luttinger liquid in each plaquette toward a mixed state with short-range entanglement. This leads to the formation of an SRE bulk mixed state.
Subsequently, we turn our attention to the hinge mode. As mentioned above, the hinge mode, which is also the building block of the wire construction, corresponds to the edge state of a (2+1)d fermionic Levin-Gu state with topological index ν=2, with symmetry actions specified in Eq. (<ref>). We can introduce decoherence processes that break the ℤ_2 symmetry down to an average ℤ_2 symmetry. However, according to the general classification paradigm in Ref. <cit.>, the (2+1)d fermionic Levin-Gu state with topological index ν=2 is still a nontrivial ASPT if we break the ℤ_2 symmetry down to an average symmetry by some decoherence channel and keep the fermion parity ℤ_2^f exact. Therefore, the hinge mode, which is an edge of the ASPT mentioned above, still carries an average anomaly and cannot be turned into an SRE mixed state. The stability of the hinge modes shows that the system is a nontrivial type-I intrinsic SASPT.
§.§.§ Type-II intrinsic SASPT from average anomalous building blocks
In this subsection, our focus is on constructing intrinsic SASPT states whose building blocks do not have a clean limit. In other words, each building block is the boundary of a certain intrinsic ASPT state.
As an example, let us consider the coupled-wire model with 2-foliated subsystem fermion parity symmetry G_s = ℤ_2^f and global time-reversal symmetry G_g = ℤ_2^T with (ℤ_2^T)^2=1. In our previous discussion of closed systems in Sec. <ref>, each building block in Fig. <ref> is expected to be an edge mode of a (2+1)d ℤ_2^f ×ℤ_2^T SPT state. However, the classification of (2+1)d ℤ_2^f ×ℤ_2^T SPTs in pure states is trivial <cit.>, which means there are no nontrivial clean SSPT states in this symmetry class.
Now, let us consider open systems where decoherence breaks the time-reversal symmetry, transforming it into an average symmetry. In such settings, the classification of (2+1)d ASPT with exact ℤ_2^f and average ℤ_2^T is actually nontrivial and all the nontrivial cases are intrinsic ASPTs (see Appendix <ref>). In this case, the coupled-wire model can be composed of the building block of (1+1)d mixed states with exact fermion parity ℤ_2^f and average time-reversal ℤ_2^T symmetries which are the edges of the (2+1)d intrinsic ASPTs. These building blocks cannot be SRE mixed states, indicating the presence of average anomalies.
Here we provide more details on the classification data for the intrinsic ASPTs with exact ℤ_2^f and average ℤ_2^T symmetry. There are three layers of the classification data:
n_1∈ℋ^1[ℤ_2^T,h^2(ℤ_2^f)]=ℤ_2
n_2∈ℋ^2[ℤ_2^T,h^1(ℤ_2^f)]=ℤ_2
ν_3∈ℋ^3[ℤ_2^T,U(1)]=ℤ_1,
where n_1 labels the Majorana chain decoration on the time-reversal domain walls, n_2 labels the complex fermion decoration on the junctions of time-reversal domain walls, and ν_3 represents the bosonic SPT state protected by ℤ_2^T. In order to get a consistent clean SPT state, these data must satisfy the following consistency conditions (see Appendix <ref> for more details):
d_2n_1=s_1∪ n_1∪ n_1
d_2n_2=𝒪_4[n_2]
,
In the clean case, both n_1 and n_2 encounter obstructions, which is the reason for the absence of clean SPT states. However, in the context of open quantum systems, the 𝒪_4[n_2] obstruction is trivial due to the phase decoherence of open systems. Consequently, a nontrivial n_2 characterizes a (2+1)d intrinsic fermionic ASPT phase. This intrinsic ASPT has a ℤ_2 classification because a nontrivial n_2 is in fact the only intrinsic ASPT state for this symmetry class: a nontrivial n_1 would lead to an obstruction involving fermion-parity violation, which is not allowed as long as ℤ_2^f remains an exact symmetry in the decohered systems <cit.>. As a result, we can take each building block depicted in Fig. <ref> to be the edge of the (2+1)d intrinsic ASPT state with exact ℤ_2^f and average ℤ_2^T symmetry.
In order to achieve an SRE bulk state, we need to argue that four building blocks from the four neighboring unit cells can be turned into an SRE ensemble using quantum decoherence (as shown in Fig. <ref>). This is equivalent to saying that the total average anomaly vanishes for these modes. Examining the configuration, we observe the following charge assignments for the subsystem symmetries:
* Wire-1 carries nontrivial charges associated with subsystem fermion parities ℤ_2,x_2^f and ℤ_2,y_1^f.
* Wire-2 carries nontrivial charges associated with subsystem fermion parity ℤ_2,x_2^f and ℤ_2,y_2^f.
* Wire-3 carries nontrivial charges associated with the subsystem fermion parity ℤ_2,y_1^f and ℤ_2,x_1^f.
* Wire-4 does not carry any charges associated with the subsystem fermion parities ℤ_2,x_2^f and ℤ_2,y_1^f.
To obtain an SRE ensemble from these four building blocks, we need to show that the average anomaly associated with all the symmetries is trivial. Considering the two subsystem fermion parities ℤ_2,x_2^f and ℤ_2,y_1^f, we can summarize the average anomalies of the different wires as follows:
wire-1: n_2^x_2(g_1,g_2)+n_2^y_1(g_1,g_2)
wire-2: n_2^x_2(g_1,g_2)
wire-3: n_2^y_1(g_1,g_2)
wire-4: 0
,
where n_2^x_2(g_1,g_2)∈ℋ^2[ℤ_2^T,h^1(ℤ_2,x_2^f)]=ℤ_2 and n_2^y_1(g_1,g_2)∈ℋ^2[ℤ_2^T,h^1(ℤ_2,y_1^f)]=ℤ_2. Due to the ℤ_2 nature of these anomalies, it is easy to see that the anomaly associated with these two symmetries vanishes when we take all four wires into consideration. Similarly, one can easily check that the total average anomaly of the four building blocks is trivial, guaranteeing the existence of an SRE bulk ensemble. Given the SRE bulk, from a similar construction as before, one can easily see that the hinge is a nontrivial mixed state which carries precisely the average anomaly of the edge of the (2+1)d intrinsic ASPT state with exact ℤ_2^f and average ℤ_2^T symmetry. Thus, this system is a nontrivial intrinsic SASPT.
A more explicit coupled-wire model (with some assumptions on the time-reversal action) of this type-II intrinsic SASPT state is given in Appendix <ref>, where we have used the doubled Hilbert space language to demonstrate this nontrivial state. In particular, the explicit decoherence channels needed to construct this state are provided in Appendix <ref>.
§ CLASSIFICATION SCHEME FOR THE SASPT PHASES
In the previous section, we have demonstrated a few possibilities of SASPT phases in (3+1)d open quantum systems with 2-foliated subsystem symmetries. In particular, we discussed two distinct types of intrinsic SASPT states that cannot be realized as SSPT states in clean or closed quantum systems. In this section, we focus on the general classification scheme for SASPT phases. This classification scheme relies on the cancellation condition of average anomaly, similar to Eq. (<ref>), within the framework of coupled-wire constructions.
§.§ Classification by anomaly cancellation
We discuss the general classification paradigm of SASPT phases based on the cancellation condition of an average anomaly in the coupled-wire models. To discuss the average anomaly, we recall the classification of ASPT with onsite symmetry. According to Ref. <cit.>, we know that, for onsite symmetry, the G̃-symmetric decohered ASPT density matrices ρ are classified by the AH spectral sequence with modified data and differentials. The topological invariant of ρ in (d+1)d is a (p+q)-cocycle as an element of the E_2 page of the AH spectral sequence, as
ν_p+q({g};{k})∈ E_2^p,q=ℋ^p[G,h^q(K)],
where p+q=d+1, q>0, and (g_i, k_j)∈(G, K) are group elements of the average and exact symmetries, respectively. Each element in Eq. (<ref>) labels the average anomaly of a d-dimensional mixed state which is the boundary of the (d+1)-dimensional ASPT state. These average anomalous boundary systems will be our building blocks for the SASPT states. We note that our scheme is also applicable to fermionic systems, for which the fermion parity ℤ_2^f should be included as a subgroup of K.
Towards an SRE bulk state, the total average anomaly per plaquette (see Figs. <ref> and <ref>) should be canceled. Therefore, similar to Eq. (<ref>), for a 2-foliated subsystem average symmetry G̃_s that is extended from an exact subsystem symmetry K_s and an average subsystem symmetry G_s, the anomaly cancellation map f̃_2 is modified to be
f̃_2(ν)({g,g'};{k,k'})=ν({g};{k})ν({g'};{k'})/ν({gg'};{kk'}).
To have average anomaly cancellation, we need the image of this map to be trivial. We note that there is an important difference between the average anomaly cancellation condition through f̃_2 [cf. Eq. (<ref>)] and that of the f_2 map [cf. Eq. (<ref>)]. ν is the collection of all average obstruction-free elements in E_2^p,q=ℋ^p[G_s, h^q(K_s)] with q>0, serving as the topological invariant of (2+1)d ASPT phases. The total average anomaly cancellation condition is that the image of the f̃_2 map falls into the trivial elements in E_2^p,q=ℋ^p[G_s^2, h^q(K_s^2)] with q>0 or into any element in ℋ^p+q[G_s^2, U(1)]. This is because there are no nontrivial ASPT states if there is no exact symmetry.
§.§ The case for intrinsic SASPT phases
In the previous subsection, we established the cancellation of the average anomaly as the classification principle for SASPT phases, as expressed by Eq. (<ref>). This cancellation bears a resemblance to the anomaly cancellation observed in closed systems. However, there is a key distinction between the f_2 map (<ref>) and the f̃_2 map (<ref>) in terms of their preimage and image groups, which leads to different classifications and intrinsic SASPTs. In Sections <ref> and <ref>, we presented two examples of intrinsic SASPT phases: the type- and type- SASPT phases. In this subsection, our objective is to see how they fit into the general classification.
Type-I intrinsic SASPT phases are defined by building blocks that correspond to the d-dimensional edge state of a (d+1)-dimensional clean SPT state. However, the “gapping" problem of the bulk needs the assistance of decoherence. In this case, although it is not possible to find a symmetric interaction that fully gaps out each plaquette in the clean system, by introducing all possible symmetric interactions, the modes in each plaquette can be deformed to the edge state of a (d+1)-dimensional SPT state labeled by an element of ℋ^3[G_s^2, U(1)]. Under decoherence of the G_s degrees of freedom, the modes in each plaquette can be decohered to an SRE (1+1)d mixed state. Therefore, the classification of type-I intrinsic SASPT phases includes the elements in Eq. (<ref>) that label the ASPT phases in (2+1)d with clean limits and that, under the f̃_2 map (<ref>), are mapped to elements in ℋ^p+q[G_s^2, U(1)].
In the type-II intrinsic SASPT phases, each building block corresponds to the edge state of a higher-dimensional intrinsic ASPT state <cit.>. Under the f̃_2 map (<ref>), if the building blocks are mapped to trivial elements in E_2^p,q=ℋ^p[G_s^2, h^q(K_s^2)] (q>0) or to any element in ℋ^p+q[G_s^2, U(1)], one can find a consistent type-II intrinsic SASPT.
§ EFFECTS OF DECOHERENCE OR DISSIPATION ON THE HINGE MODES
In the previous section, we have discussed the classification of nontrivial SASPTs. We noted the importance of exact symmetries, without which there are no robust SASPT phases. This means that, when all the symmetries of the system are broken down to average by decoherence, the hinge modes can in principle become an SRE mixed state. However, in some cases, it is possible that the hinge mode is stable in a weak-decoherence regime and only becomes SRE beyond some critical decoherence strength. In other words, the hinge mode can go through a dynamical transition driven by decoherence or dissipation.
Before going to the example, let us introduce the formalism to describe the dynamics of open quantum systems. A huge class of out-of-equilibrium quantum dynamics of open systems can be described by the Markovian quantum master equation, which reads<cit.>
∂_tρ=Łρ=-i[H,ρ]+∑_αγ_α(L_αρ L_α^†-1/2{L_α^† L_α, ρ}).
Here the operator Ł is called the Liouville superoperator and acts on the density matrix ρ from both sides. The quantum jump operators L_α describe the coupling between the system and the environment (bath). The non-negative numbers γ_α set the strength of the quantum jumps.
The Lindbladian evolution can include many different types of non-equilibrium dynamics. For instance, the jump operators L_α can capture situations where the system is (weakly) measured by the environment at a certain instant or over a finite time interval, namely L_α(t)∼δ(t-t_0). In this paper, we refer to this type of quantum dynamics as measurement or decoherence. The Lindbladian evolution can also describe a system that is constantly interacting with the outside environment, which in this paper is referred to as dissipation dynamics.
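As a concrete numerical counterpart to the master equation above, the following sketch (our own illustration; the single-qubit dephasing data at the end are placeholders) implements the Lindbladian right-hand side for an arbitrary Hamiltonian and set of jump operators and integrates it with a crude Euler step.

```python
import numpy as np

def lindblad_rhs(rho, H, jumps):
    """d rho/dt = -i[H, rho] + sum_a gamma_a (L rho L^+ - 1/2 {L^+ L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for gamma, L in jumps:
        LdL = L.conj().T @ L
        drho += gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return drho

# Placeholder example: a single qubit with H = (1/2) sigma^x and dephasing jump L = sigma^z.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)      # |+><+|
H, jumps, dt = 0.5 * sx, [(0.2, sz)], 0.01
for _ in range(1000):
    rho = rho + dt * lindblad_rhs(rho, H, jumps)             # crude Euler integration
print(round(np.trace(rho).real, 6))                          # trace is preserved (~1.0)
```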
We will now focus on a specific example in (3+1)d with a 2-foliated ℤ_2 subsystem symmetry. According to Ref. <cit.>, this system can host a clean SSPT state whose hinge mode is expected to exhibit the characteristics of the edge state of the (2+1)d Levin-Gu model <cit.>. The hinge mode can be described by
𝒮_ LG[θ,ϕ]= ∫dxdτ1/4π(∂_xθ∂_τϕ+∂_xϕ∂_τθ)
-1/8π[K_0(∂_xθ)^2+4/K_0(∂_xϕ)^2],
where K_0 is the Luttinger parameter and the ℤ_2 symmetry acts as (θ,ϕ)↦(θ+π,ϕ+π). If we consider decoherence or dissipation that breaks the ℤ_2 subsystem symmetry down to average, there is no nontrivial SASPT in this case, which implies that the hinge mode can become an SRE mixed state under decoherence or dissipation. However, small decoherence can be relevant or irrelevant depending on the value of the Luttinger parameter, and there could be an interesting transition on the hinge.
A good way to handle the dynamical phase transition is to use the Choi-Jamiołkowski isomorphism, which maps the density matrix of the (1+1)d wire to a pure state in the doubled Hilbert space, denoted by ℋ_d. Within the doubled space, we can transform the dynamical phase transition of the density matrix to the quantum phase transition of clean systems with some additional care.
§.§ Effects of measurement/decoherence
We use the Choi–Jamiołkowski isomorphism to investigate the influence of measurement/decoherence on the hinge state. We consider quantum channels, corresponding to (weak) measurement/decoherence, that break the exact ℤ_2 symmetry down to an average one. We use a renormalization group (RG) analysis in the doubled Hilbert space to study the effect of such measurement/decoherence. By the Choi–Jamiołkowski isomorphism, the free part of the hinge theory in the doubled Hilbert space is composed of two copies of the Levin-Gu edge theory (<ref>), which reads
𝒮_ LG^l[θ_l,ϕ_l]-𝒮_ LG^r[θ_r,ϕ_r].
We can view this theory as generated by a doubled space path integral in imaginary time.
At low energy, the weak measurement/decoherence which breaks the ℤ_2 symmetry to average will be mapped to a coupling between l and r degrees of freedom at a given time <cit.>. The simplest form of such coupling in low energy can be written as
S_ϕ[φ_l,φ_r]=-∫dx∑_ϵ=±μ_ϵcos(φ_l+ϵφ_r)
S_θ[ϑ_l,ϑ_r]=-∫dx∑_ϵ=±α_ϵcos(ϑ_l+ϵϑ_r)
,
where φ_l/r(x)=ϕ_l/r(x,τ=0) and ϑ_l/r(x)=θ_l/r(x,τ=0). It is easy to see that S_ϕ and S_θ break the ℤ_2,l and ℤ_2,r symmetries to the diagonal ℤ_2 symmetry acting on ℋ_l and ℋ_r identically. In practice, there will be other kinds of perturbations that break the exact symmetry down to average; the above terms are conceivably the most relevant terms that can be generated by Kraus operators.
Similar to the analysis in Ref. <cit.>, we perform a Wick rotation that exchanges the spatial coordinate x with the imaginary time coordinate τ. In the new coordinates, the weak measurements on (<ref>) can be interpreted as a local coupling at x=0 that is constant along the imaginary time direction. The renormalization group (RG) equations for S_ϕ and S_θ take the form of RG equations for (0+1)d static impurities in Luttinger liquids, which can be described as follows (ϵ=±):
dμ_ϵ/dl=(1-K_0/2)μ_ϵ
dα_ϵ/dl=(1-2/K_0)α_ϵ
dK_0/dl=0
.
As illustrated in Fig. <ref>, for K_0<2, S_ϕ is relevant, otherwise S_θ is relevant in the infrared (IR) limit. When S_ϕ is relevant (namely K_0<2), one can show that the power-law correlation of ⟨ e^iθ(r_1)e^-iθ(r_2)⟩ is spoiled. However, since the ϕ fields are commuting with the perturbation S_ϕ, the correlation function ⟨ e^iϕ(r_1)e^-iϕ(r_2)⟩ remains the same as before measurement/decoherence, namely power-law decay. As for K_0>2, the situation is opposite, namely e^iθ operators show nontrivial correlations. Therefore, in the weak measurement/decoherence regime, the nontrivial correlation on the hinge persists for any value of the Luttinger parameter K_0. However, we emphasize that the above are only analyses in the weak measurement/decoherence limit. If both terms in Eq. (<ref>) are strong (say we only take the ϵ=+1 terms), then they can pin the ϕ and θ fields and their connected correlations become trivial, which is consistent with the statement of a trivial bulk.
§.§ Effects of dissipation
To study the effect of dissipation, we employ the Keldysh path-integral formalism, in which the Lindbladian Ł is expressed as
Ł=-i(H_l-H_r)
+ ∑_αγ_α[L_α,lL_α,r^*-1/2(L_α,l^*L_α,l+L_α,r^*L_α,r)],
where H_l/r describe the unitary dynamics and L_α,l/r contain the effects of dissipation.
In our case, the unitary dynamics for the hinge mode is given by Eq. (<ref>) (with imaginary time switched back to real time). The dissipation process in the low-energy effective theory is expressed as couplings between the quantum fields on the left and right branches, which break the two individual ℤ_2 symmetries on the left and right branches to a diagonal ℤ_2 symmetry. We again consider the simplest possibility, as follows
Ł_ϕ=-∑_ϵ=±μ_ϵcos(ϕ_l+ϵϕ_r)
Ł_θ=-∑_ϵ=±α_ϵcos(θ_l+ϵθ_r)
.
Note that these terms do not come with a δ function in time, i.e., the dissipation process happens continuously in time. We can regard these terms as coming from continuous weak measurements done by the environment on the system.
It is easy to verify that the scaling dimensions of the ratios μ_-/μ_+ and α_-/α_+ vanish, namely, μ_-/μ_+ and α_-/α_+ do not flow under the renormalization group. To simplify our analysis, we choose a submanifold in the parameter space with μ_-=α_-=0. It is easy to see that this submanifold is closed under RG flow. The RG equations for the coupling constants μ_+, α_+, and the Luttinger parameter K_0 are given by <cit.>
dμ_+/dl=(2-K_0/2)μ_+
dα_+/dl=(2-2/K_0)α_+
dK_0/dl=-1/4α_+^2+1/16K_0^2μ_+^2
.
Notice that the RG flow for the Luttinger parameter differs by a sign from the usual RG equation in clean systems. This is due to the non-Hermitian nature of the dissipation terms. It is hard to determine the full flow of the RG equations. Due to their perturbative nature, we will treat K_0 as a free parameter and only look at the relevance of the μ_+ and α_+ terms. There are three different regimes, as follows:
* K_0<1: Ł_ϕ is relevant while Ł_θ is irrelevant. This region corresponds to a mixed hinge state with power-law correlations of ⟨ e^iϕ_l/r(r_1)e^-iϕ_l/r(r_2)⟩ (because the dissipation commutes with the operator e^-iϕ_l/r(r)) and short-range correlations of ⟨ e^iθ_l/r(r_1)e^-iθ_l/r(r_2)⟩.
* 1<K_0<4: both Ł_ϕ and Ł_θ are relevant. This region corresponds to a mixed state with short-range correlation for all operators, i.e., a trivial state.
* K_0>4: Ł_θ is relevant while Ł_ϕ is irrelevant. This region corresponds to a mixed hinge state with the power-law correlation of ⟨ e^iθ_l/r(r_1)e^-iθ_l/r(r_2)⟩ and short-range correlation of ⟨ e^iϕ_l/r(r_1)e^-iϕ_l/r(r_2)⟩.
Therefore, in the intermediate regime of the phase diagram, the power-law correlations of the hinge mode are destroyed by the dissipation.
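To get a rough feel for these three regimes, one can integrate the RG equations (<ref>) numerically; the following Euler-step sketch is our own illustration, with arbitrary small initial couplings, and is only meaningful while the couplings stay in the perturbative window.

```python
def dissipative_rg_flow(K0, mu=0.01, alpha=0.01, dl=1e-3, steps=4000):
    """Euler integration of the flow of (mu_+, alpha_+, K_0) quoted above."""
    for _ in range(steps):
        dmu = (2 - K0 / 2) * mu
        dalpha = (2 - 2 / K0) * alpha
        dK0 = -0.25 * alpha**2 + (1 / 16) * K0**2 * mu**2
        mu, alpha, K0 = mu + dl * dmu, alpha + dl * dalpha, K0 + dl * dK0
        if max(abs(mu), abs(alpha)) > 1:     # stop once a coupling leaves the perturbative window
            break
    return mu, alpha, K0

for K0 in (0.5, 2.0, 8.0):                   # representative points in the three regimes
    mu, alpha, _ = dissipative_rg_flow(K0)
    print(f"K0 = {K0}: mu_+ -> {mu:.3g}, alpha_+ -> {alpha:.3g}")
```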
§ CONCLUSION AND OUTLOOK
In this work, we investigate the construction and classification of fractonic higher-order topological phases in open quantum systems with exact and average symmetries. Our analysis focuses on scenarios of (3+1)d systems with a 2-foliated subsystem symmetry. In particular, we demonstrate two types of intrinsic SASPT phases that cannot exist in the pure state. For type-I intrinsic SASPT phases, a trivial atomic insulator exists in the pure state; however, the existence of the nontrivial SPT state requires the help of quantum decoherence. For type-II intrinsic SASPT, in some sense, even the existence of the trivial atomic insulator requires decoherence. Throughout the discussion, the notion of average anomaly (which describes the boundary of an average SPT state) plays an important role, and we use this idea to provide a general classification of these SASPT states. It is worth noting that the average anomaly-free condition for SASPT phases deviates slightly from that of SSPT phases in closed (clean) systems. Specifically, we consider all elements in H^d+1[G_s^2, U(1)] to be trivial since average anomalies do not arise in systems with average symmetry only <cit.>. We also study the effects of measurement or dissipation on the hinge modes of fractonic higher-order topological phases. Examples of interesting dynamical phase transitions are discussed.
The investigation of symmetry and topology in the presence of decoherence and dissipation holds significant importance and interest in the interdisciplinary realm of quantum many-body physics and quantum information. The insights gained from studying the modification of consistency conditions for average symmetry in dissipative foliated systems can be extended to systems with higher-form symmetries and intrinsic topological orders. Furthermore, in addition to decoherence and dissipation, the notion of average symmetry is also very relevant to disordered systems. However, fractonic phases with disorder could be much richer in terms of the dynamical properties of their excitations. On the one hand, disorder can generate mobility for the fractonic excitations, since it breaks the subsystem symmetries. On the other hand, with strong disorder, systems are also expected to be localized near the ground state. These competing effects indicate that phase diagrams of fractonic phases as a function of symmetry-breaking disorder strength can be very complex and are worth a careful study in the future.
We thank Ruochen Ma, Yichen Xu, and Chong Wang for stimulating discussions and Meng Cheng for a previous collaboration. JHZ and ZB are supported by a startup fund from the Pennsylvania State University and thank the hospitality of the Kavli Institute for Theoretical Physics, which is partially supported by the National Science Foundation under Grant No. NSF PHY-1748958. JHZ appreciates the hospitality of Zheng-Cheng Gu at the Chinese University of Hong Kong, where this work is partially finished; ZB is also grateful to the hospitality of the Institute for Advanced Study at Tsinghua University, where this work is partially completed. KD and SY are supported by the National Natural Science Foundation of China (NSFC) (Grant No. 12174214 and No. 92065205), the National Key R&D Program of China (Grant No. 2018YFA0306504), the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302100), and the Tsinghua University Initiative Scientific Research Program.
§ A BRIEF REVIEW OF ASPT CLASSIFICATION
The formal classification of ASPT phases with global symmetry can be done through the generalized spectral sequence method. Consider the general group extension,
1→ K→G̃→ G→1,
with G being the average symmetry group and K being the exact symmetry group (for fermionic systems, the fermion parity ℤ_2^f should be included as a subgroup of K). Mathematically, the consistency conditions for decorated domain walls with symmetry G̃ are consolidated into an Atiyah-Hirzebruch (AH) spectral sequence. All possible decorated domain wall patterns are summarized as the so-called E_2 page of this spectral sequence, and it is given by:
⊕_p+q=d+1E_2^p,q=⊕_p+q=d+1H^p[G, h^q(K)].
In the above equation, h^q(K) represents the classification of K-symmetric invertible topological phases in q dimensions, which are decorated on the defects of G symmetry with a codimension of q. It is important to note the following modifications, which are distinct from the ordinary spectral sequence for classifying the SPT phases in the clean systems:
* h^0(K)=0 because there is no nontrivial ASPT state if there is no exact symmetry.
* Bosonic invertible topological phases should be excluded from h^q(K) (for example, (2+1)d Kitaev's E_8 state is excluded) – this is because such states can be prepared by a finite-depth quantum channel from a trivial product state <cit.>.
As mentioned above, not all domain wall configurations can give rise to nontrivial SPT states, as certain consistency conditions need to be satisfied during the construction of an SPT wave function in clean systems. Within the framework of the AH spectral sequence, the consistency conditions are captured by the differentials denoted as d_r, which map elements from E_2^p,q to decorated domain wall configurations in E_2^p+r,q-r+1 in one higher dimension, namely
d_r: E_2^p,q→ E_2^p+r,q-r+1.
These consistency conditions ensure that the symmetry defect of G symmetry can quantum fluctuate in a wavefunction while keeping SRE properties <cit.>. In particular, the final layer of the differential d_q+1 ensures that no Berry phase is accumulated after a closed path of continuous domain wall deformation. For open quantum systems, the Berry phases of different decorated domain wall patterns are no longer well-defined. Therefore, we abandon the Berry phase consistency condition for open quantum systems. Mathematically, we delete the last layer of obstruction d_q+1 in Eq. (<ref>) when calculating the classification of ASPT phases in open quantum systems.
For more details about the Atiyah-Hirzebruch (AH) and Lyndon-Hochschild-Serre (LHS) spectral sequences, see Refs. <cit.>. In the following, we present a simple example to sketch the spectral sequence of calculating the ASPT classification.
§.§ ℤ_2^T×ℤ_2^f fSPT phases in (2+1)d
In this subsection, we give an example of the classification of the (2+1)d fSPT phases with ℤ_2^T×ℤ_2^f symmetry via the AH spectral sequence <cit.>. We list all possible classification data as elements of the E_2 page of the AH spectral sequence:
* n_1∈ E_2^1,2=H^1[ℤ_2^T, h^2(ℤ_2^f)]=ℤ_2: Kitaev's Majorana chain decoration on the codimension-1 ℤ_2^T domain wall, where h^2(ℤ_2^f)=ℤ_2 is the classification of the (1+1)d Kitaev Majorana chain.
* n_2∈ E_2^2,1=H^2[ℤ_2^T, h^1(ℤ_2^f)]=ℤ_2: Complex fermion decoration on the codimension-2 ℤ_2^T domain wall junction, where h^1(ℤ_2^f)=ℤ_2 is the classification of complex fermion parity.
* ν_3∈ E_2^3,0=H^3[ℤ_2^T, U(1)]=ℤ_1.
And the differentials are defined as
d_2n_1=s_1∪ n_1∪ n_1
d_3n_1=0
d_2n_2=𝒪_4[n_2]
,
where s_1 characterizes the anti-unitary elements, such that s_1(g)=1 if g is anti-unitary and s_1(g)=0 if g is unitary,
and 𝒪_4[n_2] is the obstruction function of the Berry phase consistency, as a function of n_2 <cit.>. In clean systems, all possible decorated domain wall patterns are obstructed, leading to a trivial classification. In open quantum systems with an average ℤ_2^T symmetry and exact ℤ_2^f symmetry, the Berry phase consistency would be lifted because of the decoherence; as a consequence, the nontrivial n_2 data corresponds to an intrinsic ASPT state. On the other hand, n_1 is still obstructed. Therefore, the eventual classification for ASPT in this symmetry class is ℤ_2.
§ CHOI–JAMIOŁKOWSKI ISOMORPHISM
We describe some more details on the Choi–Jamiołkowski isomorphism and the mapping between ASPT density matrices and SPT states in the doubled space. An ASPT density matrix, denoted by
ρ=∑_jp_j|ψ_j⟩⟨ψ_j|,
defined on the Hilbert space ℋ will be mapped to the following Choi state in the doubled Hilbert space ℋ_d=ℋ_l⊗ℋ_r under the Choi–Jamiołkowski isomorphism,
|ρ⟩⟩=1/√(dim(ρ))∑_jp_j|ψ_j⟩⊗|ψ_j^*⟩,
where both the left Hilbert space ℋ_l and the right Hilbert space ℋ_r are identical to the physical Hilbert space ℋ. The doubled Hilbert space comes with a larger symmetry group.
The exact symmetry K is “doubled" in the doubled space to K_l× K_r symmetry, namely
U_k,l|ρ⟩⟩=1/√(dim(ρ))∑_jp_j(U_k|ψ_j⟩)⊗|ψ_j^*⟩
U_k,r|ρ⟩⟩=1/√(dim(ρ))∑_jp_j|ψ_j⟩⊗(U_k^*|ψ_j^*⟩).
The average symmetry G is mapped to the G_d symmetry in ℋ_d, such that
U_g,d|ρ⟩⟩=1/√(dim(ρ))∑_jp_j(U_g|ψ_j⟩)⊗(U_g^*|ψ_j^*⟩).
There is also an anti-unitary “SWAP^*” symmetry as
SWAP^*≡𝒞∘SWAP,
where 𝒞 is the complex conjugation, and SWAP symmetry exchanges _̋l and _̋r.
Therefore, a G̃-symmetric [cf. Eq. (<ref>)] density matrix ρ is mapped to a G̃_d⋊SWAP^*-symmetric quantum state |ρ⟩⟩, where G̃_d is characterized by
1→ K_l× K_r→G̃_d→ G_d→1.
Measurements or decoherence can, in general, be described by local quantum channels ℰ,
ℰ[ρ]=ℰ_1∘ℰ_2∘⋯∘ℰ_N[ρ]=∑_j=1,k^NK_j,kρ K_j,k^†,
where K_j,k's are the local Kraus operators on site-j, satisfying the condition
∑_kK_j,k^† K_j,k=1, ∀ j=1,⋯,N.
In the doubled Hilbert space ℋ_d, the quantum channel (<ref>) in ℋ is mapped to the following (in general non-unitary) operator in ℋ_d,
ℰ_j=∑_kK_j,k,l⊗ K_j,k,r^*.
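As a small self-contained numerical check of this correspondence (our own illustration, with a random channel as placeholder data), one can vectorize ρ row by row, so that K ρ K^† maps to (K⊗K^*) acting on the Choi vector, and verify that the doubled-space operator above reproduces the action of the Kraus channel; the overall 1/√dim normalization is irrelevant for this check.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_kraus = 3, 2

# Random density matrix (placeholder data).
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = M @ M.conj().T
rho /= np.trace(rho)

# Random Kraus operators built from an isometry, so that sum_k K_k^+ K_k = 1.
G = rng.normal(size=(n_kraus * d, d)) + 1j * rng.normal(size=(n_kraus * d, d))
V, _ = np.linalg.qr(G)                                   # V^+ V = 1_d
kraus = [V[k * d:(k + 1) * d, :] for k in range(n_kraus)]

# Physical-space channel E[rho] = sum_k K rho K^+.
E_rho = sum(K @ rho @ K.conj().T for K in kraus)

# Doubled-space side: |rho>> = row-major vectorization, channel -> sum_k K (x) K^*.
choi_vec = rho.reshape(-1)
doubled_op = sum(np.kron(K, K.conj()) for K in kraus)

assert np.allclose(doubled_op @ choi_vec, E_rho.reshape(-1))
print("Kraus channel and doubled-space operator agree")
```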
Physically, the quantum channels in the doubled space act as interactions between the left and right spaces. Specifically, the interactions are symmetric under K_l×K_r but break G_l×G_r down to the diagonal G_d symmetry.

With the Choi–Jamiołkowski isomorphism, we are ready to formulate the description of average anomalies as the 't Hooft anomalies of the enlarged symmetry in the doubled Hilbert space ℋ_d. Suppose the density matrix in Eq. <ref> describes an ASPT state; then each component of the density matrix is described by a certain consistent domain wall decoration pattern, which is labeled by ν_p+q(g_1,⋯,g_p;k_p+1,⋯,k_p+q). It can be shown that the Choi state |ρ⟩⟩ of the density matrix ρ is an SPT wavefunction in ℋ_d, with the following topological invariant
ω_p+q=ν_p+q(g_1,⋯,g_p;k_p+1,l,⋯,k_p+q,l)/ν_p+q(g_1,⋯,g_p;k_p+1,r,⋯,k_p+q,r).
This topological invariant can be demonstrated to be a cocycle in ℋ^p[G_d, h^q(K_l×K_r)], where k_i,l∈ K_l, k_i,r∈ K_r, and g_j∈ G_d. Hence, it represents an SPT wavefunction in the doubled space.
In particular, this mapping also encompasses the intrinsic ASPTs – we can demonstrate that the density matrix of an intrinsic ASPT state will also be mapped to an SPT state in the doubled Hilbert space. One can understand this by the following argument. Consider the topological invariant of a G̃-symmetric intrinsic ASPT state ν_p+q∈ E_2^p,q, which under the differential d_q+1 is mapped to a nontrivial element ν_p+q+1∈ E_2^p+q+1,0 in one higher dimension – meaning this particular decorated domain wall pattern is obstructed in a clean system. This obstruction is described by a nontrivial cocycle ν_p+q+1∈ H^p+q+1[G, U(1)], which indicates an inconsistent Berry phase accumulated along a closed path of deformation of G domain walls. We know such a decoration pattern can be consistent in the mixed state, and it corresponds to an intrinsic ASPT state. On the other hand, for the corresponding Choi state, the total Berry phase would be ν_p+q+1ν_p+q+1^*, which can be shown to automatically fall into the trivial element in H^p+q+1[G, U(1)]. Hence the Choi state |ρ⟩⟩ is obstruction-free in the doubled Hilbert space. One can also show that the cocycle in Eq. (<ref>) represents a nontrivial SPT in the doubled space.
§ COUPLED-WIRE MODEL OF TYPE-II INTRINSIC SASPT
In this section, we present a conjecture for a coupled-wire model in the doubled Hilbert space for the type-II intrinsic SASPT with exact subsystem ℤ_2^f symmetries and average time-reversal symmetry ℤ_2^T given in Sec. <ref>.
In the wire construction, we use building blocks which are the edges of an intrinsic ASPT with exact ℤ_2^f and average ℤ_2^T symmetry. Unfortunately, we currently do not have a first-principles way to determine the boundary theory of an intrinsic ASPT. However, we know that certain requirements in the doubled space are needed for this construction. First of all, the exact fermion parity symmetry for the left and right spaces must factorize. Second, since the theory is supposed to be the boundary of an intrinsic ASPT, one should not be able to factorize the time-reversal action into actions on each individual subspace; otherwise, the mixed state would have a clean limit and hence not be intrinsic. Of course, the average time-reversal symmetry should commute with the swap symmetry. The final requirement is that the theory in the doubled space is anomalous, given these symmetry assignments. With these requirements, there might not be a unique choice of edge theory. Nonetheless, in the following, we give one example of a construction that satisfies the above requirements. We conjecture that this theory can be an edge theory of the intrinsic ASPT with exact ℤ_2^f and average ℤ_2^T symmetry.
One such theory in the doubled space is a Luttinger liquid with a 4-component boson field, with the K-matrix K=σ_l^z⊕σ_r^z, where the two blocks correspond to the left and right spaces. The fermion parities in the Hilbert spaces ℋ_l and ℋ_r are uniquely given by
W^P_f^l=1_4×4, δϕ^P_f^l=π(1,1,0,0)^T
W^P_f^r=1_4×4, δϕ^P_f^r=π(0,0,1,1)^T
,
and the SWAP^* symmetry is uniquely defined as
W^S=[ 0 σ^x; σ^x 0 ], δϕ^S=0.
The time-reversal symmetry transformation is tricky. We need a transformation matrix that is not factorizable into the left and right spaces. The transformation matrix should commute with W^S. Finally, the time-reversal symmetry should have an anomaly that manifests the decorated domain wall picture (i.e., a time-reversal domain wall decorated by two complex fermions, one from the left Hilbert space and the other from the right Hilbert space). By a brute-force search, we find the following time-reversal action that satisfies all these requirements,
W^𝒯=[ 0 1 -1 -1; 1 0 1 1; 1 1 0 1; -1 -1 1 0 ], δϕ^𝒯=([ 0; π; π; 0 ]).
As a first check, W^𝒯 commutes with the two fermion parities and the swap symmetry, and 𝒯^2=1, which is consistent with our symmetry action assignment. The tricky part is to show the mixed anomaly between the time-reversal symmetry and the two fermion parities. First, one can show that there is no gapping term one can turn on to get rid of these modes without breaking the symmetry either explicitly or spontaneously. In particular, any term of the following form is not compatible with the time-reversal symmetry (<ref>):
cos(aϕ_1+bϕ_2+cϕ_3+dϕ_4+φ), a,b,c,d∈ℤ, φ∈ [0,2π).
This indicates that indeed these symmetries are anomalous. But to more precisely demonstrate the anomaly, one way to do it is to explicitly break the time-reversal symmetry by some order parameter and show that there are nontrivial fermion zero modes (one from the left sector and one from the right sector) localized at the domain wall of this order parameter. To that end, we can consider the following time-reversal order parameters,
H_TB=m(x)cos(ϕ_1+ϕ_2)+m(x)cos(ϕ_3+ϕ_4).
It is easy to see that (<ref>) is compatible with the SWAP^* symmetry (<ref>) and the two fermion parities; however, it explicitly breaks the time-reversal symmetry, namely
𝒯: cos(ϕ_1+ϕ_2)↦ -cos(ϕ_1+ϕ_2), cos(ϕ_3+ϕ_4)↦ -cos(ϕ_3+ϕ_4).
We can see that these backscattering terms are Cooper-pair terms by re-fermionization. If we make a domain wall configuration of m, a standard calculation explicitly shows that, at the time-reversal domain wall, each sector hosts exactly one complex fermion zero mode. Therefore, the symmetry assignment indeed carries the anomaly we want to study. So far we have constructed a reasonable conjecture for the theory of each building block in the doubled space. We note that this construction might not be unique. However, from the ℤ_2 classification of the ASPT state, one can infer that all possible constructions of the nontrivial state are equivalent in the sense of the average anomaly.
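The matrix-level consistency checks quoted above, namely (W^𝒯)^2=1 together with (1-W^𝒯)δϕ^𝒯=0 (mod 2π), and the commutation of W^𝒯 with W^S and the fermion-parity W-matrices, can be verified directly; the following numpy sketch (our own check of the stated data) does exactly this.

```python
import numpy as np

W_T = np.array([[ 0,  1, -1, -1],
                [ 1,  0,  1,  1],
                [ 1,  1,  0,  1],
                [-1, -1,  1,  0]])
dphi_T = np.pi * np.array([0, 1, 1, 0])

sigma_x = np.array([[0, 1], [1, 0]])
zero = np.zeros((2, 2), dtype=int)
W_S = np.block([[zero, sigma_x], [sigma_x, zero]])       # swap symmetry
W_Pf = np.eye(4, dtype=int)                              # both fermion parities have identity W-matrix

# T^2 = 1: (W^T)^2 = 1 and (1 - W^T) dphi^T = 0 (mod 2 pi).
assert np.array_equal(W_T @ W_T, np.eye(4, dtype=int))
assert np.allclose(((np.eye(4) - W_T) @ dphi_T) % (2 * np.pi), 0)

# W^T commutes with the swap and fermion-parity W-matrices.
assert np.array_equal(W_T @ W_S, W_S @ W_T)
assert np.array_equal(W_T @ W_Pf, W_Pf @ W_T)
print("time-reversal consistency checks pass")
```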
Now we can come to the question of constructing an SASPT state using these building blocks. To that end, as usual, we should introduce symmetric gapping terms in each plaquette to obtain a symmetric gapped bulk; the only difference is that the construction is now carried out in the doubled space. In each plaquette, there are 16 bosonic modes in the doubled Hilbert space whose K-matrix is given by K=K⊕-K⊕K⊕-K. We can find 8 null vectors that preserve the average time reversal, the doubled subsystem fermion parities ℤ_2,n^f, and the swap symmetry. These symmetric Higgs terms read
ℒ_Higgs= cos(ϕ_1^+ϕ_4^+ϕ_1^+ϕ_4^)
+cos(ϕ_1^+ϕ_4^+ϕ_1^+ϕ_4^)
+cos(ϕ_1^+ϕ_4^+ϕ_1^+ϕ_4^)
+cos(ϕ_1^+ϕ_4^+ϕ_1^+ϕ_4^)
+cos(ϕ_2^-ϕ_3^+ϕ_2^-ϕ_3^)
+cos(ϕ_2^-ϕ_3^+ϕ_2^-ϕ_4^)
+cos(ϕ_2^-ϕ_3^+ϕ_2^-ϕ_3^)
+cos(ϕ_2^-ϕ_3^+ϕ_2^-ϕ_3^),
where the subscript Arabic numerals label the different components of a specific quantum wire, and the superscript Roman numerals label the different quantum wires within each plaquette.
A subtle point that needs additional care is that, in the context of decoherence, all Higgs terms in the doubled space should be able to be mapped to Kraus operators of some quantum channels in the physical Hilbert space. The Higgs terms chosen here satisfy this requirement. Considering the first term in Eq. (<ref>), it can be mapped to the following Kraus operators in the physical Hilbert space,
K_1=cos(φ_1^+φ_1^), K_2=sin(φ_1^+φ_1^),
where φ_1 on the corresponding wires is mapped to ϕ_1 and ϕ_4 of those wires in the doubled Hilbert space by the Choi–Jamiołkowski isomorphism. Similarly, one can check that all other terms in Eq. (<ref>) can be mapped back to Kraus operators in the physical Hilbert space. Therefore, we have obtained an explicit wire construction for a type-II intrinsic SASPT in the doubled space.
[Preskill(2018)] J. Preskill, "Quantum Computing in the NISQ era and beyond," Quantum 2, 79 (2018).
[Bernien et al.(2017)] H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, "Probing many-body dynamics on a 51-atom quantum simulator," Nature 551, 579–584 (2017).
[Iqbal et al.(2023)] M. Iqbal, N. Tantivasadakarn, T. M. Gatterman, J. A. Gerber, K. Gilmore, D. Gresh, A. Hankin, N. Hewitt, C. V. Horst, M. Matheny, T. Mengle, B. Neyenhuis, A. Vishwanath, M. Foss-Feig, R. Verresen, and H. Dreyer, "Topological Order from Measurements and Feed-Forward on a Trapped Ion Quantum Computer," arXiv:2302.01917 (2023).
[Gu and Wen(2009)] Z.-C. Gu and X.-G. Wen, "Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order," Phys. Rev. B 80, 155131 (2009).
[Chen et al.(2012)] X. Chen, Z.-C. Gu, Z.-X. Liu, and X.-G. Wen, "Symmetry-Protected Topological Orders in Interacting Bosonic Systems," Science 338, 1604–1606 (2012).
[Chen et al.(2014)] X. Chen, Y.-M. Lu, and A. Vishwanath, "Symmetry-protected topological phases from decorated domain walls," Nat. Commun. 5 (2014).
[Chen et al.(2013)] X. Chen, Z.-C. Gu, Z.-X. Liu, and X.-G. Wen, "Symmetry protected topological orders and the group cohomology of their symmetry group," Phys. Rev. B 87, 155114 (2013).
[Senthil(2015)] T. Senthil, "Symmetry-Protected Topological Phases of Quantum Matter," Annu. Rev. Condens. Matter Phys. 6, 299–324 (2015).
[Vishwanath and Senthil(2013)] A. Vishwanath and T. Senthil, "Physics of Three-Dimensional Bosonic Topological Insulators: Surface-Deconfined Criticality and Quantized Magnetoelectric Effect," Phys. Rev. X 3, 011016 (2013).
[Wang et al.(2013)] C. Wang, A. C. Potter, and T. Senthil, "Gapped symmetry preserving surface state for the electron topological insulator," Phys. Rev. B 88, 115137 (2013).
[Wang et al.(2014)] C. Wang, A. C. Potter, and T. Senthil, "Classification of Interacting Electronic Topological Insulators in Three Dimensions," Science 343, 629–631 (2014).
[Briegel and Raussendorf(2001)] H. J. Briegel and R. Raussendorf, "Persistent Entanglement in Arrays of Interacting Particles," Phys. Rev. Lett. 86, 910–913 (2001).
[Raussendorf et al.(2005)] R. Raussendorf, S. Bravyi, and J. Harrington, "Long-range quantum entanglement in noisy cluster states," Phys. Rev. A 71, 062313 (2005).
[Briegel et al.(2009)] H. J. Briegel, D. E. Browne, W. Dür, R. Raussendorf, and M. Van den Nest, "Measurement-based quantum computation," Nat. Phys. 5, 19–26 (2009).
[Aguado et al.(2008)] M. Aguado, G. K. Brennen, F. Verstraete, and J. I. Cirac, "Creation, Manipulation, and Detection of Abelian and Non-Abelian Anyons in Optical Lattices," Phys. Rev. Lett. 101, 260501 (2008).
[Bolt et al.(2016)] A. Bolt, G. Duclos-Cianci, D. Poulin, and T. M. Stace, "Foliated Quantum Error-Correcting Codes," Phys. Rev. Lett. 117, 070501 (2016).
[Stephen et al.(2017)] D. T. Stephen, D.-S. Wang, A. Prakash, T.-C. Wei, and R. Raussendorf, "Computational Power of Symmetry-Protected Topological Phases," Phys. Rev. Lett. 119, 010504 (2017).
[Raussendorf et al.(2019)] R. Raussendorf, C. Okay, D.-S. Wang, D. T. Stephen, and H. Poulsen Nautrup, "Computationally Universal Phase of Quantum Matter," Phys. Rev. Lett. 122, 090501 (2019).
[Lu et al.(2022)] T.-C. Lu, L. A. Lessa, I. H. Kim, and T. H. Hsieh, "Measurement as a shortcut to long-range entangled quantum matter," arXiv:2206.13527 (2022).
[Lee et al.(2022a)] J. Y. Lee, W. Ji, Z. Bi, and M. P. A. Fisher, "Decoding Measurement-Prepared Quantum Phases and Transitions: from Ising model to gauge theory, and beyond," arXiv:2208.11699 (2022).
[Zhu et al.(2022)] G.-Y. Zhu, N. Tantivasadakarn, A. Vishwanath, S. Trebst, and R. Verresen, "Nishimori's cat: stable long-range entanglement from finite-depth unitaries and weak measurements," arXiv:2208.11136 (2022).
[Lu et al.(2023)] T.-C. Lu, Z. Zhang, S. Vijay, and T. H. Hsieh, "Mixed-state long-range order and criticality from measurement and feedback," arXiv:2303.15507 (2023).
[Guo et al.(2023)] Y. Guo, J.-H. Zhang, Z. Bi, and S. Yang, "Triggering Boundary Phase Transitions through Bulk Measurements in 2D Cluster States," arXiv:2305.14231 (2023).
[Devakul and Williamson(2018)] T. Devakul and D. J. Williamson, "Universal …
quantum computation using fractal symmetry-protected cluster phases, 10.1103/PhysRevA.98.022332journaljournalPhysical Review A volume98, pages022332 (year2018)NoStop
[Stephen et al.(2019)Stephen, Nautrup, Bermejo-Vega,
Eisert, and Raussendorf]Stephen_2019
authorauthorDavid T.Stephen, authorHendrik PoulsenNautrup, authorJuaniBermejo-Vega, authorJensEisert, and authorRobertRaussendorf, titletitleSubsystem symmetries, quantum cellular automata, and computational phases of
quantum matter, 10.22331/q-2019-05-20-142journaljournalQuantum volume3,pages142 (year2019)NoStop
[Daniel et al.(2020)Daniel,
Alexander, and Miyake]Daniel_2020
authorauthorAustin K.Daniel, authorRafael N.Alexander, and authorAkimasaMiyake, titletitleComputational universality of symmetry-protected topologically ordered
cluster phases on 2D Archimedean lattices,
10.22331/q-2020-02-10-228journaljournalQuantum volume4, pages228
(year2020)NoStop
[Roberts and Bartlett(2020)]Roberts_2020
authorauthorSam Roberts and authorStephen D. Bartlett, titletitleSymmetry-Protected Self-Correcting Quantum Memories,
10.1103/PhysRevX.10.031041journaljournalPhysical Review X volume10, pages031041 (year2020)NoStop
[Vijay et al.(2016)Vijay,
Haah, and Fu]Vijay_2016
authorauthorSagar Vijay, authorJeongwan Haah,
and authorLiang Fu,titletitleFracton topological order,
generalized lattice gauge theory, and duality,
10.1103/PhysRevB.94.235157journaljournalPhysical Review B volume94, pages235157 (year2016)NoStop
[Nandkishore and Hermele(2019)]Nandkishore_2019
authorauthorRahul M.Nandkishore and authorMichaelHermele, titletitleFractons, 10.1146/annurev-conmatphys-031218-013604journaljournalAnnual Review of Condensed Matter
Physics volume10, pages295–313
(year2019)NoStop
[Pretko et al.(2020)Pretko,
Chen, and You]Pretko_2020
authorauthorMichael Pretko, authorXie Chen, and authorYizhi You,titletitleFracton phases of matter, 10.1142/S0217751X20300033journaljournalInternational Journal of Modern Physics A volume35, pages2030003 (year2020)NoStop
[Pai and Hermele(2019)]Pai_2019
authorauthorShriya Pai and authorMichael Hermele, titletitleFracton fusion
and statistics, 10.1103/PhysRevB.100.195136journaljournalPhysical Review B volume100, pages195136 (year2019)NoStop
[You et al.(2018)You,
Devakul, Burnell, and Sondhi]PhysRevB.98.035112
authorauthorYizhi You, authorTrithep Devakul,
authorF. J. Burnell, andauthorS. L. Sondhi, titletitleSubsystem symmetry protected
topological order, 10.1103/PhysRevB.98.035112journaljournalPhys. Rev. B volume98, pages035112 (year2018)NoStop
[Devakul et al.(2019)Devakul, You, Burnell, and Sondhi]Devakul_2019
authorauthorTrithep Devakul, authorYizhi You,
authorF. J. Burnell, andauthorShivaji Sondhi,titletitleFractal Symmetric Phases of
Matter, 10.21468/SciPostPhys.6.1.007journaljournalSciPost Physics volume6 (year2019), 10.21468/SciPostPhys.6.1.007NoStop
[Devakul et al.(2018)Devakul, Williamson, and You]PhysRevB.98.235121
authorauthorTrithep Devakul, authorDominic J. Williamson, and authorYizhi You, titletitleClassification of
subsystem symmetry-protected topological phases,
10.1103/PhysRevB.98.235121journaljournalPhys.
Rev. B volume98, pages235121
(year2018)NoStop
[Devakul et al.(2020)Devakul, Shirley, and Wang]PhysRevResearch.2.012059
authorauthorTrithep Devakul, authorWilbur Shirley, and authorJuven Wang, titletitleStrong planar
subsystem symmetry-protected topological phases and their dual fracton
orders, 10.1103/PhysRevResearch.2.012059journaljournalPhys. Rev. Res. volume2, pages012059 (year2020)NoStop
[Williamson et al.(2019)Williamson, Bi, and Cheng]Williamson_2019
authorauthorDominic J.Williamson, authorZhenBi, and authorMengCheng, titletitleFractonic matter in symmetry-enriched U(1) gauge theory, 10.1103/PhysRevB.100.125150journaljournalPhysical Review B volume100, pages125150 (year2019)NoStop
[May-Mann and Hughes(2019)]May_Mann_2019
authorauthorJulian May-Mann and authorTaylor L. Hughes, titletitleCorner modes and
ground-state degeneracy in models with gaugelike subsystem symmetries, 10.1103/PhysRevB.100.165108journaljournalPhysical Review B volume100, pages165108 (year2019)NoStop
[Stephen et al.(2020)Stephen, Garre-Rubio, Dua, andWilliamson]Stephen_2020
authorauthorDavid T.Stephen, authorJoséGarre-Rubio, authorArpitDua, and authorDominic J.Williamson, titletitleSubsystem symmetry enriched topological order in three dimensions, 10.1103/PhysRevResearch.2.033331journaljournalPhysical Review Research volume2,pages033331 (year2020)NoStop
[May-Mann and Hughes(2021)]May_Mann_2021
authorauthorJulian May-Mann and authorTaylor L. Hughes, titletitleTopological
dipole conserving insulators and multipolar responses,
10.1103/PhysRevB.104.085136journaljournalPhysical Review B volume104, pages085136 (year2021)NoStop
[May-Mann et al.(2022)May-Mann, You, Hughes, and Bi]PhysRevB.105.245122
authorauthorJulian May-Mann, authorYizhi You,
authorTaylor L. Hughes, andauthorZhen Bi, titletitleInteraction-enabled fractonic
higher-order topological phases,
10.1103/PhysRevB.105.245122journaljournalPhys. Rev. B volume105, pages245122 (year2022)NoStop
[Zhang et al.(2022)Zhang, Cheng, and Bi]2022arXiv221015596Z
authorauthorJian-HaoZhang, authorMengCheng, and authorZhenBi, titletitleClassification and construction of interacting fractonic higher-order
topological phases, @noop journaljournalarXiv:2210.15596 (year2022), http://arxiv.org/abs/2210.15596arXiv:2210.15596 [cond-mat.str-el]NoStop
[Fu(2011)]PhysRevLett.106.106802
authorauthorLiang Fu, titletitleTopological
crystalline insulators, 10.1103/PhysRevLett.106.106802journaljournalPhys. Rev. Lett. volume106, pages106802 (year2011)NoStop
[Hsieh et al.(2012)Hsieh,
Lin, Liu, Duan, Bansil, and Fu]Hsieh_2012
authorauthorTimothy H.Hsieh, authorHsin Lin, authorJunwei Liu,
authorWenhui Duan, authorArun Bansil, and authorLiang Fu, titletitleTopological crystalline insulators in the SnTe
material class, 10.1038/ncomms1969journaljournalNature Communications volume3 (year2012), 10.1038/ncomms1969NoStop
[Isobe and Fu(2015)]PhysRevB.92.081304
authorauthorHiroki Isobe and authorLiang Fu,titletitleTheory of interacting
topological crystalline insulators,
10.1103/PhysRevB.92.081304journaljournalPhys.
Rev. B volume92, pages081304
(year2015)NoStop
[Tang et al.(2019)Tang,
Po, Vishwanath, and Wan]Tang_2019
authorauthorFeng Tang, authorHoi Chun Po,
authorAshvin Vishwanath, andauthorXiangang Wan, titletitleComprehensive search for topological
materials using symmetry indicators,
10.1038/s41586-019-0937-5journaljournalNature volume566, pages486–489
(year2019)NoStop
[Kruthoff et al.(2017)Kruthoff, de Boer, van Wezel, Kane, and Slager]Kruthoff_2017
authorauthorJorrit Kruthoff, authorJan de Boer,
authorJasper van Wezel,
authorCharles L. Kane, andauthorRobert-Jan Slager,titletitleTopological Classification
of Crystalline Insulators through Band Structure Combinatorics, 10.1103/PhysRevX.7.041069journaljournalPhysical Review X volume7, pages041069 (year2017)NoStop
[Slager et al.(2012)Slager,
Mesaros, Juričić, andZaanen]Slager_2012
authorauthorRobert-JanSlager, authorAndrejMesaros, authorVladimirJuričić, and authorJan Zaanen, titletitleThe space group classification of topological band-insulators, 10.1038/nphys2513journaljournalNature Physics volume9, pages98–102 (year2012)NoStop
[Bultinck et al.(2019)Bultinck, Bernevig, and Zaletel]Bultinck_2019
authorauthorNick Bultinck, authorB. Andrei Bernevig, and authorMichael P.Zaletel, titletitleThree-dimensional superconductors with hybrid higher-order topology, 10.1103/PhysRevB.99.125149journaljournalPhysical Review B volume99, pages125149 (year2019)NoStop
[Laubscher et al.(2019)Laubscher, Loss, and Klinovaja]Laubscher_2019
authorauthorKatharinaLaubscher, authorDanielLoss, and authorJelenaKlinovaja, titletitleFractional topological superconductivity and parafermion corner states, 10.1103/PhysRevResearch.1.032017journaljournalPhysical Review Research volume1, pages032017 (year2019)NoStop
[Thorngren and Else(2018)]Thorngren_2018
authorauthorRyan Thorngren and authorDominic V.Else, titletitleGauging Spatial Symmetries and the Classification of Topological
Crystalline Phases, 10.1103/PhysRevX.8.011040journaljournalPhysical Review X volume8, pages011040 (year2018)NoStop
[Song et al.(2017)Song,
Huang, Fu, and Hermele]PhysRevX.7.011020
authorauthorHao Song, authorSheng-Jie Huang,
authorLiang Fu, and authorMichael Hermele, titletitleTopological Phases Protected by Point
Group Symmetry, 10.1103/PhysRevX.7.011020journaljournalPhys. Rev. X volume7, pages011020 (year2017)NoStop
[Huang et al.(2017)Huang,
Song, Huang, and Hermele]Huang_2017
authorauthorSheng-JieHuang, authorHao Song, authorYi-Ping Huang, and authorMichael Hermele,titletitleBuilding crystalline
topological phases from lower-dimensional states,
10.1103/PhysRevB.96.205106journaljournalPhysical Review B volume96, pages205106 (year2017)NoStop
[Zhang et al.(2020)Zhang,
Wang, Yang, Qi, andGu]PhysRevB.101.100501
authorauthorJian-HaoZhang, authorQing-RuiWang, authorShuo Yang, authorYang Qi, andauthorZheng-Cheng Gu,titletitleConstruction and
classification of point-group symmetry-protected topological phases in
two-dimensional interacting fermionic systems,
10.1103/PhysRevB.101.100501journaljournalPhys. Rev. B volume101, pages100501 (year2020)NoStop
[Zhang et al.(2022)Zhang,
Yang, Qi, and Gu]PhysRevResearch.4.033081
authorauthorJian-HaoZhang, authorShuo Yang, authorYang Qi, andauthorZheng-Cheng Gu,titletitleReal-space construction of
crystalline topological superconductors and insulators in 2d interacting
fermionic systems, 10.1103/PhysRevResearch.4.033081journaljournalPhys. Rev. Res. volume4, pages033081 (year2022)NoStop
[Zhang(2022)]PhysRevB.106.L020503
authorauthorJian-HaoZhang, titletitleStrongly correlated crystalline higher-order topological phases in
two-dimensional systems: A coupled-wire study,
10.1103/PhysRevB.106.L020503journaljournalPhys. Rev. B volume106, pagesL020503 (year2022)NoStop
[Zhang et al.(2022)Zhang, Zhang, Gu, Zhang, and Yang]Zhang:2022zbp
authorauthorHao-Ran Zhang, authorJian-Hao Zhang, authorZheng-Cheng Gu, authorRui-Xing Zhang, and authorShuo Yang, titletitleIntrinsically
Interacting Higher-Order Topological Superconductors, @noop journaljournalarXiv:2212.13013 (year2022), http://arxiv.org/abs/2212.13013arXiv:2212.13013
[cond-mat.str-el]NoStop
[Zhang et al.(2022a)Zhang, Qi, andGu]3DcSPT
authorauthorJian-HaoZhang, authorYang Qi, and authorZheng-Cheng Gu, titletitleConstruction and
classification of crystalline topological superconductor and insulators in
three-dimensional interacting fermion systems, @noop (year2022a), http://arxiv.org/abs/2204.13558arXiv:2204.13558 [cond-mat.str-el]NoStop
[Fulga et al.(2014)Fulga,
van Heck, Edge, and Akhmerov]Fulga_2014
authorauthorI. C. Fulga, authorB. van Heck,
authorJ. M. Edge, andauthorA. R. Akhmerov,titletitleStatistical topological
insulators, 10.1103/PhysRevB.89.155424journaljournalPhysical Review B volume89, pages155424 (year2014)NoStop
[Milsted et al.(2015)Milsted, Seabra, Fulga, Beenakker, and Cobanera]Milsted_2015
authorauthorA. Milsted, authorL. Seabra,
authorI. C. Fulga, authorC. W. J. Beenakker, andauthorE. Cobanera, titletitleStatistical translation invariance
protects a topological insulator from interactions,
10.1103/PhysRevB.92.085139journaljournalPhysical Review B volume92, pages085139 (year2015)NoStop
[Kimchi et al.(2018)Kimchi,
Nahum, and Senthil]Kimchi_2018
authorauthorItamar Kimchi, authorAdam Nahum, and authorT. Senthil,titletitleValence Bonds in Random
Quantum Magnets: Theory and Application to YbMgGa_4,
10.1103/PhysRevX.8.031028journaljournalPhysical Review X volume8, pages031028 (year2018)NoStop
[de Groot et al.(2022)de Groot, Turzillo, and Schuch]de_Groot_2022
authorauthorCarolinede Groot, authorAlexTurzillo, and authorNorbertSchuch, titletitleSymmetry Protected Topological Order in Open Quantum Systems, 10.22331/q-2022-11-10-856journaljournalQuantum volume6, pages856 (year2022)NoStop
[Ma and Wang(2022)]MaWangASPT
authorauthorRuochen Ma and authorChong Wang,titletitleAverage Symmetry-Protected
Topological Phases, @noop (year2022),http://arxiv.org/abs/2209.02723arXiv:2209.02723
[cond-mat.str-el]NoStop
[Lee et al.(2022b)Lee, You, and Xu]JYLee_2022
authorauthorJong YeonLee, authorYi-ZhuangYou, and authorCenkeXu, titletitleSymmetry protected topological phases under decoherence, @noop (year2022b), http://arxiv.org/abs/2210.16323arXiv:2210.16323 [cond-mat.str-el]NoStop
[Zhang et al.(2022b)Zhang, Qi, andBi]Zhang_2022
authorauthorJian-HaoZhang, authorYang Qi, and authorZhen Bi,titletitleStrange Correlation
Function for Average Symmetry-Protected Topological Phases, @noop
(year2022b), http://arxiv.org/abs/2210.17485arXiv:2210.17485 [cond-mat.str-el]NoStop
[Ma et al.(2023)Ma,
Zhang, Bi, Cheng, andWang]average_2023
authorauthorRuochen Ma, authorJian-Hao Zhang,
authorZhen Bi, authorMeng Cheng, and authorChong Wang, titletitleTopological Phases with Average Symmetries: the
Decohered, the Disordered, and the Intrinsic, @noop (year2023), http://arxiv.org/abs/2305.16399arXiv:2305.16399 [cond-mat.str-el]NoStop
[Wang and Gu(2018)]Wang_2018
authorauthorQing-RuiWang and authorZheng-ChengGu, titletitleTowards a Complete Classification of Symmetry-Protected Topological Phases
for Interacting Fermions in Three Dimensions and a General Group
Supercohomology Theory, 10.1103/physrevx.8.011055journaljournalPhysical Review X volume8, pages011055 (year2018)NoStop
[Wang and Gu(2020)]Wang_2020
authorauthorQing-RuiWang and authorZheng-ChengGu, titletitleConstruction and Classification of Symmetry-Protected Topological Phases in
Interacting Fermion Systems, 10.1103/physrevx.10.031055journaljournalPhysical Review X volume10, pages031055 (year2020)NoStop
[Kane et al.(2002)Kane,
Mukhopadhyay, and Lubensky]Kane_2002
authorauthorC. L. Kane, authorRanjan Mukhopadhyay, and authorT. C.Lubensky, titletitleFractional Quantum Hall Effect in an Array of Quantum Wires, 10.1103/PhysRevLett.88.036401journaljournalPhysical Review Letters volume88,pages036401 (year2002)NoStop
[Teo and Kane(2014)]Teo_2014
authorauthorJeffrey C. Y.Teo and authorC. L.Kane, titletitleFrom
Luttinger liquid to non-Abelian quantum Hall states,
10.1103/PhysRevB.89.085101journaljournalPhysical Review B volume89, pages085101 (year2014)NoStop
[Haldane(1995)]PhysRevLett.74.2090
authorauthorF. D. M.Haldane, titletitleStability of chiral luttinger liquids and abelian quantum hall states, 10.1103/PhysRevLett.74.2090journaljournalPhys. Rev. Lett. volume74, pages2090–2093 (year1995)NoStop
[Ning et al.(2021)Ning,
Wang, Wang, and Gu]PhysRevB.104.075151
authorauthorShang-QiangNing, authorChenjieWang, authorQing-RuiWang, and authorZheng-ChengGu, titletitleEdge
theories of two-dimensional fermionic symmetry protected topological phases
protected by unitary Abelian symmetries,
10.1103/PhysRevB.104.075151journaljournalPhys. Rev. B volume104, pages075151 (year2021)NoStop
[Wang et al.(2021)Wang,
Ning, and Cheng]Wang_2021
authorauthorQing-RuiWang, authorShang-QiangNing, and authorMengCheng, titletitleDomain Wall Decorations, Anomalies and Spectral Sequences in Bosonic
Topological Phases, @noop (year2021),http://arxiv.org/abs/2104.13233arXiv:2104.13233
[cond-mat.str-el]NoStop
[Lu and Vishwanath(2012)]PhysRevB.86.125119
authorauthorYuan-MingLu and authorAshvinVishwanath, titletitleTheory and classification of interacting integer topological phases in two
dimensions: A chern-simons approach,
10.1103/PhysRevB.86.125119journaljournalPhys.
Rev. B volume86, pages125119
(year2012)NoStop
[Heinrich and Levin(2018)]Heinrich_2018
authorauthorChris Heinrich and authorMichael Levin, titletitleCriteria for
protected edge modes with ℤ_2 symmetry,
10.1103/PhysRevB.98.035101journaljournalPhysical Review B volume98, pages035101 (year2018)NoStop
[Levin and Gu(2012)]Levin-Gu
authorauthorMichael Levin and authorZheng-Cheng Gu, titletitleBraiding statistics
approach to symmetry-protected topological phases,
10.1103/PhysRevB.86.115109journaljournalPhys.
Rev. B volume86, pages115109
(year2012)NoStop
[Sieberer et al.(2016)Sieberer, Buchhold, and Diehl]Sieberer_2016
authorauthorL. M. Sieberer, authorM. Buchhold,
and authorS. Diehl,titletitleKeldysh field theory for
driven open quantum systems,
10.1088/0034-4885/79/9/096001journaljournalReports on Progress in Physics volume79,pages096001 (year2016)NoStop
[Garratt et al.(2022)Garratt, Weinstein, and Altman]Ehud_2022
authorauthorSamuel J.Garratt, authorZackWeinstein, and authorEhudAltman, titletitleMeasurements conspire nonlocally to restructure critical quantum states,@noop (year2022), http://arxiv.org/abs/2207.09476arXiv:2207.09476 [cond-mat.stat-mech]NoStop
[Giamarchi(2003)]Giamarchi_2003
authorauthorThierry Giamarchi, 10.1093/acprof:oso/9780198525004.001.0001titleQuantum Physics in One Dimension (publisherOxford University Press, year2003)NoStop |
http://arxiv.org/abs/2307.07338v3 | 20230714133701 | Collisions of red giants in galactic nuclei | [
"Taeho Ryu",
"Pau Amaro Seoane",
"Andrew M. Taylor",
"Sebastian T. Ohlmann"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.GA",
"astro-ph.SR"
] |
In stellar-dense environments, stars can collide with each other. For collisions close to a supermassive black hole (SMBH), the collisional kinetic energy can be so large that the colliding stars can be completely destroyed, potentially releasing an amount of energy comparable to that of a supernova. Such violent collisions, which we call BH-driven disruptive collisions (BDCs), have been examined mostly analytically, with the non-linear hydrodynamical effects being left largely unstudied. Using the moving-mesh hydrodynamics code AREPO, we investigate high-velocity (>10^3 km/s) collisions between 1M_⊙ giants with varying radii, impact parameters, and initial approaching velocities, and estimate their observables. Very strong shocks across the collision surface efficiently convert ≳10% of the initial kinetic energy into radiation energy. The outcome is a gas cloud expanding supersonically, homologously, and quasi-spherically, generating a flare with a peak luminosity ≃ 10^41-10^44 erg/s in the extreme UV band (≃ 10 eV). The luminosity decreases approximately following a power-law t^-0.7 initially, then t^-0.4 after t≃10 days at which point it would be bright in the optical band (≲ 1eV). Subsequent, and possibly even brighter, emission would be generated due to the accretion of the gas cloud onto the nearby SMBH, possibly lasting up to multi-year timescales. This inevitable BH-collision product interaction can contribute to the growth of BHs at all mass scales, in particular, seed BHs at high redshifts. Furthermore, the proximity of the events to the central BH makes them a potential tool for probing the existence of dormant BHs, even very massive ones which cannot be probed by tidal disruption events.
§ INTRODUCTION
Dynamical interactions between stars in stellar-dense environments, e.g., globular clusters and galactic centers, play a crucial role in driving the evolution of the host and determining its thermodynamic state <cit.>. If the stellar density is sufficiently high, stars can collide with relative velocities comparable to the velocity dispersion of the host. In globular clusters, up to 40% of the main-sequence stars in the core would undergo a collision during the lifetime of the cluster <cit.>. For clusters with very high number densities (≳ 10^7 pc^-3), a star may suffer multiple such collisions <cit.>.
Galactic centers are extreme environments where stars are densely packed (e.g., 10^6-10^7 pc^-3 for nuclear clusters, and references therein) around a supermassive black hole (SMBH). Because the relative velocity between stars near the BH is roughly the Keplerian speed ∝ r^-0.5, stars near the BH would collide at very high speeds (e.g., v_ rel≳ 2000 km/s within ≃ 0.1 pc around a 10^7 M_⊙ BH). If the kinetic energy of the collision (≳ 10^50 erg for a collision between two stars with mass M_⋆ = 1 M_⊙ and v_ rel≳ 2000 km/s) is greater than the binding energy of the stars (10^48-10^49 erg for M_⋆ = 1 M_⊙), the stars are completely destroyed, leaving behind an expanding gas cloud. If even a small fraction of the collisional kinetic energy is converted into radiation, such a high-velocity collision can generate a bright electromagnetic transient from the Galactic nucleus region.
The total rates of such events between main-sequence stars have been estimated to be 10^-4-10^-5 yr^-1 galaxy^-1 <cit.> if the core is fully relaxed to the Bahcall-Wolf power-law ∝ r^-7/4[While the Bahcall-Wolf solution is a mathematically correct solution when all stars have the same mass, in the realistic situation where the stellar mass distribution is inhomogeneous, the slope can be steeper <cit.>.]. The rate for collisions between giants could be higher due to their larger cross-sections <cit.>.
However, if collisions continuously deplete the inner part of the stellar-density cusp, the rate would become smaller, e.g., ≃ 10^-5-10^-7 yr^-1 galaxy^-1 for main-sequence stars, depending on the assumption of the stellar influx into the center <cit.>. Since these powerful collisions essentially destroy stars in galactic center environments, these events can affect the frequency of other types of nuclear transients. For example, <cit.> suggests that high-velocity collisions can almost completely suppress extreme mass-ratio inspirals.
High-velocity collisions between main-sequence stars <cit.> have been studied using numerical simulations, focusing on the mass ejection and the impact of such collisions on the thermodynamic state of the host, rather than on their observational signatures. Recently, <cit.> analytically investigated the observables of high-velocity collisions between stars of various types in galactic nuclei. They found that the peak luminosity of high-velocity collisions can be as high as 10^44η_ rad erg/s. Here, η_ rad is a key factor that measures how efficiently the initial kinetic energy is converted into radiation energy. If η_ rad is of order unity, the peak luminosity can be comparable to that of other types of nuclear transients, such as tidal disruption events. However, η_ rad was left as a free parameter in their work because evaluating it involves non-linear hydrodynamical effects such as shocks, which cannot be treated analytically.
In this paper, we investigate the hydrodynamics of high-velocity, or black hole-driven disruptive, collisions (BDCs) between 1 M_⊙ giants and numerically estimate the radiation conversion efficiency and their observables, using the moving-mesh hydrodynamics code AREPO <cit.>. In the simulations, we consider collisions between two identical 1 M_⊙ giants with four different radii (R_⋆ = 10, 20, 50, and 100 R_⊙), four impact parameters (b = 0.04, 0.2, 0.4, and 0.8), and three initial approach velocities (v_ rel=10^4 km/s, 5×10^3 km/s, and 2.5×10^3 km/s). The largest approach speed corresponds roughly to the largest relative velocity for stellar collisions near the BH, i.e., the Keplerian velocity at the smallest distance from the BH where at least two stars are expected for a typical stellar density around a massive BH assuming the Bahcall-Wolf power law: r ≃ 10^-5 pc for a 10^5 M_⊙ BH, ≃ 10^-4 pc for a 10^6 M_⊙ BH, and ≃ 10^-3 pc for a 10^7 M_⊙ BH. Because collisions with lower relative velocities are expected to create fainter transients, our simulations with the largest v_ rel provide an upper limit for the luminosity and total radiated energy of these events.
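As a quick consistency check on these numbers, the short Python sketch below (purely illustrative and not part of the simulation pipeline; the BH masses and radii are the representative values quoted above) evaluates the local Keplerian speed v_K=√(G M_∙/r) and √2 v_K, the latter being roughly the head-on relative speed of two locally circular orbits:

import numpy as np

G, Msun, pc = 6.674e-8, 1.989e33, 3.086e18   # cgs

# Representative (M_BH [Msun], r [pc]) pairs quoted above
for m_bh, r_pc in [(1e5, 1e-5), (1e6, 1e-4), (1e7, 1e-3)]:
    v_kep = np.sqrt(G * m_bh * Msun / (r_pc * pc))          # local Keplerian speed
    print(f"M_BH = {m_bh:.0e} Msun, r = {r_pc:.0e} pc : "
          f"v_K ~ {v_kep/1e5:.0f} km/s, sqrt(2) v_K ~ {np.sqrt(2)*v_kep/1e5:.0f} km/s")

For all three pairs this gives v_K of several thousand km/s, i.e. relative speeds approaching 10^4 km/s, consistent with the largest approach velocity adopted above.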
This paper is organized as follows. We describe our methods in <ref>, including the code description ( <ref>), stellar models ( <ref>), and initial conditions ( <ref>). Then we present our results in <ref> and discuss astrophysical implications for the collisions in <ref>. Finally, we summarize and conclude in <ref>.
§ METHODS
§.§ Code
We perform a suite of 3D hydrodynamic simulations of BDCs between red giants using the massively parallel gravity and magnetohydrodynamics moving-mesh code AREPO <cit.>. The code inherits the advantages of the two most widely used hydrodynamical schemes, the Lagrangian smoothed particle hydrodynamics method and the Eulerian finite-volume method, allowing for an accurate treatment of supersonic flows and shock capturing without introducing an artificial viscosity, as well as low advection errors. We use an ideal equation of state that takes into account radiation pressure assuming local thermodynamic equilibrium,
P = ρ k_ BT/μ m_ p + 4σ/3cT^4,
where P is the total pressure, ρ the density, k_ B the Boltzmann constant, T the temperature, μ=0.62 the mean molecular weight, m_ p the proton mass, and σ the Stefan-Boltzmann constant.
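As a minimal illustration of Equation <ref> (a sketch only; the input density and temperature in the example call are arbitrary values, not simulation data), the total pressure can be evaluated as:

def total_pressure(rho, T, mu=0.62):
    """Gas plus radiation pressure (cgs), assuming local thermodynamic equilibrium."""
    k_B   = 1.381e-16   # erg/K
    m_p   = 1.673e-24   # g
    sigma = 5.670e-5    # Stefan-Boltzmann constant [erg cm^-2 s^-1 K^-4]
    c     = 2.998e10    # cm/s
    P_gas = rho * k_B * T / (mu * m_p)
    P_rad = 4.0 * sigma * T**4 / (3.0 * c)   # = a T^4 / 3
    return P_gas + P_rad

# Example: shocked-gas-like conditions (arbitrary illustrative values)
print(total_pressure(1e-6, 1e7))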
§.§ Stellar model
We adopt the internal structure of giants evolved using the 1D stellar evolution code <cit.> to model giants in 3D. The star has an initial mass of 1 M_⊙ and a metallicity of Z=0.02. We treat the mixing processes and winds following <cit.>. More specifically, we model convection using the mixing length theory with a mixing length parameter of 1.81. We adopt the <cit.> criterion to determine the boundaries of the convective regions and an exponential overshoot prescription <cit.> with parameters f = 0.016 and f_0 = 0.008 at the top of the core and f = 0.0174, f_0 = 0.0087 at the bottom of the hydrogen-burning shell. Semiconvection is treated following <cit.> with an efficiency factor of 0.1. We allow the star on the red giant branch to lose mass via winds following the prescription from <cit.> with a scaling factor of 0.1.
Figure <ref> shows the evolution of the 1 M_⊙ star in a Hertzsprung–Russell diagram until it reaches the tip of the red-giant branch. We take the giant at four different evolutionary stages where its radius is ≃ 10, 20, 50, and 100 R_⊙ (indicated by the star symbols in the figure).
We construct 3D giants from the 1D giant models using the method developed in <cit.> with 10^6 cells. Modeling the entire giant with gas cells is computationally expensive given the very steep density gradients. Instead, we model the inner part of the star with a point particle, effectively representing the core, and place gas cells on top of it such that the internal structure above the core matches the 1D model while the entire star stays in hydrostatic equilibrium. The point particle interacts with the gas only gravitationally: it pulls on the envelope, which is balanced by the pressure gradient of the gas when the star is in isolation. We choose the size of the region modelled by the point particle (the “point particle radius”) to be 5% of the stellar radius. The point particle radius is in fact greater than the size of the core (R≃ 0.02). This choice is justified by the fact that the mass of the core is effectively the same as the enclosed mass within ≃0.05 (vertical dotted lines), as illustrated in Figure <ref>. This means the total binding energy inside our 3D giants is essentially the same as what we would have if the point particle radius were exactly the core radius. With this choice of the point particle radius, we reduce the computational costs significantly while losing only a small fraction of the total energy budget inside the star.
We then relax the 3D stars fully in isolation, which usually takes 5-10 stellar dynamical times (≃√(R_⋆^3/G M_⋆)).
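For reference, the sketch below evaluates the stellar dynamical time for the four giant models (assuming M_⋆ = 1 M_⊙; the 5-10 multiplier is simply the empirical range quoted above):

import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10   # cgs

for R in (10, 20, 50, 100):                     # stellar radii in Rsun, M_star = 1 Msun
    t_dyn = np.sqrt((R * Rsun)**3 / (G * Msun)) # stellar dynamical time [s]
    print(f"R_star = {R:3d} Rsun : t_dyn ~ {t_dyn/86400:5.2f} days "
          f"(relaxation ~ {5*t_dyn/86400:.1f}-{10*t_dyn/86400:.1f} days)")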
Figure <ref> shows the radial density profiles of the fully relaxed stars above the point particle (top panel) and their relative errors (bottom panel) with respect to the models. The relative errors in the density in the inner parts of the stars, where most of the mass is concentrated, are less than a few %. Although the errors at the surface are relatively large, the deviation of such a small amount of mass at the surface, corresponding to the plateau at the end of each line in Figure <ref>, should not affect our results.
We performed resolution tests for nearly head-on collisions between giants with R_⋆=100 R_⊙ at different resolutions. The choice of the collision parameters is motivated by the fact that the impact of the shock in such a collision is the strongest (see Figure <ref>), which requires the highest resolution. We first constructed giants with N=2.5×10^5, 5×10^5, 10^6, 2×10^6, and 4×10^6 cells and performed the collision experiments. We find that the results have already converged very well when N≥10^6: the conversion factor η_ rad, defined in Equation <ref>, differs by less than 1%. In fact, the difference in η_ rad between N≤5×10^5 and N=10^6 is already reasonably small, ≲20% for N=2.5× 10^5 and ≲ 10% for N=5× 10^5 relative to the cases with N≥ 10^6. Furthermore, we confirmed that the total energy is conserved to within ≲ 1% until the end of the simulations.
§.§ Initial conditions
We place two identical stars, initially separated by 10 R_⋆, on a hyperbolic orbit with a relative velocity at infinity v_ rel. It therefore takes 10 R_⋆/v_ rel≃ 0.1-1 days, depending on R_⋆ and v_ rel, until the two stars collide. We note that time in this paper is measured since the collision: accordingly, the initial time of the simulations is t≃ -(0.1-1) days. The stars are embedded in a low-density background medium with a density of 10^-18 g cm^-3 and a temperature of 10^4 K. The background density is comparable to the density of the interstellar medium at the Galactic center, which ranges between 10^5 and 10^6 particles per cm^3 <cit.> at the Galactic center distances that dominate the collision rate <cit.>. We discuss the impact of the background density and temperature on the properties of the collision products in <ref>. Our fiducial model is the near-head-on collision between two 10 R_⊙ giants initially approaching each other at v_ rel=10^4 km/s with an impact parameter b=0.04. Here, b=0.04 is the smallest possible impact parameter given the softening length of the point particle: in other words, the gravity of the point particles becomes inaccurate at the closest approach distance for b<0.04. For this giant, we additionally consider off-axis collisions with larger impact parameters, b=0.2, 0.4, and 0.8, and two additional relative velocities, v_ rel = 2500 and 5000 km/s, to study the dependence on the impact parameter and the collision velocity, respectively. For larger giants, we only consider near head-on collisions with v_ rel=10^4 km/s. The initial parameters of the models are summarized in Table <ref>.
§ RESULT
§.§ Overview
We provide an overview of the evolution of the collision product using our fiducial model, i.e., the near head-on collision between two 10 R_⊙ giants. We present in Figure <ref> (from top to bottom) the density ρ, the temperature T, the Mach number ℳ, and the speed in the mid-plane at four different times for this model.
Initially, the two stars approach at v_ rel≃ 10^4 km/s (1^ st column). From their first contact, the envelopes are continuously compressed due to the converging motion. Along the contact surface (the pronounced narrow feature across the center in the 2^ nd column, dubbed the “shock surface”), pressure gradients are built up and the temperature is raised above 10^7 K due to adiabatic compression. As the later incoming gas collides supersonically with this pressure wall, shocks are created. Some of the very hot gas in the shock surface escapes radially, perpendicular to the collision axis (or along the shock surface), with an opening angle of ≃ 30^∘ and speeds of a few thousand km/s, which is not particularly high compared to the rest of the gas. At the strongest compression, a significant fraction of the kinetic energy is converted into heat energy (≳ 30%), which is already a few orders of magnitude greater than the total binding energy of the stars. When the pressure gradient exceeds the ram pressure, the compressed gas bounces back and expands quasi-spherically and homologously at supersonic speeds (see the 3^ rd and 4^ th column panels in Figure <ref>). On top of the expanding motion, the converted heat energy continuously drives the outer part of the gas cloud to expand via PdV work, meaning that some of the heat energy is converted back into kinetic energy. At the same time, the outer edge of the cloud collides supersonically with the background medium. This has two effects. First, mass piles up at the boundary between the gas cloud and the background medium, reducing the kinetic energy of the expansion front. Second, shocks are created, which dissipate the kinetic energy of the expansion front into heat energy. As a result of both effects, the expansion front slows down.
§.§ Evolution of expanding cloud - parameter dependence
§.§.§ Fiducial case
To describe the evolution of the expanding gas more quantitatively, we show in Figure <ref> the spherically averaged density ρ and (mass-weighted) temperature T, the expansion speed v^ r, and the area-weighted average of the optical depth τ over the solid angle for our fiducial model as a function of distance from the collision point at five logarithmically sampled times between 1 and 30 days after the collision. The density ρ (top-left) and the temperature T (top-right) of the inner regions of the expanding gas cloud are nearly constant. As the cloud expands adiabatically, the overall levels of ρ and T drop while maintaining their slopes: ρ≃ 10^-8 g cm^-3 at t≃ 1 day to 10^-12 g cm^-3 at t≃ 30 days, and T≃ 2× 10^5 K at t=1 day to 5× 10^3 K at t≃ 30 days, at which point the cloud is cooler than the background medium. Outside the flat region, ρ and T decay towards the outer edge with different steepnesses: the density drops following a power law ∝ r^-λ with λ≃ 12-13 upon collision, gradually decreasing to λ≃ 8 at t≃ 30 days, whereas the temperature decays roughly as ∝ r^-1 at 1≲ t ≲ 30 days. The decaying slopes of ρ and T depend on R_⋆, b, and v_ rel, but the dependence of the slope of T is generally stronger. dlnρ/dln r is almost the same, independent of R_⋆, whereas -dln T/dln r tends to be larger for larger R_⋆ (e.g., λ≃ 2-3 for R_⋆=100 R_⊙). dln T/dln r is steeper for larger b (e.g., λ≃ 2-3 for b=0.8), while dlnρ/dln r is only slightly less steep for larger b (e.g., λ≃ 12 for b=0.8). The dependence of the slopes on v_ rel is relatively weak: λ for ρ is almost the same for 2500 km/s ≤ v_ rel≤ 10^4 km/s, and λ for T is slightly larger for smaller v_ rel (e.g., λ≃ 1-1.5 for v_ rel=2500 km/s).
As shown in the bottom-left panel of Figure <ref>, the cloud expands homologously, i.e., v^ r∝ r, or constant v^ r at a given mass coordinate, which is also found in all other models. Right after the collision, the maximum expansion velocity at the outer edge is greater than the initial relative velocity by a factor of ≃ 5 and stays constant. The period of time with a constant peak v^ r is very brief for this particular model (≲ 0.1 days). However, the constant maximum v^ r phase is longer for collisions with larger R_⋆, as illustrated in the bottom-right panel of Figure <ref>.
The gas cloud is initially optically thick. The optical depth to the center is τ≳ 10^5 at t≃ 1 day, as demonstrated in the bottom-right panel of Figure <ref>. As it expands and cools, τ decreases following a power-law of t^-7/3 (see the bottom-right panel of Figure <ref>), indicating that the entire cloud will become optically thin within 7 - 8 months, consistent with the analytic estimate by <cit.>. The nearly flat τ inside the cloud indicates that the transition from optically thick to completely optically thin may be prompt.
§.§.§ Comparison between models
To further demonstrate the dependence on the stellar radius R_⋆, the impact parameter b, and the initial relative velocity v_ rel, we compare in Figure <ref> the evolution of the same four quantities shown in Figure <ref> between the different models. For a proper comparison, we estimate ρ as the average density within the distance enclosing 75% of the gas mass[Note that the radius enclosing 75% of the cloud mass corresponds to the radius inside which ρ and T are constant, coinciding with the distance of the cores from the collision point.] and T as the mass-weighted average of T within the same volume. As shown in the top panels, ρ and T decrease over time, following power-laws of t^-3 and t^-1, respectively, almost independently of R_⋆ and b, except for T with v_ rel=2.5× 10^3 km/s. The t^-3 power-law for ρ is expected from an homologous expansion: ρ∝ (v^ r t)^-3∝ t^-3. As the t^-1 scaling relation for T suggests, the total (radiation + gas) internal energy at a given mass coordinate decreases like t^-1[Total specific energy = 4σ T^4/[3cρ] + k_ BT/[μ m_ p]∝ t^-1 because T∝ t^-1 and ρ∝ t^-3.]. The significant deviation from the t^-1 power-law for v_ rel=2500 km/s indicates that there is continuous energy exchange between gas at different mass shells. Unlike the other cases where the radiation energy is dominant, in this case the gas internal energy is comparable to the radiation energy and the total internal energy drops like ∝ t^-4/3, resulting in a non-power-law decay curve for T. Although each of the two quantities, ρ and T, tends to follow a single power-law, the degree to which their magnitudes depend on R_⋆, b, and v_ rel is different. ρ has a very weak dependence on b and R_⋆. T is insensitive to b and depends weakly on R_⋆: it is only a factor of ≃1.5 greater for R_⋆=100 R_⊙ than for R_⋆=10 R_⊙.
v^ r_ peak stays constant upon collision at (3-6)× v_ rel. The constant-v^ r_ peak phase lasts longer for the cases involving stronger shocks (e.g., larger R_⋆ for given b and v_ rel). Eventually, v^ r_ peak decreases over time because of the interactions with the background medium, following a power-law of t^-1/3 for all models. In particular, the peak expansion speeds for varying R_⋆ tend to asymptote to a single value at later times. As b increases and v_ rel decreases, v^ r_ peak is smaller at a given time, but the difference is at most a factor of 3 for the collision parameters considered.
As explained for our fiducial model above, the optical depth is initially high at collision, τ> O(10^6). The optical depth for most cases gradually decreases as the gas cloud expands, following a power-law t^-7/3, which is expected from the scaling relations of ρ and v^ r_ peak: τ∝ρ R_ peak∝ t^-3 t^2/3∝ t^-7/3 where R_ peak is the location of the peak expansion speed ≃ v^ r_ peakt∝ t^2/3. The deviation from the t^-7/3 power-law relation becomes more significant as the collisions happen at lower v_ rel and higher b.
§.§.§ Fitting formulae
Combining all the scaling relations, we find that the average density ρ(t), the mass-weighted average temperature T(t), the peak expansion velocity v^ r_ peak(t), the size of the outer edge R_ peak(t), and the radial expansion velocity v^ r(r,t) at t>5 days can be well described by the following analytic expressions,
ρ(t) = 6×10^-10 g/cm^3(t/1 day)^-3(v_ rel/10^4 km/s)^-3,
T(t) =1.5×10^5 K (t/1 day)^-1tan^-1(√(R_⋆/10 R_⊙))
for v_ rel≥5000 km/s and b≲ 0.4 ,
τ(t) = 2.5×10^5(t/1 day)^-7/3,
for v_ rel>5000 km/s and b≲ 0.4 ,
v^ r_ peak(t) = 50000 km/s(t/1 day)^-1/3(v_ rel/10^4 km/s)^0.7([b/R_⋆ + 5]/5)^-4,
R_ peak(t) =6.5×10^14 cm (t/1 day)^2/3(v_ rel/10^4 km/s)^0.7([b/R_⋆ + 5]/5)^-4,
v^ r(r,t) =
v^ r_ peak(r/R_ peak) for r≤ R_ peak,
0 for r> R_ peak,
where the expression for R_ peak is found by analytically integrating v^ r_ peak(t) over time. Note that ρ decays faster than expected from the expression 3M_ gas/(4 π R_ peak^3)∝ t^-2 because ρ follows the homologous relation, whereas the peak expansion speed slows down, so the outer edge expands more slowly than expected for a purely homologous expansion.
Note that we do not include a term describing the dependence on R_⋆ in most of the expressions above because of their very weak R_⋆-dependence. On the other hand, the v_ rel-dependence is omitted in Equation <ref> for T because the number of models with varying v_ rel is too small for a reliable fit. Instead, we have specified the range of v_ rel where the equation is valid.
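The fitting formulae are straightforward to evaluate; the sketch below implements Equations <ref>-<ref> as written (assuming b is expressed in units of R_⋆ and restricting T and τ to the validity ranges stated above; the function and argument names are ours) and also solves τ(t)=1 for the time at which the cloud becomes optically thin:

import numpy as np

def cloud_fits(t_day, v_rel_kms=1e4, R_star_Rsun=10.0, b_over_R=0.04):
    """Approximate fitting formulae for the expanding cloud at t > 5 days."""
    v4   = v_rel_kms / 1e4
    beta = (b_over_R + 5.0) / 5.0
    rho    = 6e-10 * t_day**-3 * v4**-3                                   # g cm^-3
    T      = 1.5e5 * t_day**-1 * np.arctan(np.sqrt(R_star_Rsun / 10.0))   # K   (v_rel >= 5000 km/s, b <~ 0.4)
    tau    = 2.5e5 * t_day**(-7.0 / 3.0)                                  # optical depth (same validity range)
    v_peak = 5e4   * t_day**(-1.0 / 3.0) * v4**0.7 * beta**-4             # km/s
    R_peak = 6.5e14 * t_day**(2.0 / 3.0) * v4**0.7 * beta**-4             # cm
    return rho, T, tau, v_peak, R_peak

# Fiducial model (10 Rsun giants, b = 0.04, v_rel = 1e4 km/s) at t = 10 days
print(cloud_fits(10.0))

# Time at which tau ~ 1:  t = (2.5e5)**(3/7) days ~ 7 months
print((2.5e5)**(3.0 / 7.0), "days")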
§.§ Stellar core
The cores move almost synchronously with the bulk of the gas. The orbits of the cores are barely affected by the collision: they remain unbound after the collision and move away from each other at a speed almost the same as the incoming speed. Their distances from the collision point in our fiducial model at five different times are marked with circles in Figure <ref>.
The mass bound to the cores is larger for smaller v_ rel and larger b, but it is overall insignificant. For b≤ 0.2 and v_ rel≥ 5000 km/s, the bound mass is less than 6× 10^-6 M_⊙. It is ≃ 2×10^-3 M_⊙ for the model with v_ rel=2500 km/s and for that with b=0.4, and ≃ 3×10^-2 M_⊙ for the model with b=0.8.
§.§ Conversion factor
In this section, we investigate how much heat energy is created in collisions, which is closely related to the amount of energy that can be radiated away and potentially observed. We first define the conversion factor η_ rad as the ratio of the total radiation energy to the initial kinetic energy,
η_ rad(t) =∫ aT(t)^4dV /∫ρ(t=0) v(t=0)^2dV,
where a is the radiation constant and dV is the volume element of each cell. Using η_ rad, one can estimate the total radiation energy as ≃ 0.25η_ rad M_⋆ v_ rel^2 for equal-mass collisions. To distinguish gas that initially belonged to the stars from the background gas, we employ a selection condition using a passive scalar. The passive scalar is an artificial scalar quantity initially assigned to each cell, which then evolves via advection without affecting the evolution of the hydrodynamic quantities. The initial value of the passive scalar is one for the cells in the stars and zero for the background cells. So, depending on the mass exchange (or mixing) between cells, the passive scalar varies between zero (background cells) and one (cells originally in the stars). We perform the integration over cells with a passive scalar ≳ 0.1. The value of η_ rad is largely unaffected by the choice of this threshold, provided that it is greater than 0.
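In terms of simulation output, η_ rad is a simple cell-wise sum; a minimal sketch is given below (the array names are placeholders for data read from two snapshots, not actual AREPO field names, and the denominator follows Equation <ref> as written):

import numpy as np

a_rad = 7.5657e-15   # radiation constant [erg cm^-3 K^-4]

def eta_rad(T, vol, scalar, rho0, v0, vol0, scalar0, threshold=0.1):
    """Ratio of the total radiation energy at time t to the initial kinetic energy term.

    T, vol, scalar          : cell temperature, volume, and passive scalar at time t
    rho0, v0, vol0, scalar0 : cell density, speed, volume, and passive scalar at t = 0
    Only cells whose passive scalar exceeds `threshold` (i.e. mostly stellar
    material rather than background) are included in the sums.
    """
    sel, sel0 = scalar > threshold, scalar0 > threshold
    E_rad = np.sum(a_rad * T[sel]**4 * vol[sel])
    E_kin = np.sum(rho0[sel0] * v0[sel0]**2 * vol0[sel0])
    return E_rad / E_kin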
We show η_ rad for all our models in Figure <ref> before the radiation energy in the optically thin gas becomes dominant. It is generally found that η_ rad dramatically increases at collision to η_ rad≃ 0.1- 0.8, meaning a significant fraction of the initial kinetic energy is converted into heat energy. The maximum conversion factors are summarized in Table <ref>. Then as the cloud expands and cools, η_ rad decreases down to ≲ 10^-2.
We see three clear post-peak trends of η_ rad. First, η_ rad is larger when larger stars collide. Additionally, η_ rad is approximately ∝ R_⋆ at any given time: ≃ (1-2)×10^-3 for R_⋆=10 R_⊙, ≃ (3-4)×10^-3 for R_⋆=20 R_⊙, ≃ 10^-2 for R_⋆=50 R_⊙, and ≃ 2× 10^-2 for R_⋆= 100 R_⊙ at t≃ 3 days. We attribute this positive correlation between η_ rad and R_⋆ to the fact that, for the same relative velocity, larger (cooler) stars collide at higher ℳ, resulting in stronger shocks over a wider contact surface (∝ R_⋆). Second, η_ rad is almost the same when b≲ 0.2, while η_ rad begins to decrease with b when b≳ 0.2. This trend is somewhat expected given that, as b increases, the mass of gas that is shocked at collision decreases. Lastly, η_ rad decreases with decreasing v_ rel because of the lower-ℳ collisions for a given sound speed (i.e., the same star). η_ rad at v_ rel=5000 km/s is almost the same as that at v_ rel=10000 km/s, but η_ rad at v_ rel=2500 km/s is lower by a factor of ≃ 2 than that for our fiducial case. The overall levels of the conversion factors that we obtain are comparable to what <cit.> imposed in order for their analytical model to match the observed object ZTF19acboexm (see their Figure 9).
We can also define the conversion factor for the ram pressure of gas moving at supersonic speeds,
η_ ram(t) =∫ρ(t) v(t)^2 dV /∫ρ(t=0) v(t=0)^2dV,
where the integration in the denominator is carried out over cells for which the passive scalar is >0.1, while that in the numerator is carried out only over cells with supersonic speeds, ℳ≥1. As illustrated in the 3^ rd column panels of Figures <ref> and <ref>, almost all the gas is supersonically expanding. As a result, 1-η_ ram≃η_ rad.
§.§ Observables
We estimate the luminosity L, the blackbody radius R_ BB, and the temperature T_ BB using the radiation energy and the local cooling time t_ cool. We first construct a spherical grid with an extremely small opening polar angle (θ≃10^-10 radians), to avoid the singularity at the poles, radially extending out to near the outer boundary of the domain. The grid in the radial direction is logarithmically divided, i.e., constant Δ r/r where Δ r is the cell size at r, while those in the θ and ϕ directions are linearly divided, i.e., constant Δθ and Δϕ. The numbers of grid cells in r, θ, and ϕ are (800, 600, 600), which we confirmed to give converging estimates for the observables. We then identify the photosphere, at which the optical depth τ≃ 1. τ is integrated along each radial path with the opacity found using an OPAL opacity table for Solar metallicity <cit.>. The photospheric area is,
A_ BB = ∫_0^2π∫_≃ 0^≃π r(τ=1)^2 sinθ dθ dϕ,
which gives the effective size of the emitting region or blackbody radius R_ BB=(A_ BB/4π)^1/2.
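A discretized version of Equation <ref> is given below as a schematic illustration (r_phot is a placeholder for the photospheric radii r(τ=1) sampled on the angular grid; the grid sizes in the example match those quoted above):

import numpy as np

def blackbody_radius(r_phot, theta, phi):
    """A_BB = sum of r(tau=1)^2 sin(theta) dtheta dphi over solid angle; R_BB = sqrt(A_BB/4pi).

    r_phot     : (n_theta, n_phi) array of photospheric radii [cm]
    theta, phi : 1D arrays of cell-centre angles [rad]
    """
    dtheta = theta[1] - theta[0]
    dphi   = phi[1] - phi[0]
    A_BB = np.sum(r_phot**2 * np.sin(theta)[:, None] * dtheta * dphi)
    return np.sqrt(A_BB / (4.0 * np.pi))

# Example: a spherical photosphere at 1e15 cm should return R_BB ~ 1e15 cm
theta = np.linspace(1e-10, np.pi - 1e-10, 600)
phi   = np.linspace(0.0, 2.0 * np.pi, 600, endpoint=False)
print(f"{blackbody_radius(np.full((600, 600), 1e15), theta, phi):.3e} cm")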
We attempt to bracket the range of realistic radiated luminosity from the collision event by employing two different methods, each of which places different weights on the contribution from the gas cloud layers (the inner regions or outer regions near the photosphere) within the identified photosphere. Our estimates should be accurate at an order-of-magnitude level.
However, for more accurate modeling of light curves, we will carry out detailed non-equilibrium radiation transport calculations in future follow-up work dedicated to estimating light curves and spectra.
In both methods, the total luminosity for each radial path is estimated by summing the contributions from the cells with the local cooling time t_ cool shorter than the evolution time t within the photosphere. Here, t_ cool is defined as h_ρτ(1+u_ gas/u_ rad)/c where h_ρ is the density moment scale height inside the photosphere and u_ rad (u_ gas) the radiation (gas thermal) energy. However, the difference between the two methods is the assumption for how most of the radiation energy is radiated away. In one method, we assume that the total radiation energy within the photosphere is radiated away over a time comparable to the cooling time at the base of the cloud. Under this assumption, the inner regions tend to dominate the luminosity. We first integrate the total radiation energy along the radial path and divide it by the cooling time at the base of the cloud t_ cool,max, i.e., the longest cooling time which is no longer than t, or
L_1 = ∫_0^2π∫_≃ 0^≃π[∫_r(t_ cool=t)^r(τ=1) aT^4 r^2sinθ dr] t_ cool, max(θ,ϕ)^-1 dθ dϕ.
In the second, we assume that the radiation energy of each cell is radiated away over the local cooling time. So the total luminosity is estimated,
L_2= ∫_0^2π∫_≃ 0^≃π∫_r(t_ cool=t)^r(τ=1) aT^4 t_ cool(r,θ,ϕ)^-1 r^2sinθ dr dθ dϕ.
In this method, the outer regions near the photosphere dominate the luminosity. As stressed before, the evolution of the hydrodynamic quantities for optically thin gas (i.e., the outer region near the photosphere) in our simulations is intrinsically less accurate than that for optically thick gas. Hence, L_1 should be considered more consistent with our hydrodynamical scheme. We find that the shapes of the L_1 and L_2 lightcurves are very similar. However, L_1 is consistently smaller than L_2 by a factor of ≃10. For this reason, we present L_1 and the resulting blackbody temperature T_ BB,1=(L_1/σ A_ BB)^1/4 in this section, and those from Equation <ref> in Appendix <ref>.
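For reference, the blackbody temperature follows directly from the luminosity and the photospheric area; a minimal sketch (the numbers in the example call are round placeholder values, not simulation output) is:

import numpy as np

sigma_SB = 5.670e-5   # erg cm^-2 s^-1 K^-4

def T_blackbody(L, R_BB):
    """Effective temperature of a sphere of radius R_BB radiating a luminosity L."""
    A_BB = 4.0 * np.pi * R_BB**2
    return (L / (sigma_SB * A_BB))**0.25

# e.g. L ~ 1e43 erg/s emitted from R_BB ~ 1e13 cm corresponds to T_BB ~ 1e5 K
print(f"{T_blackbody(1e43, 1e13):.2e} K")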
Figure <ref> shows L_1 (top), T_ BB,1 (middle), and R_ BB (bottom) as a function of time measured since the collision for all our models. Note that the luminosity and the blackbody temperature differ depending on the assumption for the radiation (Equations <ref> and <ref>), but R_ BB is independent of this assumption. The luminosity increases dramatically to its peak at collision. The peak luminosity is L_1≳ 10^41-10^43 erg/s (L_2≳ 10^42-10^44 erg/s) and is higher for larger R_⋆, smaller b, and higher v_ rel, the same trend as for η_ rad. The temperature at peak is T_ BB, 1≃ 10^5 K. Because T_ BB∝ L^1/4, T_ BB, 2 is greater than T_ BB, 1 by less than a factor of 2. We summarize the peak L and T_ BB at peak for all our models in Table <ref>. Subsequently, both L and T_ BB, independent of the assumption for the diffusion time (so both L_1 and L_2), decrease following a power-law ∝ t^-ξ with ξ slightly differing at early and late times. L at t≲ 5 days reveals a decaying curve with ξ≃ 0.7-0.8, followed by a slower decay with ξ≃ 0.4 at t≳ 5 days. L therefore decreases by a factor of 10 over the first 5 days, while the decay in L over the next 30 days is relatively small, only a factor of a few. The change in ξ for T_ BB is very mild: ξ≃ 0.6 at t≲ 5 days and ≃ 0.5 at t≳ 5 days. T_ BB decreases from ≃ (1-2)×10^5 K at collision to 10^4 K at 5-15 days, and to (4-6)×10^3 K at 30 days. This means the collision will be bright in the extreme UV at collision, shifting to the optical on a time scale of a month. Lastly, R_ BB increases to ≃ 10^15 cm in 30 days, approximately following a power-law growth ∝ t^0.8.
The light curves from our simulations reveal some differences from those analytically predicted by <cit.>. Assuming a constant η comparable to the minimum η_ rad shown in Figure <ref>, their analytic model predicts a peak luminosity consistent with the numerically integrated peak luminosity shown in Figure <ref>. However, the luminosity from their analytic model peaks a few days after the collision and subsequently decays faster. We attribute these discrepancies to the difference in the way the luminosity is calculated. In their analytic model, the luminosity was estimated under the assumption that η does not change over time and that the total radiation energy within the gas cloud is radiated away on a time scale comparable to the longest possible photon cooling time at any given time (e.g., based on the optical depth to the center). In this work, on the other hand, we take into account the time-dependent contributions of the cloud (e.g., the adiabatic loss of energy due to expansion).
The observables estimated in this section are driven by stellar collisions. But given the fact that these collisions occur near a SMBH, the expanding gas cloud and the nearby BH would very likely interact, generating a possibly even brighter flare, which we discuss in <ref>.
§ DISCUSSION
§.§ Interaction of gas cloud with interstellar medium
In this work, we simulated BDCs of giants surrounded by a medium with a constant density of 10^-18 g/cm^3 and a temperature of 10^4 K. As the cloud expands, it collides inelastically with the background medium, which results in a continuous decrease in the kinetic energy of the expansion front. In addition, the collision between the outer edge of the cloud and the background medium can create shocks, converting kinetic energy into heat energy. The net effect is the deceleration of the gas cloud, deviating from an homologous behavior, as is also found in our simulations, where the velocity of the outer edge decreases following t^-1/3. This impact of the surrounding medium would set in earlier if the colliding stars were initially embedded in a denser medium. For example, the rising slope of η_ rad would be less steep for the case with lower-density background gas. Given the supersonic motion of the cloud, how the cloud expands would not be significantly affected by the temperature of the background medium for a given background density. However, the evolution of η_ rad would change depending on the background temperature. In fact, we performed extra simulations with different background temperatures (100 - 5000 K), showing that while the expansion properties of the cloud (e.g., ρ, T, and v_ peak^ r) are almost independent of the background temperature, η_ rad tends to be lower at the local minimum and to increase more slowly afterward for a lower background temperature.
Although the deviation from an homologous expansion was only found near the outer edge for the duration of our simulations, as an order-of-magnitude estimate, the motion of the entire gas cloud would deviate completely from an homologous expansion when the swept-up mass becomes comparable to the mass of the cloud,
t_ non-homologous ≃[3M_ gas/(4πρ_ ISM (v^ r)^3)]^1/3,
≃ 1100 days (M_ gas/2 M_⊙)^1/3(ρ_ ISM/10^-18 g cm^-3)^-1/3(v^ r/10^4 km/s)^-1,
where v^ r is the expansion speed and ρ_ ISM the density of the background medium. At the same time, this also means that the scaling relations for the homologous expansion found in our simulations would apply to the evolution of the homologously expanding part of the collision product, independent of the existence of the background medium.
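As a sanity check on Equation <ref>, the sketch below evaluates the time at which the swept-up interstellar mass equals the cloud mass for the fiducial parameter values quoted in the scaling:

import numpy as np

Msun, day = 1.989e33, 86400.0

def t_non_homologous(M_gas_Msun=2.0, rho_ism=1e-18, v_r_kms=1e4):
    """Time [days] at which (4 pi / 3) * rho_ism * (v_r * t)**3 equals M_gas."""
    M_gas = M_gas_Msun * Msun
    v_r   = v_r_kms * 1e5                                       # cm/s
    t = (3.0 * M_gas / (4.0 * np.pi * rho_ism))**(1.0 / 3.0) / v_r
    return t / day

print(f"~{t_non_homologous():.0f} days")                        # ~1100 days for the fiducial values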
§.§ Interaction of gas cloud with supermassive black hole
In addition to the burst caused by the stellar collision (see <ref>), there would be a subsequent burst due to accretion onto the nearby SMBH. As a result, the overall light curve would consist of a first peak, with L≳ 10^42 erg/s, created by the stellar collision, which decays and is followed by a sharp rise to the Eddington luminosity due to accretion onto the BH, possibly remaining at that level for up to years until the captured gas is accreted onto the BH. We examine the observables from the BH-cloud interaction by considering two cases: 1) a non-decelerating expansion (Case 1, <ref>) and 2) a decelerating expansion (Case 2, <ref>). We then discuss the astrophysical implications for BHs in <ref>.
§.§.§ Case 1. Non-decelerating expansion
We first assume that the entire gas cloud expands homologously and the expansion speed of the outer edge is
v^ r≃ψ v_ rel with ψ≃ 3-6 (see Figure <ref>). The gas cloud starts to interact with the BH when the size of the expanding gas cloud becomes comparable to the distance to the BH, R_ BH, for given v_ rel,
R_ BH≃G M_∙/v_ rel^2 = 10^15 cm(M_∙/10^7 M_⊙)(v_ rel/10^4 km/s)^-2.
The time difference between the first collision-driven burst and the subsequent accretion-driven burst would be set by the time τ_ BH at which the outer edge of the cloud reaches the BH, R_ BH - R_ Sch≃ R_ BH≃ R_ peak, where R_ Sch is the Schwarzschild radius,
τ_ BH≃R_ BH/v^ r≃ 3 days(M_∙/10^7 M_⊙)(v_ rel/10^4 km/s)^-3.
To zeroth order, the part of the cloud that is within the Bondi radius R_ Bondi≃ 2G M_∙/(v^ r)^2 from the BH would be gravitationally captured by the BH and subsequently accreted onto the BH. Assuming Bondi–Hoyle accretion <cit.>, the luminosity L_ Bondi with radiative efficiency ϵ can be estimated,
L_ Bondi≃4πϵ G^2 M_∙^2ρ c^2/(v^ r)^3,
≃3× 10^47 erg /s(ϵ/0.1)(M_∙/10^7 M_⊙)^-1(v_ rel/10^4 km/s)^3,
which is super-Eddington for M_∙ < 3× 10^8 M_⊙ (v_rel/10^4 km s^-1)^1.5. Note that L_ Bondi has no dependence on t given the scaling relations for ρ (∝ t^-3, Equation <ref>) and v^ r(r=R_ BH) (∝ t^-1, Equation <ref>): L_ Bondi∝ρ (v^ r)^-3∝ t^0. Super-Eddington accretion may be possible if the gas is optically thick and the trapping radius R_ tr=(L_ Bondi/L_ Edd)(G M_∙/ϵ c^2) is smaller than the Bondi radius <cit.>. The ratio of the two radii is,
R_ tr/R_ Bondi≃ 300(t/1 day)^-2(v_ rel/10^4 km/s)^-1,
suggesting super-Eddington accretion would be possible at t≳ 20 days(v_ rel/10^4 km s^-1)^-0.5. Here, we caution that the time ratio in Equation <ref> is estimated under the assumption that the global accretion flow is not affected by any accretion feedback, which is highly uncertain. Assuming a black body, the temperature at the Bondi radius if L≃ L_ Edd is,
T_ Bondi(t)≃ 10^5 K (t/3 day)^-1(M_∙/10^7 M_⊙)^3/4(v_ rel/10^4 km/s)^-1/2,
and at the onset of the BH-gas interaction (or t≃τ_ BH),
T_ Bondi(t=τ_ BH)≃ 10^5 K (ψ/5)(M_∙/10^7 M_⊙)^-1/4(v_ rel/10^4 km/s).
If L≃ L_ Bondi,
T_ Bondi ≃ 2.4×10^4 K (t/150 day)^-1(ϵ/0.1)^1/4
×(M_∙/5× 10^8 M_⊙)^1/4(v_ rel/10^4 km/s)^-5/4,
and at t=τ_ BH,
T_ Bondi(t=τ_ BH) ≃ 2.4×10^4 K (ψ/5)(ϵ/0.1)^1/4
×(M_∙/5× 10^8 M_⊙)^-3/4(v_ rel/10^4 km/s)^-7/4.
Because R_ Bondi increases faster than R_ peak,
R_ Bondi/R_ peak∝ t,
as the most optimistic case, the entire gas cloud could be ultimately captured by the BH in a time τ_ capture at which R_ Bondi≃ R_ peak,
τ_ capture≃ 40 days(ψ/5)(M_∙/10^7 M_⊙)(v_ rel/10^4 km/s)^-3.
Then the maximum duration of the Eddington luminosity may be set by,
τ_ acc≲M_ gasϵ c^2/L_ Edd≃ 9 years(ϵ/0.1)(M_ gas/2 M_⊙)(M_∙/10^7 M_⊙)^-1,
where M_ gas is the mass of the gas cloud, i.e., total mass of the two collided stars. Here, we assumed that the entire gas would be accreted onto the BH. However, radiation pressure from super-Eddington accretion would be strong enough to generate outflow. For such a case, only a fraction of the gas cloud would end up accreting and τ_ acc would be shorter than estimated above.
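To make these order-of-magnitude estimates easy to reproduce, the short Python sketch below simply evaluates the scalings quoted in this subsection for fiducial parameters. The function name, default argument values, and dictionary keys are ours; the numerical prefactors are copied from the equations above rather than re-derived.

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # solar mass [g]
DAY = 86400.0       # seconds per day

def case1_scalings(m_bh=1e7, v_rel=1e4, psi=5.0, eps=0.1, m_gas=2.0):
    """m_bh, m_gas in solar masses; v_rel in km/s; psi = v^r / v_rel (3-6)."""
    v_cgs = v_rel * 1e5                                   # km/s -> cm/s
    r_bh = G * m_bh * M_SUN / v_cgs**2                    # BH-cloud interaction radius [cm]
    tau_bh = r_bh / (psi * v_cgs) / DAY                   # onset of BH-gas interaction [days]
    l_bondi = 3e47 * (eps / 0.1) / (m_bh / 1e7) * (v_rel / 1e4)**3     # Bondi luminosity [erg/s]
    l_edd = 1.26e38 * m_bh                                # Eddington luminosity [erg/s]
    tau_capture = 40.0 * (psi / 5.0) * (m_bh / 1e7) * (v_rel / 1e4)**-3  # full capture time [days]
    tau_acc = 9.0 * (eps / 0.1) * (m_gas / 2.0) / (m_bh / 1e7)           # accretion duration [yr]
    return {"R_BH [cm]": r_bh, "tau_BH [d]": tau_bh, "L_Bondi [erg/s]": l_bondi,
            "L_Edd [erg/s]": l_edd, "tau_capture [d]": tau_capture, "tau_acc [yr]": tau_acc}

print(case1_scalings())   # ~1e15 cm, ~3 d, ~3e47 erg/s, ~1e45 erg/s, ~40 d, ~9 yr

For the fiducial choice M_∙=10^7 M_⊙ and v_rel=10^4 km/s this reproduces the numbers quoted in the text, and it makes explicit that L_Bondi exceeds L_Edd for such masses.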
§.§.§ Case 2. decelerating expansion
Now we examine the observables from interactions between the decelerating expanding cloud with v^ r_ peak∝ t^-1/3 and the SMBH, using Equations <ref>–<ref>. For this case, τ_ BH has a different dependence on M_∙ and v_ rel,
τ_ BH≃ 3 days(M_∙/10^7 M_⊙)^3/2(v_ rel/10^4 km/s)^-3(b/R_⋆ + 5/5)^6.
We show in Figure <ref> the range of τ_ BH for three different collision velocities v_ rel as a function of M_∙, assuming a non-decelerating expansion speed (thick diagonal bars, ψ=3-6) and a decelerating expansion speed (solid lines). The interaction onset time would generally be longer if the expansion of the cloud slows down. Depending on M_ BH and v_ rel, the second burst could happen over a wide range of times. For example, if a collision with v_ rel≳ 2500 km/s occurs in the Galactic center <cit.>, the second accretion-driven burst would occur between less than a day and 6-7 months after the collision, depending on the location of the collision relative to the BH. For very massive black holes (M_ BH>10^8 M_⊙), τ_ BH can be more than tens of years.
The Bondi luminosity is still independent of t and has the same M_∙- and v_ rel-dependence as in the non-decelerating expansion case, but it is roughly a factor of 3 greater at given M_ BH and v_ rel,
L_ Bondi≃10^48 erg /s(ϵ/0.1)(M_∙/10^7 M_⊙)^-1(v_ rel/10^4 km/s)^3,
which is further illustrated in Figure <ref>. While the expression for the blackbody temperature at the Bondi radius has the same dependence on M_∙ and v_ rel as Equations <ref> and <ref>, because of the different expression for τ_ BH, T_ Bondi(t=τ_ BH) is written differently,
T_ Bondi(t=τ_ BH)≃
8×10^4 K (M_∙/10^7 M_⊙)^-3/4(v_ rel/10^4 km/s)^2.1(b/R_⋆ + 5/5)^-6,
for L=L_ Edd,
6×10^3 K (ϵ/0.1)^1/4(M_∙/10^7 M_⊙)^-5/4(v_ rel/10^4 km/s)^2.8(b/R_⋆ + 5/5)^-6,
for L=L_ Bondi.
We compare T_ Bondi at the onset of the accretion-driven burst (i.e., T_ Bondi at t=τ_ BH) in Figure <ref> between the non-decelerating expansion case (thick bars) and the decelerating expansion case (lines). For low-mass black holes, T_ Bondi is quite similar, e.g., 10^5 K for M_∙=10^5-10^6 M_⊙. However, because of a steeper decline for the decelerating expansion case (T_ Bondi∝ M_∙^-3/4 - M_∙^-5/4, Equation <ref>) than for the non-decelerating expansion case (T_ Bondi∝ M_∙^-1/4 - M_∙^-3/4, Equations <ref> and <ref>), T_ Bondi for the decelerating expansion case is generally lower for high-mass BHs: for M_∙=10^9 M_⊙, T_ Bondi≃ 10 - 10^3 K for the decelerating expansion case whereas T_ Bondi≃ 10^3-10^4 K for the non-decelerating expansion case.
For the decelerating expansion case, the Bondi radius increases faster,
R_ Bondi/R_ peak∝ t^4/3,
which leads to a smaller τ_ capture,
τ_ capture≃ 11 days(M_∙/10^7 M_⊙)^3/4(v_ rel/10^4 km/s)^-2.5(b/R_⋆ + 5/5)^-3.
The duration of the accretion process would be the same as Equation <ref>.
§.§.§ Astrophysical implication for black holes
The possibility of the accretion of at least some fraction of the expanding cloud onto a nearby SMBH can have significant implications for the growth of BHs in the cosmic landscape. While several mechanisms for massive BH formation have been proposed, the precise mechanism for growing BH seeds at extremely high redshifts remains uncertain <cit.>. The proposed mechanisms include 1) rapid growth of the remnants of Population III stars via super-Eddington accretion <cit.>, 2) the direct collapse of supermassive self-gravitating objects <cit.>, and 3) growth of BHs in a runaway process <cit.>. In principle, as long as a BH is more massive than the colliding stars, the velocity of stars around the BH can be large enough that stellar collisions are completely disruptive. Hence, the accretion of gas produced in stellar collisions onto a nearby BH can provide another avenue for the growth of stellar-mass BHs into massive BHs, in particular for seed BHs at high redshift.
However, disruptive collisions are not the only growth mechanism for BHs in stellar-dense environments. We show in Figure <ref> the regions around BHs in which several events possibly contributing to their growth, i.e., disruptive collisions, tidal disruption events, BH-star collisions, and direct captures by BHs, can occur. When the distance from the BH is less than a few times the Schwarzschild radius r_ Sch = 2 G M_∙/c^2 (dubbed the “direct capture” radius), the star falls directly into the BH (e.g., r<2r_ Sch for parabolic orbits). If the closest approach distance between the BH and a star is smaller than the stellar radius, r≲ R_⋆, they collide, during which the BH would gravitationally capture a fraction of the star and accrete it. When a star orbits at a distance greater than both the stellar radius and the direct capture radius, and smaller than the so-called tidal radius, r_ t = (M_∙/M_⋆)^1/3R_⋆, the BH's very strong tidal forces disrupt the star, creating debris, some of which would end up accreting onto the BH. This event is called a tidal disruption event <cit.>. Finally, the region for disruptive collisions between giants may be characterized by two distances: the distance within which the Keplerian velocity around the BH exceeds the stellar escape speed, r_ collision=(M_∙/M_⋆)R_⋆, and the tidal radius.
As shown in Figure <ref>, all four star-destroying events can contribute to the growth of stellar-mass and intermediate-mass BHs. However, for SMBHs with r_ Sch>R_⋆, only three events, namely disruptive collisions, tidal disruptions, and direct captures, can feed the BHs. For very massive BHs (e.g., M_∙>10^9 M_⊙), disruptive collisions would be the dominant and likely only observable transient among those considered here that leads to the mass growth of the BHs <cit.>.
This has an interesting implication for the detection of dormant BHs. TDEs have been considered a unique signpost for the existence of dormant SMBHs. However, because there is a maximum BH mass capable of disrupting stars, namely the M_∙ at which r_ t equals the direct capture radius, TDEs cannot be used to detect very massive quiescent black holes. In contrast, disruptive stellar collisions can occur near BHs at all mass scales, which makes these events a promising tool to probe the existence of very massive dormant BHs that cannot be probed by other transients. In particular, if the luminosity due to the interaction of the collision product with the BH is Eddington-limited, an inference of the BH mass would potentially be possible from the observed light curve.
Which type of event dominates in different mass ranges would depend on the stellar density, the accretion efficiency, and the occurrence rates; a detailed assessment is beyond the scope of this paper. We will examine this aspect in more detail in future work.
§.§ Particle acceleration
In this work we have conducted numerical hydrodynamical simulations that confirm the formation of strong shocks following a stellar collision event. These shocks arise from the high velocity of the outflow and its impact on the surrounding ISM in the galactic nucleus environment. These shock waves subsequently compress and heat the surrounding ISM gas.
The shocks formed in these stellar collisions provide an environment highly conducive to efficient particle acceleration. As particles interact with the turbulent magnetic fields expected close to the shock front, they can gain a significant fraction of the free energy available from the differential flow speeds (in the shock's rest frame, the upstream flows towards the shock with velocity V while the downstream moves away from the shock at velocity V/4). This process of diffusive particle acceleration at shocks, an example of first order Fermi acceleration, is expected to result in the generation of a power-law spectrum of non-thermal particles up to very high energies <cit.>.
A fraction of the energy in the accelerated particle population produced by stellar collisions will subsequently be radiated via non-thermal emission through various energy loss processes <cit.>.
For instance, the accelerated electrons will produce synchrotron radiation as they spiral around the magnetic fields also generated during the collision. This emission is expected to be detectable in the radio, and potentially the X-ray, bands. In addition, the interaction between accelerated protons and the surrounding gas can generate gamma-ray emission through processes like inelastic proton-proton collisions.
The non-thermal radiation emitted by the accelerated particles produced in stellar collisions offers valuable diagnostics of the physical processes at play during violent stellar collision events. By analyzing the observed non-thermal radiation, we can gain a clearer understanding of the shock front environment. Ultimately, these insights will elucidate the dynamics of the collision itself. Our numerical hydrodynamics simulations, coupled with theoretical estimates for the production of non-thermal particles not included in our numerical description, can provide insights into particle acceleration in stellar collisions. This will be addressed in a separate work.
§ CONCLUSION AND SUMMARY
In this work, we investigate the hydrodynamics of black hole-driven disruptive collisions (BDCs) between giants in galactic nuclei and their observational signatures using two state-of-the-art codes, the 3D moving-mesh hydrodynamics code AREPO and the 1D stellar evolution code MESA. The initial conditions of our simulations involve two identical 1 M_⊙ giants with different radii, initial relative speeds, and impact parameters. This work complements the analytical calculations presented by <cit.>, and the two are generally consistent with each other. We improve the estimates of the events' observables by accurately taking into account the realistic stellar internal structure and non-linear hydrodynamic effects.
When two stars collide with exceedingly large kinetic energy, very strong shocks are created along the contact surface. The two stars are fully destroyed and merged into an homologously, quasi-spherically, and supersonically expanding gas cloud. The maximum expansion speed of the cloud is larger than the initial relative velocity of the stars by a factor of 3-6. The expansion speed at a given mass coordinate stays the same, but the outer edge of the cloud slows down because of the interaction with the background medium. As it expands, the overall level of its density and temperature drops following a power-law ∝ t^-3 and ∝ t^-1, respectively, becoming optically thin within a few hundred days. At any given time of evolution up to 30 days, the density and temperature of the inner regions of the cloud remain relatively constant, rapidly decaying towards the outer edge, following a power-law: ρ(r)∝ r^-8-r^-12 and T(r)∝ r^-1 - r^-2. These quantities exhibit weak dependencies on the stellar radius within 10-100 and the impact parameter within b≃ 0.4. But the dependence on the collision velocity is relatively strong. We provide fitting formulae for the average cloud density, temperature, maximum expansion speed, and optical depth (Equations <ref>- <ref>), which would be useful for analytic estimates for these BDCs.
One of the key findings of our study is to numerically estimate the amount of radiation energy converted from the initial kinetic energy, which plays a crucial role in determining the observable properties of the collisions. The overall trend of the conversion efficiency, defined as the ratio of the converted radiation energy to the initial kinetic energy, is such that it peaks at ≳ 0.1 at collision, decays to 10^-4 - 10^-2 within 10 days, and then gradually increases. The efficiency reaches 10^-2 - 10^-1 in one month since the collision. But its magnitude depends on various factors, including the stellar radius, impact parameter, and collision velocity. More specifically, a collision between larger stars colliding at a higher speed with a smaller impact parameter tends to result in greater conversion efficiency.
We estimate the luminosity, the blackbody radius, and the blackbody temperature, using the converted radiation energy and local cooling time within the gas cloud. The peak luminosity can reach values exceeding 10^42 erg/s and exhibits a similar dependence to that of the conversion efficiency. Over time, the luminosity decays following a power-law of t^-0.8 at early times and t^-0.4 after 10 days since the collision. The blackbody radius increases almost linearly with time (∝ t^0.8), while the temperature decreases, following a power-law of t^-0.5-t^-0.6. The collision events would initially produce bursts of extreme ultraviolet (≃10eV) gradually shifting to optical (≃ 0.1eV), with temporal evolution spanning from days to weeks. These events can be observed by ongoing (e.g., ZTF [https://www.ztf.caltech.edu], ASAS-SN [https://www.astronomy.ohio-state.edu/asassn]) and future (e.g., LSST [https://www.lsst.org] and ULTRASAT [https://www.weizmann.ac.il/ultrasat]) surveys. More detailed radiation transport calculations will be carried out in our follow-up project, with which the detection rate for each survey will be estimated.
In addition to the burst resulting from the stellar collision itself, a subsequent burst occurs due to the accretion of the gas cloud onto the supermassive black hole in the galactic center, starting 5(M_∙/10^7 M_⊙) days after the collision for v_ rel=10^4 km/s. Assuming Bondi accretion, the accretion luminosity can easily exceed the Eddington limit as well as the luminosity from the stellar collision. Because the Bondi radius expands faster than the gas cloud, the entire cloud would be gravitationally captured in the black hole's potential in 11(M_∙/10^7 M_⊙)^3/4 days and subsequently accrete onto the black hole. It would take ≲ 9 (M_∙/10^7 M_⊙)^-1 years if the entire cloud were accreted. Therefore, the overall luminosity curve would include a peak from the collision event, followed by a rise to the Eddington luminosity. This heightened luminosity can be sustained for up to 10 years.
Although the estimates of the time scales and luminosity due to gas-black hole interactions are still at the order-of-magnitude level, they carry very important implications. The possibility of gas accretion onto a nearby black hole at all mass scales shortly after the collision suggests that such collisions can provide another mechanism for black hole growth. Tidal disruption events have been proposed as a tool to detect dormant black holes, mostly up to 10^8 M_⊙. In contrast, because disruptive stellar collisions can occur near very massive dormant black holes (>10^9 M_⊙), such collisions are a potentially promising tool to probe the existence of very massive dormant black holes.
Finally we demonstrate the conversion of kinetic energy into radiation energy, providing insights into the efficiency of particle acceleration in these collisions. The resulting bursts of ultraviolet and optical emission indicate the generation of high-energy particles, highlighting the importance of particle acceleration processes in understanding the observational signatures of such events.
While this study is, to our knowledge, the first detailed hydrodynamics calculation of BDCs between giants, there are a few caveats in our modelling that will be improved in future work. First, the assumption of local thermodynamic equilibrium is only valid for optically thick gas. This means the evolution of the collision product at early times is accurate, but as the gas cloud becomes optically thin, our treatment of radiation pressure becomes inaccurate. As remarked in <ref>, this would affect the shape of the light curves at late times. We will perform detailed non-equilibrium radiation transport calculations for the late-time evolution in our follow-up project, using our hydrodynamics calculations at early times when the assumption of local thermodynamic equilibrium is valid. This will significantly improve the light curve modelling. Second, there are several physical effects that we have not yet considered, such as magnetic fields, recombination, and the existence of non-thermal particles. Using the machinery that we built for this work, we will explore these effects in a series of dedicated studies.
BDCs will offer insights into many astrophysical aspects that can not be provided by other transients, such as the stellar dynamics and potential particle acceleration in galactic nuclei and globular clusters, black hole growth, and detection of dormant black holes.
§ ACKNOWLEDGEMENTS
TR is grateful to Luc Dessart and Re'em Sari for constructive comments on the manuscript, Hans-Thomas Janka for fruitful discussions for similarities and dissimilarities of these events with core-collapse supernovae, and Ruggero Valli for providing an input file for creating the giants used for the simulations.
This research project was conducted using computational resources (and/or scientific computing services) at the Max-Planck Computing & Data Facility. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b166ea10. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) – 440719683. In addition, some of the simulations were performed on the national supercomputer Hawk at the High Performance Computing Center Stuttgart (HLRS) under the grant number 44232. PAS acknowledges the funds from the “European Union NextGenerationEU/PRTR”,
Programa de Planes Complementarios I+D+I (ref. ASFAE/2022/014).
§ DATA AVAILABILITY
Any data used in this analysis are available on reasonable request from the first author.
[Alexander & HopmanAlexander &
Hopman2009]AlexanderHopman2009
Alexander T., Hopman C., 2009, @doi []
10.1088/0004-637X/697/2/1861, https://ui.adsabs.harvard.edu/abs/2009ApJ...697.1861A 697, 1861
[Amaro SeoaneAmaro
Seoane2023a]AmaroSeoane2023b
Amaro Seoane P., 2023a, arXiv e-prints, https://ui.adsabs.harvard.edu/abs/2023arXiv230710330A p. arXiv:2307.10330
[Amaro SeoaneAmaro
Seoane2023b]AmaroSeoane2023
Amaro Seoane P., 2023b, @doi [] 10.3847/1538-4357/acb8b9, https://ui.adsabs.harvard.edu/abs/2023ApJ...947....8A 947, 8
[Bahcall & WolfBahcall &
Wolf1976]BahcallWolf1976
Bahcall J. N., Wolf R. A., 1976, @doi [] 10.1086/154711, https://ui.adsabs.harvard.edu/abs/1976ApJ...209..214B 209, 214
[Balberg & YassurBalberg &
Yassur2023]BalbergYassur2023
Balberg S., Yassur G., 2023, @doi [arXiv e-prints]
10.48550/arXiv.2305.04997, https://ui.adsabs.harvard.edu/abs/2023arXiv230504997B p. arXiv:2305.04997
[BegelmanBegelman1979]Begelman1979
Begelman M. C., 1979, @doi [] 10.1093/mnras/187.2.237, https://ui.adsabs.harvard.edu/abs/1979MNRAS.187..237B 187, 237
[BellBell1978]1978MNRAS.182..147B
Bell A. R., 1978, @doi [] 10.1093/mnras/182.2.147, https://ui.adsabs.harvard.edu/abs/1978MNRAS.182..147B 182, 147
[Bellm et al.,Bellm et al.2019]ZTF
Bellm E. C., et al., 2019, @doi [] 10.1088/1538-3873/aaecbe, https://ui.adsabs.harvard.edu/abs/2019PASP..131a8002B 131, 018002
[Benz & HillsBenz &
Hills1987]BenzHills1987
Benz W., Hills J. G., 1987, @doi [] 10.1086/165857, https://ui.adsabs.harvard.edu/abs/1987ApJ...323..614B 323, 614
[Benz & HillsBenz &
Hills1992]BenzHills21992
Benz W., Hills J. G., 1992, @doi [] 10.1086/171230, https://ui.adsabs.harvard.edu/abs/1992ApJ...389..546B 389, 546
[Blandford & OstrikerBlandford &
Ostriker1978]1978ApJ...221L..29B
Blandford R. D., Ostriker J. P., 1978, @doi [] 10.1086/182658,
https://ui.adsabs.harvard.edu/abs/1978ApJ...221L..29B 221, L29
[BondiBondi1952]Bondi1952
Bondi H., 1952, @doi [] 10.1093/mnras/112.2.195, https://ui.adsabs.harvard.edu/abs/1952MNRAS.112..195B 112, 195
[Bondi & HoyleBondi &
Hoyle1944]BondiHoyle1944
Bondi H., Hoyle F., 1944, @doi [] 10.1093/mnras/104.5.273,
https://ui.adsabs.harvard.edu/abs/1944MNRAS.104..273B 104, 273
[Choi, Dotter, Conroy, Cantiello,
Paxton & JohnsonChoi et al.2016]Choi+2016
Choi J., Dotter A., Conroy C., Cantiello M., Paxton B.,
Johnson B. D., 2016, @doi [] 10.3847/0004-637X/823/2/102, https://ui.adsabs.harvard.edu/abs/2016ApJ...823..102C 823, 102
[Colpi & DottiColpi &
Dotti2011]ColpiDotti2011ASL.....4..181C
Colpi M., Dotti M., 2011, @doi [Advanced Science Letters]
10.1166/asl.2011.1205, https://ui.adsabs.harvard.edu/abs/2011ASL.....4..181C 4, 181
[Dale & DaviesDale &
Davies2006]DaleDavies2006
Dale J. E., Davies M. B., 2006, @doi []
10.1111/j.1365-2966.2005.09937.x, https://ui.adsabs.harvard.edu/abs/2006MNRAS.366.1424D 366, 1424
[Devecchi, Volonteri, Rossi, Colpi &
Portegies ZwartDevecchi et al.2012]Devecchi+2012
Devecchi B., Volonteri M., Rossi E. M., Colpi M., Portegies
Zwart S., 2012, @doi [] 10.1111/j.1365-2966.2012.20406.x, https://ui.adsabs.harvard.edu/abs/2012MNRAS.421.1465D 421, 1465
[Freitag & BenzFreitag &
Benz2005]FreitagBenz2005
Freitag M., Benz W., 2005, @doi []
10.1111/j.1365-2966.2005.08770.x, https://ui.adsabs.harvard.edu/abs/2005MNRAS.358.1133F 358, 1133
[GRAVITY Collaboration et al.,GRAVITY
Collaboration et al.2019]GRAVITY
GRAVITY Collaboration et al., 2019, @doi []
10.1051/0004-6361/201935656, https://ui.adsabs.harvard.edu/abs/2019A A...625L..10G 625, L10
[Gillessen et al.,Gillessen
et al.2019]Gillessen+2019
Gillessen S., et al., 2019, @doi [] 10.3847/1538-4357/aaf4f8, https://ui.adsabs.harvard.edu/abs/2019ApJ...871..126G 871, 126
[Haiman & LoebHaiman &
Loeb2001]HaimanLeob2001
Haiman Z., Loeb A., 2001, @doi [] 10.1086/320586, https://ui.adsabs.harvard.edu/abs/2001ApJ...552..459H 552, 459
[HerwigHerwig2000]herwig_evolution_2000
Herwig F., 2000, @doi [A&A] 10.48550/arXiv.astro-ph/0007139, 360, 952
[HillsHills1988]Hills1988
Hills J. G., 1988, @doi [] 10.1038/331687a0, https://ui.adsabs.harvard.edu/abs/1988Natur.331..687H 331, 687
[Hills & DayHills &
Day1976]HillsDay1976
Hills J. G., Day C. A., 1976, , https://ui.adsabs.harvard.edu/abs/1976ApL....17...87H 17, 87
[Hut et al.,Hut et al.1992]Hut1992
Hut P., et al., 1992, @doi [] 10.1086/133085, https://ui.adsabs.harvard.edu/abs/1992PASP..104..981H 104, 981
[Iglesias & RogersIglesias &
Rogers1996]OPAL
Iglesias C. A., Rogers F. J., 1996, @doi [] 10.1086/177381,
https://ui.adsabs.harvard.edu/abs/1996ApJ...464..943I 464, 943
[Inayoshi, Visbal & HaimanInayoshi
et al.2020]Inayoshi+2020
Inayoshi K., Visbal E., Haiman Z., 2020, @doi []
10.1146/annurev-astro-120419-014455, https://ui.adsabs.harvard.edu/abs/2020ARA A..58...27I 58, 27
[Ivezić et al.,Ivezić
et al.2019]LSST
Ivezić Ž., et al., 2019, @doi []
10.3847/1538-4357/ab042c, https://ui.adsabs.harvard.edu/abs/2019ApJ...873..111I 873, 111
[Kochanek et al.,Kochanek
et al.2017]ASASSN
Kochanek C. S., et al., 2017, @doi [] 10.1088/1538-3873/aa80d9,
https://ui.adsabs.harvard.edu/abs/2017PASP..129j4502K 129, 104502
[Lai, Rasio & ShapiroLai
et al.1993]Lai+1993
Lai D., Rasio F. A., Shapiro S. L., 1993, @doi []
10.1086/172946, https://ui.adsabs.harvard.edu/abs/1993ApJ...412..593L 412, 593
[Langer, Fricke & SugimotoLanger
et al.1983]langer_semiconvective_1983
Langer N., Fricke K. J., Sugimoto D., 1983, A&A, https://ui.adsabs.harvard.edu/abs/1983A A...126..207L 126, 207
[LedouxLedoux1947]ledoux_stellar_1947
Ledoux W. P., 1947, @doi [ApJ] 10.1086/144905, 105, 305
[Lupi, Haardt, Dotti, Fiacconi, Mayer
& MadauLupi et al.2016]Lupi+2016
Lupi A., Haardt F., Dotti M., Fiacconi D., Mayer L., Madau
P., 2016, @doi [] 10.1093/mnras/stv2877, https://ui.adsabs.harvard.edu/abs/2016MNRAS.456.2993L 456, 2993
[Matthews, Bell & BlundellMatthews
et al.2020]Matthews+2020
Matthews J. H., Bell A. R., Blundell K. M., 2020, @doi []
10.1016/j.newar.2020.101543, https://ui.adsabs.harvard.edu/abs/2020NewAR..8901543M 89, 101543
[Neumayer, Seth & BökerNeumayer
et al.2020]Neumayer+2020
Neumayer N., Seth A., Böker T., 2020, @doi []
10.1007/s00159-020-00125-0, https://ui.adsabs.harvard.edu/abs/2020A ARv..28....4N 28, 4
[Ohlmann, Röpke, Pakmor &
SpringelOhlmann et al.2017]Ohlmann+2017
Ohlmann S. T., Röpke F. K., Pakmor R., Springel V., 2017,
@doi [] 10.1051/0004-6361/201629692, https://ui.adsabs.harvard.edu/abs/2017A A...599A...5O 599, A5
[Omukai & NishiOmukai &
Nishi1998]OmukaiNishi1998
Omukai K., Nishi R., 1998, @doi [] 10.1086/306395, https://ui.adsabs.harvard.edu/abs/1998ApJ...508..141O 508, 141
[Orlando, Miceli, Ustamujic, Tutone,
Greco, Petruk, Bocchino & PeresOrlando
et al.2021]Orlando+2021
Orlando S., Miceli M., Ustamujic S., Tutone A., Greco E.,
Petruk O., Bocchino F., Peres G., 2021, @doi []
10.1016/j.newast.2020.101566, https://ui.adsabs.harvard.edu/abs/2021NewA...8601566O 86, 101566
[Pakmor, Springel, Bauer, Mocz,
Munoz, Ohlmann, Schaal & ZhuPakmor et al.2016]ArepoHydro
Pakmor R., Springel V., Bauer A., Mocz P., Munoz D. J.,
Ohlmann S. T., Schaal K., Zhu C., 2016, @doi []
10.1093/mnras/stv2380, https://ui.adsabs.harvard.edu/abs/2016MNRAS.455.1134P 455, 1134
[Paxton, Bildsten, Dotter, Herwig,
Lesaffre & TimmesPaxton et al.2011]Paxton+2011
Paxton B., Bildsten L., Dotter A., Herwig F., Lesaffre P.,
Timmes F., 2011, @doi [] 10.1088/0067-0049/192/1/3, http://adsabs.harvard.edu/abs/2011ApJS..192....3P 192, 3
[Paxton et al.,Paxton
et al.2013]paxton:13
Paxton B., et al., 2013, @doi [] 10.1088/0067-0049/208/1/4, http://adsabs.harvard.edu/abs/2013ApJS..208....4P 208, 4
[Preto & Amaro-SeoanePreto &
Amaro-Seoane2010]PretoAmaroSeoane2010
Preto M., Amaro-Seoane P., 2010, @doi []
10.1088/2041-8205/708/1/L42, https://ui.adsabs.harvard.edu/abs/2010ApJ...708L..42P 708, L42
[RauchRauch1999]Rauch+1999
Rauch K. P., 1999, @doi [] 10.1086/306953, https://ui.adsabs.harvard.edu/abs/1999ApJ...514..725R 514, 725
[ReesRees1988]Rees1988
Rees M. J., 1988, @doi [] 10.1038/333523a0, https://ui.adsabs.harvard.edu/abs/1988Natur.333..523R 333, 523
[ReimersReimers1975]Reimers+1975
Reimers D., 1975, in , Problems in stellar atmospheres and envelopes..
pp 229–256
[Rizzuto, Naab, Rantala, Johansson,
Ostriker, Stone, Liao & IrodotouRizzuto
et al.2023]Rizzuto+2023
Rizzuto F. P., Naab T., Rantala A., Johansson P. H., Ostriker
J. P., Stone N. C., Liao S., Irodotou D., 2023, @doi []
10.1093/mnras/stad734, https://ui.adsabs.harvard.edu/abs/2023MNRAS.521.2930R 521, 2930
[Rose, Naoz, Gautam, Ghez, Do, Chu
& BecklinRose et al.2020]Rose+2020
Rose S. C., Naoz S., Gautam A. K., Ghez A. M., Do T., Chu D.,
Becklin E., 2020, @doi [] 10.3847/1538-4357/abc557, https://ui.adsabs.harvard.edu/abs/2020ApJ...904..113R 904, 113
[Rose, Naoz, Sari & LinialRose
et al.2023]Rose+2023
Rose S. C., Naoz S., Sari R., Linial I., 2023, @doi [arXiv
e-prints] 10.48550/arXiv.2304.10569, https://ui.adsabs.harvard.edu/abs/2023arXiv230410569R p. arXiv:2304.10569
[Ryu, Tanaka, Perna & HaimanRyu
et al.2016]Ryu+2016
Ryu T., Tanaka T. L., Perna R., Haiman Z., 2016, @doi []
10.1093/mnras/stw1241, https://ui.adsabs.harvard.edu/abs/2016MNRAS.460.4122R 460, 4122
[Sassano, Capelo, Mayer, Schneider &
ValianteSassano et al.2023]Sassano+2023
Sassano F., Capelo P. R., Mayer L., Schneider R., Valiante R.,
2023, @doi [] 10.1093/mnras/stac3608, https://ui.adsabs.harvard.edu/abs/2023MNRAS.519.1837S 519, 1837
[Shvartzvald et al.,Shvartzvald
et al.2023]ULTRASAT
Shvartzvald Y., et al., 2023, @doi [arXiv e-prints]
10.48550/arXiv.2304.14482, https://ui.adsabs.harvard.edu/abs/2023arXiv230414482S p. arXiv:2304.14482
[SpringelSpringel2010]Arepo
Springel V., 2010, @doi [] 10.1111/j.1365-2966.2009.15715.x,
https://ui.adsabs.harvard.edu/abs/2010MNRAS.401..791S 401, 791
[Stone, Küpper & OstrikerStone
et al.2017]Stone+2017
Stone N. C., Küpper A. H. W., Ostriker J. P., 2017, @doi
[] 10.1093/mnras/stx097, https://ui.adsabs.harvard.edu/abs/2017MNRAS.467.4180S 467, 4180
[Tagawa, Haiman & KocsisTagawa
et al.2020]Tagawa+2020
Tagawa H., Haiman Z., Kocsis B., 2020, @doi []
10.3847/1538-4357/ab7922, https://ui.adsabs.harvard.edu/abs/2020ApJ...892...36T 892, 36
[Volonteri & ReesVolonteri &
Rees2005]Volonteri+2005
Volonteri M., Rees M. J., 2005, @doi [] 10.1086/466521, https://ui.adsabs.harvard.edu/abs/2005ApJ...633..624V 633, 624
[Weinberger, Springel &
PakmorWeinberger et al.2020]Arepo2
Weinberger R., Springel V., Pakmor R., 2020, @doi []
10.3847/1538-4365/ab908c, https://ui.adsabs.harvard.edu/abs/2020ApJS..248...32W 248, 32
[Yoshida, Omukai & HernquistYoshida
et al.2008]Yoshida+2008
Yoshida N., Omukai K., Hernquist L., 2008, @doi [Science]
10.1126/science.1160259, https://ui.adsabs.harvard.edu/abs/2008Sci...321..669Y 321, 669
[Zwick, Mayer, Haemmerlé &
KlessenZwick et al.2023]2023MNRAS.518.2076Z
Zwick L., Mayer L., Haemmerlé L., Klessen R. S., 2023, @doi
[] 10.1093/mnras/stac3204, https://ui.adsabs.harvard.edu/abs/2023MNRAS.518.2076Z 518, 2076
§ LUMINOSITY ESTIMATE
Figure <ref> shows the luminosity L_2 (top) estimated using Equation <ref> and the resulting blackbody temperature T_ BB (bottom), as a function of time measured since the collision for all our models.
|
http://arxiv.org/abs/2307.03908v1 | 20230708054722 | Incorporating Deep Q -- Network with Multiclass Classification Algorithms | [
"Noopur Zambare",
"Ravindranath Sawane"
] | cs.LG | [
"cs.LG"
] |
1 Indian Institute of Technology, Jodhpur, India
2 Western University, Ontario, Canada
In this study, we explore how the Deep Q-Network (DQN) might improve the performance of multiclass classification algorithms. We use a benchmark dataset from Kaggle to create a framework incorporating DQN with existing supervised multiclass classification algorithms. The findings of this study provide insight into how deep reinforcement learning strategies may be used to increase multiclass classification accuracy; such strategies have already been applied in a number of fields, including image recognition, natural language processing, and bioinformatics. Beyond the wider application of the Deep Q-Network to multiclass classification, this study focuses on the prediction of financial distress in companies. Identifying businesses that are likely to experience financial distress is a crucial task in finance and risk management. A business is said to be in financial distress when it faces serious challenges in keeping its operations going and meeting its financial obligations. This commonly happens when a company suffers a sharp and sustained decline in profitability, cash flow issues, or an unsustainable level of debt.
DQN (Deep Q - Network)Deep Reinforcement Learning Financial Distress Multiclass Classification, Decision Tree Classifier Naive Bayes, Random Forest Classifier
§ INTRODUCTION
§.§ Background
The goal of Reinforcement Learning (RL), a branch of machine learning, is to train agents to make sequential decisions in an environment so as to maximise a reward signal. RL algorithms learn through trial and error by interacting with the environment, receiving feedback in the form of rewards or penalties, and adapting their behaviour in response. The Deep Q-Network (DQN) is a deep reinforcement learning method that combines the Q-learning algorithm with the representational power of deep neural networks.
Financial distress refers to a state in which a company faces considerable challenges in meeting its financial obligations. Early indications of financial problems enable proactive actions such as restructuring, obtaining additional financing, or putting cost-cutting measures into place. In recent years, machine learning has made breakthroughs in applying reinforcement learning algorithms, particularly DQN, to different problem domains. We use a range of supervised learning algorithms, such as Decision Tree, Random Forest Classifier, and Naive Bayes, to create the DQN framework; these algorithms serve as the underlying models of the DQN ensemble. By using supervised learning algorithms as the foundation models, we intend to study the potential advantages and performance improvements that can be achieved by combining supervised learning with the reinforcement learning approach of DQN. This study explores the use of DQN for multiclass classification to forecast financial distress in businesses.
§.§ Problem Statement
The goal of this paper is to investigate the use of the Deep Q-Network in multiclass classification problems. We intend to adapt DQN's capabilities to multiclass classification, even though its typical application lies in reinforcement learning. The subject of interest is the application of DQN for multiclass classification to predict financial distress in businesses.
By effectively resolving this problem, we want to open up the possibility of applying reinforcement learning principles to a variety of classification problems.
§ STATE OF THE ART
In DQN, our goal is to train an action-value function Q(s, a) that calculates the predicted cumulative reward for performing action 'a' in state 's'. The Bellman Equation or Q-Learning update equation is defined as follows:
Q(s,a) ← Q(s,a) + α[ r + γ max_a' Q(s^', a^') - Q(s,a) ]
where, Q(s, a) = current estimate of the expected future reward for taking action 'a' in state 's', α = learning rate, r = immediate reward received, γ = discount factor, and s', a' = the next state and the actions available in it. The exploration-exploitation trade-off is controlled separately by ϵ through the ϵ-greedy policy used during training.
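As a concrete illustration, this update can be written in a few lines of Python for the tabular case. The sketch below is ours (not taken from the paper's implementation) and assumes discrete, hashable states and integer-indexed actions.

def q_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step; Q is a dict mapping (state, action) -> value."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in range(n_actions))
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)      # temporal-difference error
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
    return Q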
The Deep Q-Network is a variation of Q-learning that uses neural networks to approximate the Q-value function, which gives the expected reward for performing a specific action in a given state. The Q-value function is represented as a table in conventional Q-learning but as a neural network in DQN.
The DQN algorithm uses both experience replay and a technique called fixed Q-targets to stabilise the learning process. Experience replay stores observed transitions (s, a, r, s') in a replay buffer and samples small batches of experiences from it for training. Fixed Q-targets refers to using a target network whose parameters are held fixed for a predetermined number of iterations before being updated with the parameters of the online network.
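A minimal sketch of these two ingredients is given below; it is illustrative only, and the buffer capacity, batch size, synchronisation interval, and dict-based parameter store are our own assumptions rather than details from the paper.

import random
from collections import deque

class ReplayBuffer:
    """Stores transitions (s, a, r, s') and samples small mini-batches for training."""
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def push(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))

    def sample(self, batch_size=32):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

def maybe_sync_target(online_params, target_params, step, sync_every=1000):
    """Fixed Q-targets: copy the online parameters into the target network only
    every `sync_every` steps, keeping the bootstrap target stable in between."""
    if step % sync_every == 0:
        target_params.update(online_params)   # parameters assumed stored as dicts
    return target_params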
§ METHODOLOGY
§.§ Dataset
The study uses a dataset obtained from Kaggle that includes different financial parameters and company characteristics. The dataset, which is available in CSV format, contains statistics on each company's performance as well as relevant contextual information. A preprocessing step handles missing data, normalises features, and transforms categorical variables using methods such as label encoding. Training and testing sets are then created from the preprocessed dataset.
§.§ Baseline Multiclass Classification Algorithms
§.§.§ Decision Tree
In this algorithm, the feature space is recursively divided according to a splitting criterion in order to generate a decision tree. Information gain and Gini impurity are the most widely used criteria. Decision trees can handle both categorical and numerical features and can capture non-linear relationships. However, they show a tendency to overfit the training set if they are not appropriately regularised or pruned. Overfitting can be reduced using strategies such as pruning, requiring a minimum number of samples to split a node, or using ensemble methods.
§.§.§ Random Forest Classifier
An ensemble technique called the Random Forest Classifier combines several decision trees to produce predictions. A random subset of features is taken into account at each split of each tree, which is trained on a bootstrap sample of the training data. By combining the predictions of various trees, either through majority voting or averaging, the final prediction is obtained.
§.§.§ Naive Bayes
The Naive Bayes algorithm is a probabilistic classifier that relies on Bayes' theorem and assumes that the features are conditionally independent given the class. Given the input features, it calculates the probability of each class and chooses the class with the highest probability as the prediction.
§.§ Multiclass Classification Algorithms with DQN Integration
§.§.§ Defining Agent
The DQN class is used to represent the agent. Based on the input features given, it acts as the decision-making entity that learns to categorise the different levels of financial distress. The agent employs a method akin to the DQN, using a group of Decision Tree Classifier, Random Forest Classifier and Naive Bayes models as the Q-network.
§.§.§ Defining Environment
In this case, the environment is the classification problem itself, which involves determining the levels of financial distress based on the given input features. The agent receives rewards from the environment as feedback, which helps it improve its classification performance.
§.§.§ State Representation
The input features that were utilised to train the agent define the state representation. In this instance, the features Company, Time, x1, x2, x3, and x4 serve as representations of the state. These features are taken out of the data frame and sent to the classification agent as input.
§.§.§ Setting Reward Function
The act() method of the DQN class contains a definition of the reward. If any of the true class labels in the y variable match the predicted action (class label), the agent is rewarded with a value of 1. If not, it is rewarded with -1. The goal of the reward system is to encourage the agent to forecast classes correctly.
§.§.§ Selection of Action
The action selection method makes sure that the model chooses the best class label depending on the situation at hand and previously learnt information.
The class labels that are available in this situation make up the action space. To determine the class for a particular input, the agent will select an action (class label) from this collection. The number of classes in the classification problem and the size of the action space are related.
§.§.§ Training
The training process iterates over episodes and over the steps within each episode. Following an epsilon-greedy exploration-exploitation strategy, the agent chooses an action (class label). It is rewarded according to the correctness of its prediction, and the ensemble of base models is updated. Training continues for the specified number of episodes.
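To make the agent, reward, and training loop described above concrete, the following simplified Python sketch shows one way such a setup might look. It is our own illustration, not the authors' code: the class and function names are invented, a Decision Tree ensemble stands in for the full Decision Tree/Random Forest/Naive Bayes Q-network surrogate, and X is assumed to be a 2-D NumPy feature array with y containing integer-encoded class labels.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

class EnsembleDQNAgent:
    """DQN-style classification agent: an ensemble of supervised classifiers
    stands in for the Q-network, epsilon-greedy chooses a class label (action),
    and a +1/-1 reward marks correct/incorrect predictions."""

    def __init__(self, n_actions, n_models=5, epsilon=0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.models = [DecisionTreeClassifier(max_depth=5) for _ in range(n_models)]
        self.fitted = False

    def act(self, x):
        # Explore with probability epsilon, or before any model has been fit
        if not self.fitted or np.random.rand() < self.epsilon:
            return int(np.random.randint(self.n_actions))
        votes = [int(m.predict(x.reshape(1, -1))[0]) for m in self.models]
        return int(np.bincount(votes, minlength=self.n_actions).argmax())

    def update(self, X, y):
        # Refit each ensemble member on a bootstrap sample of the observed data
        for m in self.models:
            idx = np.random.randint(0, len(X), len(X))
            m.fit(X[idx], y[idx])
        self.fitted = True

def train(agent, X, y, episodes=50):
    history = []
    for _ in range(episodes):
        reward = sum(1 if agent.act(X[i]) == y[i] else -1 for i in range(len(X)))
        history.append(reward)      # cumulative episode reward (+1 correct, -1 wrong)
        agent.update(X, y)
    return history

A rising episode reward in `history` indicates that the ensemble's predictions are increasingly matching the true labels, which mirrors the reward-driven improvement described in this section.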
§.§.§ Evaluation
By comparing the predicted labels with the actual labels using the test data, it is feasible to assess how accurate the agent's predictions were. The calculated accuracy of the base model and the accuracy of the DQN-based agent after training are compared.
§.§ Evaluation Metrics
The metrics used in the analysis are accuracy, recall, and precision. In addition, the performance of the models was analyzed using a confusion matrix.
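These metrics can be computed directly with scikit-learn, as in the short sketch below. Macro averaging over classes is our assumption here, since the paper does not state which averaging mode was used for the multiclass precision and recall.

from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)

def evaluate(y_true, y_pred):
    """Evaluation metrics used for the model comparison."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
        "precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }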
§ RESULTS AND ANALYSIS
§.§ Comparison with Baseline Algorithms
On the chosen benchmark datasets, the performance of the proposed framework, which incorporates Deep Q-Network with multiclass classification algorithms, is compared with that of the baseline algorithms.
Table: Comparative Analysis

Model            Accuracy   Recall   Precision
Decision Tree      0.98      0.50      0.50
  with DQN         0.33      0.28      0.34
Random Forest      0.99      0.50      0.50
  with DQN         0.32      0.29      0.34
Naive Bayes        0.99      0.75      0.67
  with DQN         0.31      0.28      0.34
§.§ Analysis of Computational Efficiency
§.§.§ Decision Tree
* In comparison to the DQN-based model, the baseline model (Decision Tree Classifier) typically takes less time to train. Decision trees can be trained quickly because they directly learn the decision boundaries and feature splits without iterative optimisation. In contrast, the DQN-based model employs a more computationally costly training procedure that requires repeatedly training an ensemble of Decision Tree Classifiers.
* Compared to the DQN-based approach, the baseline model often uses a smaller amount of memory. To store the separate models, the ensemble of Decision Tree Classifiers utilised in the DQN model needs more memory. The baseline approach, in comparison, only has to keep one decision tree, which requires less memory.
* Comparing the baseline model to the DQN-based approach, the baseline model is certainly more effective in terms of computation.
§.§.§ Random Forest Classifier
* In comparison to the DQN-based model, the baseline model (Random Forest Classifier) typically requires less training time. Random forests can be trained efficiently because numerous decision trees can be built at once using parallel processing, with each tree trained independently on a random subset of features and data samples. The DQN-based model, on the other hand, requires repeatedly training an ensemble of Random Forest Classifiers, which is computationally more taxing.
* Usually, the baseline model uses less memory than the DQN-based model. The DQN model's ensemble of Random Forest Classifiers requires extra memory to store each individual model.
* In conclusion, compared to the DQN-based model, the baseline model (Random Forest Classifier) is anticipated to be computationally more efficient in terms of training time, inference time, and memory use.
§.§.§ Naive Bayes
* Due to its simplicity, the basic framework (Gaussian Naive Bayes) is computationally effective for both training and prediction. While the DQN model similarly employs Naive Bayes classifiers, the ensemble technique adds more complexity and increases processing overhead when compared to the base model.
* A single Naive Bayes classifier, along with its associated parameters and probability distributions, must be stored in memory by the base model (Gaussian Naive Bayes). In order to store an ensemble of Naive Bayes classifiers, which consists of various models with their unique parameters and probability distributions, the DQN model needs memory.
§ DISCUSSION
§.§ Advantages
Multiclass classification methods that incorporate the Deep Q-Network (DQN) have various benefits and bring distinctive capabilities to the task. These benefits include:
* Handling Complex Decision-Making
* Adaptability to Dynamic Environments
* Handling Imbalanced Datasets
* Real-time Classification
§.§ Limitations
§.§.§ Large Memory Requirements
Especially when employing experience replay, which includes storing and sampling from a significant replay buffer, DQN often needs a lot of RAM.
§.§.§ Curse of Dimensionality
Finding effective policies and achieving efficient convergence can be more difficult when the DQN training process is affected by the curse of dimensionality. Consequently, DQN's ability to perform multiclass classification well may be constrained by its capacity to handle large feature spaces.
§.§.§ Limited Generalization to New Classes
The agent typically acquires policies specific to the classes found in the training set. Such policies handle known classes efficiently, but they have a limited ability to generalise to unfamiliar or new classes. In dynamic classification settings where new classes continually emerge, the technique is less adaptive, since incorporating new classes into the model often requires retraining or considerable fine-tuning.
§.§ Future scope
Incorporating the Deep Q-Network into multiclass classification algorithms has promising future prospects, including transfer learning and knowledge transfer, real-time classification, hierarchical multiclass classification, adaptive learning with dynamic feature selection, and many others.
§ CONCLUSION
The study uses multiclass classification to show the significance of using DQN for financial distress prediction in businesses. The study's findings may help businesses, investors, and financial institutions make informed decisions and take preventive action to reduce the risks associated with financial distress. Possible reasons why the DQN model achieves lower accuracy than the base models are:
* The base model's classifier is trained directly on the labelled training data using a traditional supervised learning methodology, learning the class boundaries and probability distributions from the data in a single pass. The DQN model, in contrast, is trained using a reinforcement learning methodology and iteratively changes its ensemble of classifiers based on the rewards it receives from the environment. This iterative training procedure may introduce noise and instability, resulting in less accurate convergence.
* While lowering bias and variance can help ensembles perform better, they also add to the complexity and risk of inconsistencies across the various models. Lower accuracy may be the consequence if the ensemble is unable to fully capture the underlying patterns and relationships.
|
http://arxiv.org/abs/2307.04369v1 | 20230710065440 | Exact generalized Turán number for $K_3$ versus suspension of $P_4$ | [
"Sayan Mukherjee"
] | math.CO | [
"math.CO",
"05C35"
] |
Let P_4 denote the path graph on 4 vertices.
The suspension of P_4, denoted by P̂_4, is the graph obtained via adding an extra vertex and joining it to all four vertices of P_4.
In this note, we demonstrate that for n≥ 8, the maximum number of triangles in any n-vertex graph not containing P̂_4 is ⌊ n^2/8⌋.
Our method uses simple induction along with computer programming to prove a base case of the induction hypothesis.
Keywords: generalized Turán problem, suspension of a graph, computer programming.
2020 Mathematics Subject Classification: 05C35.
§ INTRODUCTION
The generalized Turán number ex(n, T, H) is defined as the maximum number of copies of T in an n-vertex graph not containing H as a (not necessarily induced) subgraph.
When T=K_2, this is the Turán number ex(n,H) of the graph.
The first systematic study of ex(n, T, H) for T≠ K_2 was carried out by Alon and Shikhelman <cit.>.
In more recent years, several researchers have studied the asymptotic behavior of ex(n, K_3, H) for the case T=K_3 (see, for example <cit.>).
It is known that when χ(H)>3, ex(n,K_3,H)∼binom(χ(H)-1, 3)·(n/(χ(H)-1))^3, where χ(H) denotes the chromatic number of H <cit.>.
Alon and Shikhelman <cit.> extensively study the case when χ(H)=2.
Mubayi and the author <cit.> initiated the study of ex(n, K_3, H) for a simple family of graphs H with χ(H)=3.
For any graph G, they denoted the suspension Ĝ as the graph obtained from G by adding a new vertex v and joining it with all vertices of G.
They proceeded to analyze the asymptotic behavior of ex(n,K_3,Ĝ) for different bipartite graphs G.
One of the several bipartite graphs they consider is the path P_4 on four vertices. It was shown that for any n≥ 4,
n^2/8-O(1)≤ ex(n, K_3, P̂_4) < n^2/8+3n.
An exact result for sufficiently large n was given by Gerbner <cit.> using the technique of progressive induction. In particular, they prove that for a number K≤ 1575 and n≥ 525+4K,
(n,K_3,P_4) = ⌊ n^2/8⌋.
They mention that a proof of the upper bound of (<ref>) for n=8,9,10,11 together with induction would suffice to prove (<ref>) for every n≥ 8.
In this note, we leverage this idea to determine the exact value of (n, K_3, P_4) for every n≥ 4, thus closing the gap in the literature for this extremal problem.
For n≥ 8, ex(n, K_3, P̂_4) = ⌊ n^2/8⌋. For n=4,5,6,7, the values of ex(n, K_3, P̂_4) are 4,4,5,8, respectively.
The lower bound constructions for Theorem <ref> are different for the cases n∈{4,5,6,7} and n≥ 8.
Figure <ref> illustrates graphs on n vertices for n∈{4,5,6,7} that achieve the maximum number of triangles.
In fact, we shall see later in Section <ref> that these constructions are unique up to isomorphism.
The general lower bound construction considered in <cit.> (for n≥ 8) was the complete bipartite graph K_⌊ n/2⌋, ⌈ n/2⌉ with a matching in any of the even parts.
A short case analysis shows that the total number of triangles in these graphs is given by ⌊ n^2/8⌋, hence proving the lower bound in Theorem <ref> for general n.
Thus, the main goal of this manuscript is to prove that these lower bounds on (n,K_3,P_4) are tight.
This work is organized as follows.
We present some preliminaries in Section <ref>.
Then, we show the upper bound of Theorem <ref> for n≥ 5 in Section <ref>.
Finally, we make some concluding remarks regarding uniqueness of the lower bound constructions in Section <ref>.
§ PRELIMINARIES
Throughout the rest of this paper, we assume without loss of generality that all graphs are edge-minimal.
This implies that every edge of the graphs considered must lie in a triangle, as we can simply delete edges that do not help forming a triangle.
We also assume that the vertex set of any n-vertex graph in the rest of this section is {0,…,n-1}, and abuse notation to represent a K_3 on vertex subset {a,b,c} as simply abc.
Let n(G), e(G) and t(G) denote the number of vertices, edges and triangles in G, respectively.
Now we recall some definitions and state a two important lemmas from <cit.> and <cit.> which are instrumental in our proof.
For a graph G, two edges e and e' are said to be triangle-connected if there is a sequence of triangles {T_1,…,T_k} of G such that e∈ T_1, e'∈ T_k, and T_i and T_i+1 share a common edge for every 1≤ i ≤ k-1.
A subgraph H⊆ G is triangle-connected if e and e' are triangle-connected for every edges e and e' of H.
A subgraph H⊆ G is a triangle block (or simply a block) if it is edge-maximally triangle-connected.
By definition, the triangle blocks of any graph G are edge-disjoint.
Let B_s denote the book graph on (s+2) vertices, consisting of s triangles all sharing a common edge. Let this common edge be called the base of the B_s. The following lemma characterizes the triangle blocks of any P_4-free graph G.
Every triangle block of a P̂_4-free graph G is isomorphic to a K_4 or a B_s for some s≥ 1.
Let H⊆ G be an arbitrary triangle block.
If H contains only one or two triangles, it is isomorphic to B_1 or B_2.
Suppose H contains at least three triangles.
Let two of them be abx_1 and abx_2 (see Figure <ref>).
If another triangle is of the form ax_1y for some y∈ V(H), then there are two possible cases. If y≠ x_2, then N_H(a) contains the 4-path x_2bx_1y, a contradiction. Otherwise if y=x_2, then the vertices a,b,x_1,x_2 create a K_4, and this K_4 is a triangle block by itself.
Similarly, if a triangle contained any of the edges bx_1, ax_2, bx_2, we would end up with a K_4-block, and this block cannot be extended any further.
Therefore all triangles in H would intersect the edge ab, implying H≅ B_s for some s≥ 1.
Suppose G is an n-vertex P̂_4-free graph containing no K_4. Then, we have
t(G)≤⌊ n^2/8⌋.
By Lemma <ref>, all triangle blocks of G are isomorphic to B_s for some s≥ 1.
Let G' be obtained from G by deleting the base edges of each of the books (if s=1, delete any arbitrary edge).
As each triangle of G contains two distinct edges from G', we have t(G)=e(G')/2.
By Mantel's theorem, e(G')≤⌊ n^2/4⌋, implying t(G)≤1/2⌊ n^2/4⌋, i.e. t(G)≤⌊ n^2/8⌋.
§ UPPER BOUNDS
In order to prove that ex(n,K_3,P̂_4)≤ K for some fixed n and K, we need to show that any n-vertex graph containing at least K+1 triangles contains a copy of P̂_4.
§.§ The cases 5≤ n≤ 8: brute force
While a case-by-case analysis is tractable by hand for n=5, for example, we quickly run into several possible configurations while trying to prove ex(8,K_3,P̂_4)=8.
This is where we turn to a computer-generated check.
For example, to prove that every 8-vertex graph with at least 9 triangles contains a P_4, we can assume that 012 and 013 are two triangles in some 8-vertex graph G containing 9 triangles (if no two of its triangles shared an edge, G would be K_4-free and the bound t(G)≤⌊ 8^2/8⌋=8 would already force a P_4).
Then triangles that have an edge from the set {02, 03, 12, 13} and have a node from {4,5,6,7} are excluded from G, since any of these patterns forms a P_4.
This excludes 16 triangles.
Hence the number of plausible triangles that G may contain other than 012 and 013 is C(8,3)-18 = 56-18 = 38.
We generate C(38,7) ≈ 1.26× 10^7 possible graphs, keep only the ones that have exactly 9 triangles, and check for P_4's in each of them.
Our program is available at the Github repository in <cit.>.
We run the notebook triangle_count.ipynb.
Our computation shows that ex(n,K_3,P_4)=4,5,8,8 for n=5,6,7,8, respectively.
The total computation time required for (n,t)=(8,9) on 7 threads of a laptop processor running at 1.80GHz was around 18 minutes.
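For concreteness, the following is a minimal, unoptimized Python sketch of this check (our illustration, not the code of the actual notebook in <cit.>); it assumes, as the extremal constructions suggest, that P_4-freeness means containing no induced path on four vertices, and it simply counts how many of the generated 9-triangle graphs fail to contain such a path.

from itertools import combinations

V = range(8)
def tri_edges(t):
    return [frozenset(p) for p in combinations(t, 2)]

fixed = [(0, 1, 2), (0, 1, 3)]
banned = [frozenset(e) for e in [(0, 2), (0, 3), (1, 2), (1, 3)]]
all_tris = list(combinations(V, 3))
# exclude the 16 triangles that use an edge from {02,03,12,13} together with a vertex of {4,...,7}
candidates = [t for t in all_tris
              if t not in fixed
              and not (any(e.issubset(t) for e in banned) and set(t) & {4, 5, 6, 7})]
assert len(candidates) == 38

def count_triangles(edges):
    return sum(all(frozenset(p) in edges for p in combinations(t, 2)) for t in all_tris)

def has_induced_P4(edges):
    # an induced path on 4 vertices has exactly 3 edges and degree sequence (1, 1, 2, 2)
    for quad in combinations(V, 4):
        sub = [frozenset(p) for p in combinations(quad, 2) if frozenset(p) in edges]
        if len(sub) == 3 and sorted(sum(v in e for e in sub) for v in quad) == [1, 1, 2, 2]:
            return True
    return False

bad = 0
for extra in combinations(candidates, 7):
    edges = {e for t in list(extra) + fixed for e in tri_edges(t)}
    if count_triangles(edges) == 9 and not has_induced_P4(edges):
        bad += 1
print(bad)  # 0 confirms that every generated 9-triangle graph contains a P_4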
§.§ The cases 9≤ n ≤ 11: identifying K_4
The main idea behind these cases is to follow the steps of the proof in <cit.>, Section 5.2.
Suppose (n,t)∈{(9,11), (10,13), (11, 16)}, and G is an (edge-minimal) n-vertex graph with t triangles.
Then G must contain a P_4.
For the sake of contradiction, assume that G was P_4-free.
If G was also K_4-free, then by Lemma <ref>, t(G)≤⌊ n^2/8⌋ = 10, 12, 15 for n=9,10,11, contradicting our initial assumption on t(G).
Therefore G must contain a K_4.
Let this K_4 be induced by vertex subset S={u_0,u_1,u_2,u_3}⊂ V(G).
Define X_i := N(u_i) - S for 0≤ i ≤ 3.
As G[S] is a triangle block, X_i∩ X_j=∅ for every i≠ j.
Further, ∑_i=0^3|X_i|≤ n-4.
Without loss of generality assume |X_0|≤⋯≤ |X_3|.
Now we consider each case separately.
* Case 1. (n,t)=(9,11):
In this case, ∑_i=0^3|X_i|≤ 5.
If |X_1|>0, by edge-minimality we would have |X_1|≥ 2, implying |X_1|+|X_2|+|X_3|≥ 6, a contradiction.
Thus, |X_0| = |X_1| = 0, and by a similar argument, |X_2|≤ 2.
This means the vertex u_2 lies in at most one triangle outside of G[S].
Let G' be obtained by deleting {u_0,u_1,u_2} from G. Clearly n(G')=6 and t(G')≥ t(G)-5 = 6.
As ex(6,K_3,P_4)=5 by the discussion in Section <ref>, G' has a P_4, a contradiction.
* Case 2. (n,t)=(10,13):
Here, ∑_i=0^3 |X_i|≤ 6.
By a similar analysis as before, we can infer that |X_0|=0 and |X_1|≤ 2.
If |X_1|=0, we could consider G'=G-{u_0,u_1}, which would have n(G')=8 and t(G')=13-4=9, which would lead us to a P_4 since ex(8,K_3,P_4)=8 by the calculation in Section <ref>.
Thus, we have |X_0|=0, |X_1|=2, and hence |X_2|=|X_3|=2.
Now, if we consider G”=G-S, we have n(G”)=6 and t(G”) = 13-4-3=6, again implying that G” has a P_4.
* Case 3. (n,t)=(11,16):
For this pair of (n,t), we have ∑_i=0^3|X_i|≤ 7, implying |X_0|=0 again.
Since u_0 lies in exactly three triangles of G[S], G'=G-{u_0} has n(G')=10 and t(G')=13, leading us to the previous case.
In each of the three cases, we obtain a contradiction, finishing the proof for these cases.
§.§ The case n≥ 12: identifying K_4
Now that we have proved ex(n,K_3,P_4) = ⌊ n^2/8⌋ for 8≤ n ≤ 11, we are ready to handle the general case using induction on n.
Our proof follows the idea of <cit.> with a more careful analysis to obtain the desired bound.
Let us assume that ex(k,K_3,P_4)=⌊ k^2/8⌋ for all 8≤ k ≤ n-1.
We note that a simple case analysis leads to
⌊ n^2/8⌋ - ⌊ (n-1)^2/8⌋ ≥⌊ n/4⌋ and
⌊ n^2/8⌋ - ⌊ (n-4)^2/8⌋ = n-2.
For the sake of contradiction, suppose G is an n-vertex P_4-free graph with t(G)≥⌊ n^2/8⌋ +1.
For a subset U⊂ V(G), let us denote by t(U) the number of triangles containing at least one vertex from U.
By (<ref>), we may assume that
t(U) ≥⌊ n/4⌋ +1 whenever |U|=1, and
t(U) ≥ n-1 whenever |U|=4;
otherwise, G-U would be a P_4-free graph on n-|U| vertices with at least ⌊ (n-|U|)^2/8⌋+1 triangles, contradicting the induction hypothesis.
Now, notice that by Lemma <ref>, G must contain a K_4.
As in the previous section, let S={u_0,u_1,u_2,u_3} induce this K_4, and denote X_i=N(u_i)-S for 0≤ i ≤ 3. Again, X_i∩ X_j=∅ for every i≠ j.
Observe that t(S)= ∑_i=0^3 e(X_i)+4, and so by (<ref>),
∑_i=0^3 e(X_i)≥ n-5.
On the other hand, since each X_i is P_4-free, we have ∑_i=0^3 e(X_i)≤∑_i=0^3 |X_i| ≤ n-4.
Hence,
∑_i=0^3 e(X_i)∈{n-5, n-4}
This implies that e(X_i)=|X_i| for at least three u_i∈ S.
Assume that e(X_i)=|X_i| for 0≤ i ≤ 2 and e(X_3)∈{|X_3|-1, |X_3|}.
This also means that G[X_i] are vertex-disjoint unions of triangles for 0≤ i≤ 2,
and X_3 is a union of triangles and a star on r vertices for some r≥ 0.
Further, (<ref>) gives us the bound
|X_i|≥⌊ n/4⌋ -2 .
We now continue with a more detailed analysis of the neighborhoods of vertices in G.
In what follows, let x_i denote the size of X_i.
For a subset A⊂ V(G), let 𝒯(A) denote the set of triangles in G[A].
We now consider two cases.
Case 1: ∑_i=0^3x_i=n-5.
In this case, note that since ∑_i=0^3 e(X_i) = n-5, we have e(X_3)=x_3.
Thus, the subgraphs G[X_i] are all disjoint unions of triangles, and there is exactly one vertex y in V(G)-(⋃_i X_i ∪ S),
and thus 3 | n-5, implying n≡ 2 (mod 3).
Moreover, (<ref>) implies x_i≥ 3, and hence n≥ 17.
Now, observe that for G'=G-{y},
∑_v∈ V(G) deg v = ∑_i=0^3∑_vwz∈𝒯(X_i)(deg_G' v+deg_G' w+deg_G' z) + ∑_v∈ S deg v + 2 deg y.
We proceed by upper bounding each term of (<ref>) separately.
* Let vwz∈𝒯(X_0).
For any j≠ 0, as N(v)-X_0-S-{y} cannot contain two adjacent vertices from the same X_j, v can only be adjacent to at most one vertex from each triangle of X_j.
Finally, v is adjacent to exactly three nodes from X_0∪ S, leading to
deg_G' v + deg_G' w + deg_G' z ≤ 3(x_1/3 + x_2/3 + x_3/3) + 9 = (x_1+x_2+x_3)+9.
By repeating the same argument over all x_i/3 triangles from 𝒯(X_i), we have
∑_vwz∈𝒯(X_i)(deg_G' v+deg_G' w+deg_G' z) ≤ (x_i/3)∑_j≠ i x_j + 3x_i.
* As y is not adjacent to any vertex of S, we have
∑_v∈ S deg v = (x_0+x_1+x_2+x_3) + 12 = n+7.
* For each i, N(y)∩ X_i has at most x_i/3 vertices, as otherwise by the pigeonhole principle we would have v,w∈ N(y)∩ X_i that are adjacent, leading to a triangle yvw sharing an edge with the K_4 containing u_i, v and w.
Further, y does not have a neighbor in S.
Thus,
deg y ≤ (x_0+x_1+x_2+x_3)/3 = (n-5)/3.
Putting these inequalities together and noting that 3t(G)≤∑_v∈ V(G) deg v, (<ref>) gives us
3⌊ n^2/8⌋ + 3 ≤ 3t(G) ≤ (2/3)∑_i<j x_ix_j + 3(x_0+x_1+x_2+x_3)+(n+7) + (2/3)(n-5)
= (1/3)(n-5)^2 - (1/3)∑_i=0^3 x_i^2 + (14n-34)/3.
On the other hand, we note that by the Cauchy-Schwarz inequality, ∑_i=0^3 x_i^2 ≥ (n-5)^2/4.
Therefore,
3⌊ n^2/8⌋ + 3 ≤ (n-5)^2/4+(14n-34)/3 = (3 n^2 + 26 n - 61)/12,
a contradiction to n≥ 17. This completes the proof in this case.
▪
Case 2: ∑_i=0^3x_i=n-4.
In this case, recall that G[X_i] are disjoint unions of triangles for 0≤ i≤ 2, and X_3 is a union of triangles and a star on r≥ 0 vertices.
Let us denote this star as S^∗ = {c,ℓ_1,…, ℓ_r-1} where c is the center and ℓ_j the leaves.
We now continue with the exact same analysis of the neighborhoods of vertices in G as in the previous case.
For a subset A⊂ V(G), let 𝒯(A) denote the set of triangles in G[A].
First, we note that
∑_v∈ V(G) deg v = ∑_i=0^2∑_vwz∈𝒯(X_i)(deg v+ deg w+ deg z) + ∑_v∈ X_3 deg v + ∑_v∈ S deg v.
Let us now upper bound each term in (<ref>) separately.
* Let vwz∈𝒯(X_0).
Clearly N(v)-X_0-S cannot contain two adjacent vertices from the same X_j, j≠ 0.
Therefore, v can only be adjacent with at most one vertex from each triangle of X_j for j≠ 0.
Moreover, N(v)∩ S^∗, N(w)∩ S^∗ and N(z)∩ S^∗ are disjoint, implying
deg v + deg w + deg z ≤ 3(x_1/3 + x_2/3 + (x_3-r)/3) + r + 9 = (x_1+x_2+x_3) + 9.
Similar inequalities hold for each of the x_i/3 triangles in 𝒯(X_i), 0≤ i≤ 2.
In particular, we have
∑_vwz∈𝒯(X_i)(deg v+ deg w+ deg z) ≤ (x_i/3)∑_j≠ i x_j + 3x_i.
* Let v∈ X_3.
Then, N(v)-X_3-S can have at most one vertex from each triangle of X_i.
Thus,
deg v ≤ (x_0+x_1+x_2)/3 + 3 if v∉ S^∗,
deg v ≤ (x_0+x_1+x_2)/3 + r if v = c, and
deg v ≤ (x_0+x_1+x_2)/3 + 2 if v ∈ S^∗-{c}.
Thus, if r≥ 1,
∑_v∈ X_3 deg v ≤ x_3(x_0+x_1+x_2)/3 + 3(x_3-r) + r + 2(r-1) = x_3(x_0+x_1+x_2)/3 + 3x_3 - 2,
and if r=0,
∑_v∈ X_3 deg v ≤ x_3(x_0+x_1+x_2)/3 + 3x_3.
We use the latter inequality as it holds for any value of r.
* Finally, we have
∑_v∈ S deg v = (x_0+x_1+x_2+x_3)+12 = n+8.
Therefore, (<ref>) along with 3t(G)≤∑_v∈ V(G) deg v gives us
3t(G) ≤ (2/3)∑_i<j x_ix_j + 3(x_0+x_1+x_2+x_3) + n + 8 = (1/3)(n-4)^2 - (1/3)∑_i=0^3 x_i^2 + 4n - 4.
Observe that by Cauchy-Schwarz, ∑_i=0^3 x_i^2≥ (n-4)^2/4.
Hence, (<ref>) implies,
3t(G)≤ (n-4)^2/4 + 4n-4, which implies t(G)≤ n(n+8)/12.
By t(G)≥⌊ n^2/8⌋ + 1, this implies n≤ 14.
Note that as n-4 = ∑_i=0^3x_i ≥ 9+x_3, we would have x_3≤ 1.
By (<ref>), this would mean x_3 = 1.
However, this contradicts edge-minimality of G, as the edge between u_3 and the only vertex of X_3 would not be incident to any triangle in G, again leading to a contradiction in this case.
▪
This completes the proof of the induction step, implying ex(n,K_3,P_4)≤⌊ n^2/8⌋ for all n≥ 12.
§ CONCLUDING REMARKS: UNIQUENESS
For n≥ 8, one may ask whether the lower bound construction of K_⌊ n/2⌋, ⌈ n/2⌉ with a matching in any of the even parts is unique or not.
In particular, our proof of Theorem <ref> implies that if the extremal construction contained a K_4, then ⌊ n^2/8⌋≤1/12n(n+8).
This implies n≤ 16, and indeed, setting x_i=3 for every i leads us to an equality case in Case 2.
Our proof therefore gives us the following construction from Figure <ref> for n=16 consisting entirely of K_4-blocks: consider a K_4 given by S={u_0,u_1,u_2,u_3}.
For 0≤ i≤ 3, let N(u_i)-S consist of the triangles b_io_ir_i, where the b_i's are colored blue, o_i's olive and r_i's red.
Suppose the blue, red and olive vertices each form a K_4 (the diagonal edges are omitted in Figure <ref> for clarity).
Clearly each vertex neighborhood has 6 edges, leading to a total of 16· 6/3=32 triangles, and hence this graph is a valid extremal configuration for n=16.
It seems many extremal constructions are possible for smaller values of n whenever divisibility and structural constraints are satisfied.
For example, when n=8, we enumerate in our repository <cit.> all extremal constructions with 8 triangles programmatically, and these constructions are comprised of either two edge-disjoint K_4's, or only books.
However, our proof of Theorem <ref> provides uniqueness of the extremal configuration for n≥ 17.
§ ACKNOWLEDGMENTS
This work was supported by the Center of Innovations for Sustainable Quantum AI (JST Grant Number JPMJPF2221).
|
http://arxiv.org/abs/2307.04429v1 | 20230710090926 | Designing Novel Cognitive Diagnosis Models via Evolutionary Multi-Objective Neural Architecture Search | [
"Shangshang Yang",
"Haiping Ma",
"Cheng Zhen",
"Ye Tian",
"Limiao Zhang",
"Yaochu Jin",
"Xingyi Zhang"
] | cs.NE | [
"cs.NE",
"cs.AI",
"cs.LG"
] |
Designing Novel Cognitive Diagnosis Models via Evolutionary Multi-Objective Neural Architecture Search
Manuscript received –. This work was supported
in part by the National Key Research and Development Project under Grant 2018AAA0100105 and 2018AAA0100100,
in part by the National Natural Science Foundation of China under Grant 61822301, 61876123, 61906001, 62136008, U21A20512, and U1804262,
in part by the Anhui Provincial Natural Science Foundation under Grant 1808085J06 and 1908085QF271,
in part by the Collaborative Innovation Program of Universities in Anhui Province under Grant GXXT-2020-013,
and in part by the State Key Laboratory of Synthetical Automation for Process Industries under Grant PAL-N201805
(Corresponding authors: Limiao Zhang and Xingyi Zhang).
Shangshang Yang,
Haiping Ma,
Cheng Zhen,
Ye Tian,
Limiao Zhang,
Yaochu Jin, Fellow, IEEE,
and
Xingyi Zhang, Senior Member, IEEE
S. Yang and X. Zhang is with the Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education,
School of Artificial Intelligence, Anhui University, Hefei 230039, China (email: [email protected]; [email protected]).
C. Zhen, Y. Tian, and H. Ma are with the Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Institutes of Physical Science and Information Technology, Anhui University, Hefei 230601, China (email: [email protected];[email protected];[email protected]).
L. Zhang is with Information Materials and Intelligent Sensing Laboratory of Anhui Province, Anhui University, Hefei, 230601, Anhui, China (email: [email protected]).
Y. Jin is with the Faculty of Technology, Bielefeld Unversity, Bielefeld 33619, Germany (email:[email protected]).
August 12, 2023
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Cognitive diagnosis plays a vital role in modern intelligent education platforms to reveal students' proficiency in knowledge concepts for subsequent adaptive tasks. However, due to the requirement of high model interpretability, existing manually designed cognitive diagnosis models hold too simple architectures to meet the demand of current intelligent education systems, where the bias of human design also limits the emergence of effective cognitive diagnosis models. In this paper, we propose to automatically design novel cognitive diagnosis models by evolutionary multi-objective neural architecture search (NAS). Specifically, we observe existing models can be represented by a general model handling three given types of inputs and thus first design an expressive search space for the NAS task in cognitive diagnosis. Then, we propose multi-objective genetic programming (MOGP) to explore the NAS task's search space by maximizing model performance and interpretability. In the MOGP design, each architecture is transformed into a tree architecture and encoded by a tree for easy optimization, and a tailored genetic operation based on four sub-genetic operations is devised to generate offspring effectively. Besides, an initialization strategy is also suggested to accelerate the convergence by evolving half of the population from existing models' variants. Experiments on two real-world datasets demonstrate that the cognitive diagnosis models searched by the proposed approach exhibit significantly better performance than existing models and also hold as good interpretability as human-designed models.
Cognitive diagnosis models, neural architecture search, evolutionary algorithm, multi-objective optimization, genetic programming, model interpretability.
§ INTRODUCTION
Cognitive diagnosis (CD) in the field of intelligent education <cit.> aims to
reveal students' proficiency in specific knowledge concepts
according to their historical response records of answering exercises
and the exercise-concept relational matrix (termed Q-matrix) <cit.>.
Fig. <ref> gives an illustrative example of CD,
where students {A,B} have practiced a series of exercises (i.e., {e_1,e_3,e_4} and {e_1,e_2,e_3}),
and got corresponding responses.
Based on the records and Q-matrix,
the students' knowledge proficiency in each concept can be obtained through CD.
By doing so, there is a wide range of intelligent education tasks,
such as personalized exercise recommendation <cit.> and targeted training <cit.>,
which can benefit from the students' diagnosis results.
With the rising demand for cognitive diagnosis models (CDMs) in online education platforms,
many researchers developed various CD approaches, which are generally grouped into two types.
The first genre of approaches is mainly proposed by researchers in educational psychology.
Their designed CDMs usually rely on simple handcrafted functions to model student-exercise interactions
and portray the student learning ability in a one-dimensional vector or other manners.
The representatives include Item Response Theory (IRT) <cit.>,
Deterministic Input, Noisy ’And’ gate (DINA) <cit.>, Multidimensional IRT (MIRT) <cit.>, and Matrix Factorization (MF) <cit.>.
Item Response Theory (IRT) <cit.> and Deterministic Inputs, Noisy-And gate (DINA) <cit.> are two pioneering approaches,
where IRT and DINA utilize a unidimensional continuous vector and a binary vector respectively to denote the student mastery
for predicting the probabilities of a student correctly answering exercises.
In addition, there are also some CD approaches improving the above two CDMs or using other techniques,
such as MIRT <cit.> which extends IRT's unidimensional student and exercise latent traits into multidimensional space, and MF <cit.> based on the matrix factorization technique.
The second genre of approaches <cit.>
is based on neural networks (NNs), where the student learning ability is portrayed by an inner latent vector.
The representatives contain Neural
Cognitive Diagnosis (NCD) <cit.>,
Prerequisite Attention model for Knowledge Proficiency diagnosis (PAKP) <cit.>, and Relation map driven Cognitive Diagnosis (RCD) <cit.>.
As the critical components of CDMs,
diagnostic functions are mainly responsible for predicting student exercising scores by integrating three types of input vectors (i.e., student/exercise/concept-related input vector) in a highly interpretable manner.
To pursue high model interpretability,
existing CDMs' diagnostic functions are desired to hold simple architectures.
For example, IRT <cit.> and MF <cit.> utilize the simple logistic function and inner-product respectively as their diagnostic functions.
However, there exist two kinds of problems for these simple handcrafted diagnostic functions.
Firstly, simple diagnostic functions' architectures disable CDMs from modeling complex relationships between students and exercises well <cit.>, failing to meet the demands of modern education systems containing a large quantity of student exercising data.
Secondly, the design of existing diagnostic functions heavily relies on researchers' knowledge of both educational psychology and NNs <cit.>, which is labor-intensive and needs a lot of trial-and-error. And the human design bias may limit the emergence of novel diagnostic functions to some extent.
Furthermore, recent CD approaches <cit.> put less focus on the architecture design of diagnostic functions but on enhancing the input vectors for high performance,
which hinders the development of CDMs to some extent.
Although NCD <cit.> argues to find an automatic way to learn the complex interactions between students and exercises,
its simple diagnostic function architecture is still manually designed by summarizing architectures of previous CDMs.
Therefore, it is necessary to design more effective novel diagnostic function architectures to meet
the demands of current intelligent education systems.
For the above reasons, this paper aims to develop novel CDMs by automatically designing effective diagnostic function architectures.
Since Zoph and Le <cit.> proposed to search neural architectures for image tasks,
neural architecture search (NAS) <cit.> has been widely applied to many research fields and achieved significant success <cit.>.
Among various search strategies of NAS, including reinforcement learning <cit.> and gradient optimization <cit.>,
evolutionary algorithms (EAs), especially multi-objective evolutionary algorithms (MOEAs), have shown a more powerful ability to search <cit.>.
Moreover, compared to other NAS approaches, MOEA-based NAS approaches <cit.> are superior
in getting out of local optima and presenting trade-offs among multiple objectives
, where many architectures holding different attributes can be found in a single run.
The representative approaches include Neural Architecture Search using Multi-Objective
Genetic Algorithm (NSGA-Net) <cit.>, and Lamarckian Evolutionary algorithm for Multi-Objective Neural Architecture DEsign (LEMONADE) <cit.>.
However, existing NAS approaches cannot be applied to CD due to the difference in search space between CD and other tasks; moreover, a different search space generally needs a different MOEA <cit.>,
whose representation and genetic operations are task-tailored <cit.>,
which further hinders them from being applied to CD.
Therefore, this paper proposes an evolutionary multi-objective NAS to design novel CDMs (termed EMO-NAS-CD),
where an expressive search space is first devised and
multi-objective genetic programming (MOGP) is employed to explore the search space to develop high-performance CDMs with good interpretability.
Specifically, our main contributions are as follows:
* This paper is the first NAS work to design CDMs, which explores the search space design and search strategy design of NAS.
Regarding the search space, we first design an expressive search space for the NAS task of CD (NAS-CD) by summarizing existing diagnostic function architectures.
Within this space, each candidate architecture is denoted by a general model, which takes at most three given types of input vectors as input nodes.
Then, regarding the search strategy, we propose MOGP to explore the search space by solving a bi-objective problem of NAS-CD, which maximizes the objectives of model performance and interpretability simultaneously.
The interpretability of an architecture is intuitively characterized by its depth, its breadth, and the number of computation nodes it contains.
* In the MOGP design,
we first transform architectures under the search space into tree architectures
and then encode them by trees for easy optimization, which can avoid the optimization difficulties of vector-based encoding (e.g., the problem of variable-length encoding).
Based on four sub-genetic operations, a tailored genetic operation is devised for effective offspring generation in the MOGP.
Besides,
to accelerate the MOGP's convergence,
we further design a prior knowledge-based initialization strategy to evolve partial individuals of the population from existing CDMs' variants.
* To validate the effectiveness of the proposed EMO-NAS-CD, we compare it with some representative CDMs on two popular education datasets.
Experimental results show that EMO-NAS-CD can find a set of architectures to build CDMs,
which present trade-offs between interpretability and performance.
The found architectures hold both significantly better prediction performance and good interpretability.
Moreover, we verify the effectiveness of the suggested genetic operation as well as the initialization strategy,
and we also demonstrate the superiority of the devised model interpretability objective over the common model complexity.
The rest of this paper is as follows.
Section II reviews existing CD approaches and presents the motivation for this work.
Section III introduces the proposed search space.
Section IV presents the details of the proposed approach.
The experiments are shown in Section V, and we give conclusions and future work in Section VI.
§ PRELIMINARIES AND RELATED WORK
§.§ Preliminaries of Cognitive Diagnosis Task
Formally, there are N students, M exercises, and K knowledge concepts in an intelligent education platform for the cognitive diagnosis task, which can be represented by S = {s_1,s_2,⋯,s_N}, E= {e_1,e_2,⋯,e_M}, and C={c_1,c_2,⋯,c_K}, respectively.
Besides, there is commonly an exercise-concept relation matrix Q= (Q_jk∈{0,1})^M× K, termed the Q-matrix, to depict the relationship between exercises and knowledge concepts, where Q_jk=1 means the exercise e_j contains the knowledge concept c_k and Q_jk=0 otherwise.
R_log is used to denote the students' exercising response logs and it can be represented by a set of triplets
(s_i,e_j,r_ij), where s_i ∈ S, e_j ∈ E, and
r_ij∈{0,1} refers to the response score of student s_i on exercise e_j. Here r_ij=1 indicates the answer of student s_i on e_j is correct and r_ij=0 otherwise.
Based on the students' response logs R_log and Q-matrix,
the cognitive diagnosis task mines the students' proficiency in knowledge concepts
by building a model ℱ to predict the students' exercising score.
To predict the score of student s_i on exercise e_j,
the model ℱ can take three types of inputs, including the student-related feature vector 𝐡_S∈ R^1× D, the exercise-related feature vector 𝐡_E∈ R^1× D,
and the knowledge concept-related feature vector 𝐡_C∈ R^1× K,
which can be obtained by
𝐡_S = 𝐱_i^S × W_S, W_S∈ R^N× D,
𝐡_E = 𝐱_j^E × W_E, W_E∈ R^M× D,
𝐡_C = 𝐱_j^E × Q = (Q_j1, Q_j2,⋯, Q_jK),
where D is the embedding dimension (usually equal to K for consistency),
𝐱_i^S ∈{0,1}^1× N is the one-hot vector for student s_i,
𝐱_j^E ∈{0,1}^1× M is the one-hot vector for exercise e_j,
and W_S and W_E are trainable matrices in the embedding layers.
Then, the model ℱ outputs the predicted response r̂_ij as
r̂_ij = ℱ(𝐡_S,𝐡_E,𝐡_C),
where ℱ(·) is the diagnostic function to combine three types of inputs in different manners.
Generally speaking, after training the model ℱ based on students' response logs,
each element of 𝐡_S represents the student's proficiency in the corresponding knowledge concept.
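For illustration, the embedding lookups in (<ref>) can be written as the following PyTorch sketch (class and variable names are ours, not from any particular CDM; q_matrix is assumed to be an M× K float tensor).

import torch
import torch.nn as nn

class CDInputs(nn.Module):
    """Produces the three diagnostic-function inputs h_S, h_E, and h_C."""
    def __init__(self, n_students, n_exercises, n_concepts, q_matrix):
        super().__init__()
        d = n_concepts                                     # D is set to K for consistency
        self.student_emb = nn.Embedding(n_students, d)     # rows of W_S
        self.exercise_emb = nn.Embedding(n_exercises, d)   # rows of W_E
        self.register_buffer("q_matrix", q_matrix)         # binary Q-matrix of shape (M, K)

    def forward(self, student_ids, exercise_ids):
        h_s = self.student_emb(student_ids)                # h_S = x_i^S W_S
        h_e = self.exercise_emb(exercise_ids)              # h_E = x_j^E W_E
        h_c = self.q_matrix[exercise_ids]                  # h_C = the Q-matrix row of exercise e_j
        return h_s, h_e, h_c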
§.§ Related Work on Cognitive Diagnosis
In the past decades, a series of CDMs have been developed based on researchers' experiences in educational psychology and deep neural networks (DNNs), mainly from two perspectives.
§.§.§ Incorporating Richer Input Information
As introduced above, there are three types of inputs that can be used for the diagnostic function in a CDM,
including the student-related vector 𝐡_S, the exercise-related vector 𝐡_E, and the knowledge concept-related vector 𝐡_C.
Therefore, the first type of approaches aims to incorporate richer context information or other information into these input vectors to boost the diagnostic function inputs for improving the prediction performance.
To achieve this, Zhou et al. <cit.> proposed Educational context-aware Cognitive Diagnosis (ECD) <cit.> to model educational context-aware features in student learning.
Specifically, the student's educational contexts (e.g., school information, student personal interests, parents' education) are incorporated into the student-related vector 𝐡_S by a hierarchical attention NN.
Then, the integrated student-related vector 𝐡_S will be processed by a common diagnostic function.
The incorporated educational context information can indeed improve the diagnosis performance of different diagnostic functions, including IRT, MIRT, and NCD.
In <cit.>, Gao et al. proposed RCD to incorporate the model inputs with the prior relations between knowledge concepts.
To be specific, students, exercises, and concepts are first built as a hierarchical graph.
This graph contains a student-exercise interaction map, a concept-exercise correlation map,
and a concept dependency map that is extracted from the prior relations between knowledge concepts.
Then, a multi-level attention NN is used to achieve node aggregation of the hierarchical graph,
and the aggregated node features are used as three input vectors, 𝐡_S, 𝐡_E, and 𝐡_C, to improve the model performance.
Similarly, Wang et al. <cit.> proposed CDGK (i.e., Cognitive DiaGnosis by Knowledge concept aggregation) to incorporate the relations between knowledge concepts into input vectors.
Different from RCD, CDGK only builds the graph structure of knowledge concepts according to the dependency among knowledge concepts.
Only the leaf nodes in the constructed graph will be used to aggregate the target node's features.
Finally, the aggregated knowledge concept features will be taken as the concept-related vector 𝐡_C used for subsequent diagnosis process.
§.§.§ Designing Diagnostic Functions
The above CD approaches only focus on incorporating extra information into input vectors,
and directly employ existing diagnostic functions to handle the enhanced input vectors for diagnosis.
In contrast, the second type of approaches focuses on designing powerful diagnostic functions,
which are responsible for combining input vectors in highly interpretable manners.
As the most typical CDM, the diagnostic function of DINA <cit.> is to
first obtain two binary student and concept latent features (θ, β∈{0,1}^1× K)
and two exercise latent features (guessing g∈ R^1 and slipping sl∈ R^1) from input vectors.
Then, the score of student s_i on exercise e_j can be represented as r̂_ij = g^(1-nt)(1-sl)^(nt),
where nt = ∏_kθ_k^β_k.
Despite the high interpretability of its diagnostic function,
DINA suffers from poor prediction performance in current CD tasks due to its poor scalability on large-scale student exercising data.
As another typical CDM, the diagnostic function of IRT <cit.> first takes
student-related and exercise-related vectors 𝐡_S and 𝐡_E,
and then transforms them into one student latent feature θ∈ R^1 and
two exercise latent features (β∈ R^1 and a ∈ R^1), respectively.
Next, a simple logistic function is applied to the linear transformation of θ, β, and a,
e.g., a simple version is Sigmoid(a(θ -β)) as stated in <cit.>.
Finally, the diagnostic function outputs the predicted scores of the student on exercises.
Similarly, MIRT <cit.> applies the same logistic function as IRT to
the linear transformation of the student latent feature θ∈ R^1× K,
the exercise latent feature β∈ R^1, and the knowledge concept latent feature α∈ R^1× K.
θ and α are equal to 𝐡_S and 𝐡_C,
and β is transformed from 𝐡_E.
Note that student and knowledge concept latent features in MIRT are multidimensional
for the demands of multidimensional data <cit.>.
Finally, its prediction process can be written as r̂_ij = Sigmoid(β+∑α⊙θ).
Compared to IRT, MIRT exhibits better performance yet without losing interpretability.
Differently, MF <cit.> is originally proposed for recommender systems but can be used for CD from the data mining perspective, where students and exercises in CD can correspond to users and items in recommender systems.
As demonstrated in <cit.>, the diagnostic function of MF can be modeled as
directly applying the inner-product to 𝐡_S and 𝐡_E.
Finally, its prediction process can be represented by r̂_ij = ∑𝐡_S⊙𝐡_E,
whose architecture is quite simple yet effective compared to other CDMs.
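The three classical diagnostic functions above admit very short implementations on top of such inputs; the sketch below is ours, and the scalar traits are obtained with assumed projection vectors (w_theta, w_beta, w_a), which the original models parameterize in their own ways.

import torch

def irt_diagnosis(h_s, h_e, w_theta, w_beta, w_a):
    theta, beta, a = h_s @ w_theta, h_e @ w_beta, h_e @ w_a   # scalar latent traits
    return torch.sigmoid(a * (theta - beta))                  # simple logistic form of IRT

def mirt_diagnosis(h_s, h_e, h_c, w_beta):
    beta = h_e @ w_beta                                       # scalar exercise feature
    return torch.sigmoid(beta + (h_c * h_s).sum(-1))          # Sigmoid(beta + sum(alpha ⊙ theta))

def mf_diagnosis(h_s, h_e):
    return (h_s * h_e).sum(-1)                                # inner product of h_S and h_E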
The most representative approach NCD <cit.> builds a new diagnostic function with one shallow layer and three fully connected (FC) layers.
Firstly, the student latent feature 𝐟_S∈ R^1× K and two exercise latent features
𝐟_diff∈ R^1× K and f_disc∈ R^1 are obtained by
𝐟_S = Sigmoid(𝐡_S),
𝐟_diff = Sigmoid(𝐡_E),
f_disc = Sigmoid(𝐡_E× W_disc), W_disc∈ R^D× 1.
Then, the shallow layer inspired by MIRT is used to linearly combine the above features and concept-related vector 𝐡_C as
𝐲 = 𝐡_C⊙(𝐟_S-𝐟_diff )× f_disc.
Afterward, the hidden feature 𝐲 is fed into three FC layers with the monotonicity property
to get the final prediction output.
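A compact sketch of this NCD-style diagnostic function is given below (our illustration; the hidden sizes are placeholders, and the monotonicity constraint on the FC weights, which is enforced during training in the original work, is omitted here for brevity).

import torch
import torch.nn as nn

class NCDStyleDiagnosis(nn.Module):
    def __init__(self, k, h1=512, h2=256):
        super().__init__()
        self.w_disc = nn.Linear(k, 1)                       # produces the scalar discrimination
        self.fc = nn.Sequential(nn.Linear(k, h1), nn.Sigmoid(),
                                nn.Linear(h1, h2), nn.Sigmoid(),
                                nn.Linear(h2, 1), nn.Sigmoid())

    def forward(self, h_s, h_e, h_c):
        f_s = torch.sigmoid(h_s)                            # student proficiency vector
        f_diff = torch.sigmoid(h_e)                         # exercise difficulty vector
        f_disc = torch.sigmoid(self.w_disc(h_e))            # exercise discrimination scalar
        y = h_c * (f_s - f_diff) * f_disc                   # shallow layer inspired by MIRT
        return self.fc(y).squeeze(-1)                       # three FC layers give the prediction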
Ma et al. proposed Knowledge-Sensed Cognitive Diagnosis (KSCD) to diagnose the student's proficiency.
Similar to NCD, KSCD's diagnostic function <cit.> consists of two FC layers followed by one shallow layer.
Two FC layers are used to combine the learned knowledge concept features with 𝐡_S and 𝐡_E, respectively, for obtaining the enhanced student and exercise features.
Then, the shallow layer is used to further combine the enhanced features and 𝐡_C to get the prediction.
§.§ Motivation of This Work
Despite the competitive performance of the above CDMs,
their diagnostic function architectures are too simple to model complex student-exercise interactions well <cit.>,
especially for large-scale student exercising data in current intelligent education systems.
Moreover, the design of existing diagnostic function architectures heavily relies on researcher expertise in the domains of both education and NNs,
which needs a lot of trial-and-error and thus is labor-intensive and costly <cit.>.
Besides, the human design bias may cause some potentially effective architectures beyond human knowledge to be missed.
Therefore, in contrast to current CD approaches focusing on improving model inputs,
this paper aims to develop more effective diagnostic function architectures for CD.
As an automated neural architecture design paradigm <cit.>,
NAS has been widely used for many research domains <cit.> and made significant progress <cit.>.
Existing NAS approaches have been used to search the best architectures of prevailing various DNNs, including convolution neural networks (CNNs) for computer vision (CV) tasks <cit.>,
recurrent neural networks (RNNs) for natural language processing (NLP) <cit.> and speech-related <cit.> tasks,
graph neural networks (GNNs) for the tasks having non-Euclidean data <cit.>,
and Transformers for CV <cit.>, NLP <cit.>, and speech-related <cit.> tasks.
However, due to the difference in search space among different domains,
these NAS approaches cannot be applied to search the optimal diagnostic function architecture.
Besides, the architectures of existing diagnostic functions can be seen as a general model,
which is used to handle three given types of inputs and output a scalar or a vector.
To this end, this paper proposes an evolutionary multi-objective optimization-based NAS approach for
automatically designing effective diagnostic function architectures to build novel CDMs.
Here, we first design an expressive search space by summarizing existing architectures,
and then we propose MOGP to explore the devised search space by optimizing the objectives of model performance and model interpretability simultaneously.
To the best of our knowledge, our work is the first to apply the NAS technique to the CD task.
§ THE PROPOSED SEARCH SPACE FOR CD
As stated above, the search space of existing NAS approaches <cit.> is task-specific, which cannot be applied to CD for searching diagnostic functions.
To design the search space for CD, we first observe and summarize existing CD approaches that design novel diagnostic functions.
Then we find that their diagnostic functions combine three types of input vectors in a linear or non-linear manner and finally output a scalar or a vector for the score prediction.
In other words, the diagnostic function architecture can be seen as a general model that has three input nodes, some internal nodes, and one output node.
Both its output node and its internal nodes are computation nodes to handle their inputs by their adopted operators.
We can find that the general model is similar to models under the search space of RNN in NAS <cit.>.
Fig. <ref>(a) plots the RNN cell found by Efficient Neural
Architecture Search (ENAS) <cit.>, where x[t] and h[t-1] are two input nodes, avg is the output node, and others are computation nodes.
By summarizing previous CD approaches, we collected some operators that can be used for computation nodes of the general model. These operators are divided into two types, i.e., unary and binary operators, which are used to receive one input and two inputs, respectively.
Here computation nodes (including the output node) in the general model can only handle at most two inputs, which is different from that of RNNs.
As a result, we take the general model as the proposed search space for CD, where 15 candidate operators in Table <ref> can be adopted by each computation node and the following are their descriptions:
* Unary operators.
Each unary operator takes one input x and returns a single output. FFN_D returns a vector; Sum, Mean, and FFN return scalar outputs; the other eight unary operators return outputs having the same shape as their inputs,
and they comprise five arithmetic operators (i.e., Neg, Abs, Inv, Square, and Sqrt) and three activation functions Tanh <cit.>, Sigmoid <cit.>, and Softplus <cit.>.
* Binary operators.
Three binary operators are considered in the general model: in addition to addition Add and multiplication Mul, we further consider a Concat operator to aggregate two input vectors into one vector.
Note that the output shapes of Add and Mul are determined by the maximal shape of two inputs.
For example, when one input is a scalar x and another input is a vector 𝐲∈ R^1× D,
the output shape is the same as that of 𝐲 (i.e., 1× D).
Here FFN and Concat are NN-based operators containing learnable parameters, which make the proposed search space more expressive than that of RNN.
Note that the general model may output a scalar y or a vector 𝐲, because the general model may adopt different operators while the output shapes of candidate operators are different.
To make the prediction process successful, the general model has to execute the following process to get the prediction score of student s_i on exercise e_j:
r̂_ij = y if y ∈ R^1, and
r̂_ij = FC_3(FC_2(FC_1(𝐲))) if 𝐲∈ R^1× D,
where FC_1(·), FC_2(·), and FC_3(·) are three FC layers with output dimensions H_1, H_2, and H_3, respectively.
The three FC layers are set to hold the monotonicity property according to the experiences in <cit.>.
By doing so, the probability of a correct response to the exercise is monotonically increasing at any dimension of the student’s knowledge proficiency, which enables FC layers to hold the same interpretability as the identity operation.
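As a sketch of this prediction rule, the output head below applies the identity to a scalar CD-cell output and three FC layers with non-negative weights (one common way to obtain the monotonicity property; hidden sizes are placeholders) to a vector output.

import torch
import torch.nn as nn

class PositiveLinear(nn.Linear):
    """Linear layer evaluated with non-negative weights, making the mapping monotonically increasing."""
    def forward(self, x):
        return nn.functional.linear(x, torch.clamp(self.weight, min=0.0), self.bias)

class OutputHead(nn.Module):
    def __init__(self, d, h1=64, h2=32):
        super().__init__()
        self.fc = nn.Sequential(PositiveLinear(d, h1), nn.Sigmoid(),
                                PositiveLinear(h1, h2), nn.Sigmoid(),
                                PositiveLinear(h2, 1), nn.Sigmoid())

    def forward(self, y):
        if y.dim() == 1 or y.shape[-1] == 1:    # the CD cell already outputs a scalar
            return y.reshape(-1)
        return self.fc(y).reshape(-1)           # vector output: FC_3(FC_2(FC_1(y)))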
For better understanding, Fig. <ref>(b) presents the general diagnostic function architecture under the proposed search space.
The general diagnostic function architecture (the general model) contains two parts.
The first part (termed CD cell) is similar to the RNN cell in NAS,
and the second part is a three-layer FC NN or an identity operation as shown in (<ref>).
The CD cell has several computation nodes (represented by ovals) and at most three input nodes (𝐡_S, 𝐡_E, and 𝐡_C, represented by triangles).
Different from the RNN cell, its output node is also a computation node,
and computation nodes are selected from unary operators (denoted by green nodes) or binary operators (denoted by orange nodes).
After obtaining the CD cell's output y,
either the identity operation or the
three-layer FC NN will be applied to get the final prediction r̂_ij.
As stated in <cit.>,
a promising search space should contain not only a large number of expressive neural architectures but also as many existing handcrafted architectures as possible.
To demonstrate the effectiveness of the proposed search space,
we take four representative CDMs, including IRT, MIRT, MF, and NCD, as illustrative examples.
Fig. <ref>(a) to Fig. <ref>(d) present their diagnostic function architectures under the proposed search space.
As can be seen, these typical CDMs can be easily represented under the proposed search space
by specific computation nodes and selected input nodes.
§ THE PROPOSED EMO-NAS-CD
This section will first present the proposed EMO-NAS-CD framework,
and then sequentially give individual representation, objectives, and a tailored genetic operation.
Finally, other details are introduced.
§.§ Overall Framework of EMO-NAS-CD
The main idea of the proposed EMO-NAS-CD is to search high-performance diagnostic function architectures holding high interpretability under the devised search space.
To this end, we aim to solve the NAS-CD task by optimizing a multi-objective optimization problem (MOP), which has two objectives: model performance and model interpretability.
To avoid the difficulties of using vector-based encoding for the devised search space (e.g., variable-length encoding problem),
we propose MOGP (a popular type of MOEAs <cit.>) to solve the MOP by transforming architectures into tree architectures and encoding them by trees,
because genetic programming (GP) <cit.> can solve tree-encoding-based problems well.
The devised MOGP follows the framework of NSGA-II <cit.>, and we devise an effective genetic operation and a population initialization strategy for the MOGP.
As can be seen, the proposed EMO-NAS-CD is a MOGP-based NAS approach for CD.
The overall framework of the proposed EMO-NAS-CD is summarized in Fig. <ref>, which is mainly composed of five steps.
Firstly, a population initialization strategy (in Section <ref>) is employed to randomly generate Pop individuals as population 𝐏.
Second, the standard binary tournament selection is employed to select individuals for getting the mating pool 𝐏'.
Next, a novel genetic operation is applied to 𝐏' to generate offspring individuals and form the offspring population 𝐐.
Fourth, train the architecture of each individual of 𝐐 for a certain number (Num_E) of epochs to compute its objective values.
Fifth, the environmental selection in NSGA-II <cit.> will be employed to
identify and maintain the individuals that hold better objective values from the union of population 𝐏 and offspring population 𝐐.
The second to the fifth step will be repeated until the maximal number of generations Gen is exceeded,
then the non-dominated individuals will finally be output.
For details, Algorithm <ref> also summarizes the main procedures of the proposed EMO-NAS-CD.
It is worth noting that, during the whole optimization process, there exist some individuals
whose neural architectures achieve terrible performance, close to random.
The reason behind this is that these architectures will encounter the gradient explosion problem when they continuously use some operations (e.g., Square, Tanh, and Softplus), which makes it difficult for general training paradigms to train them well.
To solve this problem, in the individual evaluation,
we adopt a simple early-stopping strategy <cit.> to stop the training of a neural architecture if its performance does not improve for several epochs.
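A minimal sketch of this evaluation step is shown below (the patience value and the train/validate callables are placeholders, not part of the proposed approach's interface).

def evaluate_architecture(model, train_one_epoch, validate_auc, num_epochs, patience=5):
    """Train an architecture for up to num_epochs and stop early when the validation AUC stalls."""
    best_auc, stale = 0.0, 0
    for _ in range(num_epochs):
        train_one_epoch(model)
        auc = validate_auc(model)
        if auc > best_auc:
            best_auc, stale = auc, 0
        else:
            stale += 1
            if stale >= patience:      # no improvement for `patience` consecutive epochs
                break
    return best_auc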
§.§ Individual Representation
To represent architectures in the proposed search space,
vector-based encoding is naturally our first choice because of its high popularity in many real-world optimization problems.
Suppose the vector-based encoding for i-th computation node of an architecture is n_i={link_1, link_2,Op},
where link_1 and link_2 denote which nodes' outputs node n_i receives and Op denotes which operator is adopted,
and then each architecture is represented by a set of nodes {n_i| 1≤ i ≤ num_c} (num_c denotes the number of computation nodes).
However, as shown in Fig. <ref>(b), the architectures in the proposed search space are variable. Thus it is difficult and unsuitable to represent architectures by vector-based encoding due to two challenges.
The first challenge is that num_c is not fixed but variable,
and thus the vector-based encoding of each architecture is variable-length,
which is difficult to solve by general MOEAs <cit.>.
Secondly, different from the output node of the RNN cell,
the output node in the proposed search space is a computation node and receives at most two inputs.
This poses a decision constraint in using vector-based encoding as individual representation and thus is also difficult to solve.
To avoid the above issues, we propose to utilize tree-based representation to encode architectures in our proposed search space,
and we propose MOGP to solve the MOP to search novel CDMs because of the superiority of GP in solving tree-encoding-based optimization problems <cit.>.
For this aim, we have to transform the architectures under the proposed search space into their
corresponding single-root tree architecture.
Fig. <ref> (e) gives the transform process by taking the general model as an illustrative example:
the input nodes are seen as the leaf nodes of the tree architecture, the output node is equal to the root node,
and the whole tree architecture can be seen as a single-root binary computation tree,
where the obtained tree architecture is similar to the Koza-like tree in GP <cit.>.
Based on the tree-based representation, the proposed MOGP can effectively search diagnostic function architectures but still needs the assistance of some tailored strategies, such as genetic operations and initialization strategies.
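The tree-based encoding can be realized with a very small data structure; the sketch below is our illustration (the depth convention, counting computation-node levels with leaves at level zero, is an assumption).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: str                           # an operator from Table <ref>, or "h_S"/"h_E"/"h_C" for a leaf
    left: Optional["Node"] = None     # unary operators use only `left`
    right: Optional["Node"] = None    # binary operators use both children

    def is_leaf(self):
        return self.left is None and self.right is None

    def children(self):
        return [c for c in (self.left, self.right) if c is not None]

    def depth(self):
        return 0 if self.is_leaf() else 1 + max(c.depth() for c in self.children())

    def breadth(self):                # number of leaf (input) nodes
        return 1 if self.is_leaf() else sum(c.breadth() for c in self.children())

    def num_computation_nodes(self):
        return 0 if self.is_leaf() else 1 + sum(c.num_computation_nodes() for c in self.children())

# an MF-like tree: root Sum applied to Mul(h_S, h_E)
mf_tree = Node("Sum", Node("Mul", Node("h_S"), Node("h_E")))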
§.§ Objectives
To make the searched architectures hold good performance and high interpretability,
the proposed MOGP is to optimize the following MOP:
max_𝒜 F(𝒜)={ f_1(𝒜) = AUC(𝒜,D_val),
f_2(𝒜) = model interpretability(𝒜) },
where 𝒜 denotes the candidate architecture to be optimized.
f_1(𝒜) represents the AUC (Area Under an ROC Curve) value <cit.> of 𝒜 (i.e., model performance) on validation dataset D_val.
f_2(𝒜) represents the model interpretability of architecture 𝒜,
since an architecture holding high model interpretability is preferred for CD.
To obtain reasonable f_2(𝒜),
an intuitive idea is to compute the model complexity by counting
how many computation nodes and leaf nodes are in 𝒜.
But it is not reasonable <cit.> to some extent
since much research <cit.> indicates that the model depth plays the most important role in the model interpretability.
Besides, some research on interpretable trees <cit.> further indicates that binary operators commonly provide better interpretability than unary operators. More importantly, recent CDMs prefer introducing extra inputs and more feature fusions in the models because it is easier to interpret the model performance <cit.>.
This implies that more inputs in CDMs represent higher interpretability,
further indicating that binary operators are more important than unary ones since binary operators will introduce more inputs.
However, the model complexity that counts the number of nodes in 𝒜 can not reflect the above fact.
As shown in Fig. <ref>, despite more nodes, we think 𝒜_2 holds better interpretability than 𝒜_1 due to a smaller depth.
Due to larger breadth (more inputs),
𝒜_4 and 𝒜_3 should be better than 𝒜_1 but worse than 𝒜_2.
𝒜_5 should be better than 𝒜_3 but worse than 𝒜_4 due to containing more nodes.
Even compared to 𝒜_3 having the same depth as 𝒜_1,
𝒜_1 is worse than 𝒜_3 because 𝒜_3 holds a larger breadth than 𝒜_1,
where the tree breadth is equal to the number of leaf nodes.
As can be seen from the comparisons among 𝒜_1, 𝒜_3, and 𝒜_4,
a tree holding a larger breadth means more binary operators contained in the tree,
and thus indicates the tree holds higher interpretability.
Besides, 𝒜_4 holds higher interpretability than 𝒜_5
since 𝒜_4 has fewer computation nodes.
With the above considerations, we characterize the model interpretability of architecture 𝒜 by its tree's depth, breadth, and computation node number.
The model interpretability of 𝒜 is first determined by the tree depth depth,
then by the tree breadth breadth (equal to the number of leaf nodes), and finally by the number of computation nodes num_c. As a consequence, the f_2(𝒜) can be computed by
f_2(𝒜) = (1-(depth-1)/10)+breadth/200+(0.001- num_c/20000),
where we make the depths of all architectures less than 10 in this paper to hold high model interpretability
and thus f_2(𝒜)∈ (0,1) has five decimal places.
The first decimal place is determined by depth,
The second and third decimal places are determined by breadth, and
the remaining decimal places are determined by num_c.
Note that the three parameters (10, 200, 20000) are empirically set and can be other choices,
which will not affect the proposed approach's result as long as two criteria are met.
Firstly, the decimal place(s) determined by depth, breadth, and num_c do not affect each other;
secondly, the decimal place(s) determined by depth is most important, followed by breadth, and finally num_c.
In Fig. <ref>, the depths of five architectures are 3, 2, 3, 3, and 3,
their breadths are 1, 2, 3, 4, and 4, and their computation node numbers are 3, 3, 4, 4, and 5.
According to (<ref>),
their second objective values are 0.80585, 0.91085, 0.81580, 0.82080, and 0.82075, respectively, which are consistent with our consideration.
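The second objective is a direct function of the three structural quantities; the helper below (our sketch) reproduces the values just listed.

def interpretability(depth, breadth, num_c):
    # first decimal place from depth, second/third from breadth, the rest from the node count
    return (1 - (depth - 1) / 10) + breadth / 200 + (0.001 - num_c / 20000)

# (depth, breadth, num_c) of the five example architectures discussed above
examples = [("A_1", 3, 1, 3), ("A_2", 2, 2, 3), ("A_3", 3, 3, 4), ("A_4", 3, 4, 4), ("A_5", 3, 4, 5)]
for name, d, b, c in examples:
    print(name, f"{interpretability(d, b, c):.5f}")
# prints 0.80585, 0.91085, 0.81580, 0.82080, 0.82075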
§.§ Genetic Operation
For effective offspring generation in the proposed MOGP, we propose an effective genetic operation based on four sub-genetic operations that modified and inspired from GP <cit.>.
The following introduces four modified sub-genetic operations: Exchange, Delete, Replace, and Insert.
* Exchange. Given two individuals, 𝐏'_1 and 𝐏'_2,
randomly select two sub-trees, t_1 and t_2,
from the trees corresponding to two individuals, respectively,
and then exchange two sub-trees to generate two new trees and form two offspring individuals, 𝐎_1 and 𝐎_2. (The root nodes will not be selected.)
* Delete. Given a parent individual 𝐏'_1,
randomly select a computation node from the tree corresponding to 𝐏'_1.
To delete this node, one of the left and right child trees of this node will be randomly connected to its parent node (if exists). The newly generated tree can form the offspring individual 𝐎_1.
* Replace. For the tree corresponding to individual 𝐏'_1,
randomly select a node to be replaced and replace the node's operator by a new operator randomly sampled from Table <ref>.
If the original operator is unary but the sampled operator is binary,
a new leaf node will be generated and connected to this node as its child tree,
where the new leaf node is randomly sampled from {𝐡_S, 𝐡_E, 𝐡_C}.
If the original operator is binary but the sampled operator is unary,
only one of the left and right child trees of this node will be kept.
As a result, offspring individual 𝐎_1 can be obtained based on the revised tree.
* Insert. A new operator is first randomly sampled from the predefined operators,
and a computation node is randomly selected from individual 𝐏'_1.
Then, the sampled operator is inserted between this node and its parent node (if exists) as a new computation node.
If the sampled operator is binary, an additional leaf node will be randomly sampled from {𝐡_S, 𝐡_E, 𝐡_C} and added to the new computation node as its child tree.
Finally, offspring individual 𝐎_1 will be generated.
Note that the root node will not be involved in Exchange since the Exchange operation will be meaningless or ineffective if root nodes are selected.
For a better understanding of the above operations,
Fig. <ref> gives some illustrative examples of generating offspring individuals.
The pink area denotes the selected computation nodes (or corresponding sub-trees) needed to be handled,
and the light purple area represents the executed changes.
As can be seen, Exchange will lead to big modifications between generated individuals and corresponding parent individuals, while other operations commonly lead to small modifications.
Therefore, the Exchange operation can be used for exploration, and others can be used for exploitation <cit.>.
Equipped with four sub-genetic operations, we empirically combine them to form our proposed genetic operation, whose basic procedures are summarized in Algorithm <ref>.
The four operations are called sub-genetic operations because they can be combined in different manners to constitute many other genetic operations.
In Algorithm <ref>, two individuals 𝐏'_i (i-th individual in 𝐏') and 𝐏'_i+1 are first selected from the mating pool 𝐏',
and the numbers of computation nodes in the two individuals are computed as num_c^i and
num_c^i+1 (Lines 3-4).
Second, randomly sample an integer rand from {1,2,3,4} if both num_c^i and num_c^i+1 are not smaller than 2, otherwise randomly sample rand from {3,4}.
Numbers 1, 2, 3, and 4 correspond to Exchange, Delete, Replace, and Insert, respectively (Lines 5-9).
This is because Exchange and Delete will be ineffective, even meaningless,
if there is only one computation node in the individual.
Third, the sub-genetic operation corresponding to rand will be applied to 𝐏'_i and 𝐏'_i+1 to generate offspring individuals 𝐎_1 and 𝐎_2 (Lines 10-14).
Next, the obtained 𝐎_1 and 𝐎_2 will be added to the offspring population Off (Line 15).
The first to the fourth step will be repeated until all offspring individuals are generated.
After that, an individual repair strategy in Section <ref> is used to
make offspring individuals feasible
since there exist some constraints for some operators in computation nodes of trees (Line 17).
For example, Sum, Mean, FFN, and Concat only receive vectors as their inputs.
Finally, the obtained offspring population Off is output.
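A simplified Python sketch of this offspring-generation procedure is given below. The individual interface (num_computation_nodes) and the four sub-operation callables are assumed placeholders for illustration and do not reflect the authors' actual implementation.

```python
import random

def genetic_operation(mating_pool, sub_ops, repair):
    """Generate an offspring population from a mating pool of tree-encoded individuals.

    sub_ops: dict with 'exchange' (two parents -> two offspring) and
             'delete', 'replace', 'insert' (one parent -> one offspring).
    repair:  callable that fixes infeasible nodes (e.g., Sum/Mean/FFN/Concat receiving scalars).
    """
    offspring = []
    for i in range(0, len(mating_pool) - 1, 2):
        p1, p2 = mating_pool[i], mating_pool[i + 1]
        n1, n2 = p1.num_computation_nodes(), p2.num_computation_nodes()
        # Exchange (1) and Delete (2) are only meaningful with at least two computation nodes.
        choices = [1, 2, 3, 4] if (n1 >= 2 and n2 >= 2) else [3, 4]
        op = random.choice(choices)
        if op == 1:
            o1, o2 = sub_ops['exchange'](p1, p2)
        elif op == 2:
            o1, o2 = sub_ops['delete'](p1), sub_ops['delete'](p2)
        elif op == 3:
            o1, o2 = sub_ops['replace'](p1), sub_ops['replace'](p2)
        else:
            o1, o2 = sub_ops['insert'](p1), sub_ops['insert'](p2)
        offspring.extend([o1, o2])
    for o in offspring:
        repair(o)          # in-place repair of infeasible nodes (cf. Section on individual repair)
    return offspring
```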
§.§ Related Details
In the mating pool selection of EMO-NAS-CD,
two individuals are first randomly selected each time,
and then their non-dominated front sizes and crowding distance values are compared to keep the better one.
The computation of non-dominated front size and crowding distance for each individual is the same as for NSGA-II <cit.>.
Due to the simple topologies of tree architectures, this will generate many duplicated individuals.
To address this issue, a simple archive stores the individuals that have appeared and identifies whether a newly generated individual has already occurred.
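For illustration, a minimal sketch of this binary tournament could look as follows, assuming that NSGA-II front indices and crowding distances have already been computed, that a lower front index wins, and that ties are broken by a larger crowding distance.

```python
import random

def mating_selection(population, front, crowding, pool_size):
    """Binary tournament selection for the mating pool.

    front and crowding are lists aligned with population (assumed precomputed as in NSGA-II).
    """
    pool = []
    while len(pool) < pool_size:
        a, b = random.sample(range(len(population)), 2)
        if front[a] != front[b]:
            winner = a if front[a] < front[b] else b      # lower front index is better
        else:
            winner = a if crowding[a] >= crowding[b] else b  # larger crowding distance is better
        pool.append(population[winner])
    return pool
```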
In addition, there are the population initialization strategy and the individual repairing strategy in the proposed approach.
§.§.§ Population Initialization
Instead of evolving architectures entirely from scratch <cit.>,
we aim to introduce prior knowledge about existing CDMs' diagnostic functions into the search process.
To this end, one half of the individuals in the population are generated from four existing CDMs (IRT, MIRT, MF, and NCD) by applying the proposed genetic operation.
To maintain the diversity of the population and avoid getting trapped into local optima,
another half of individuals are randomly generated from scratch.
Here, we utilize a hyperparameter Node_range = {node_h1, node_h2} to limit the number of computation nodes
sampled in each randomly generated individual,
where node_h1 and node_h2 refer to the lower and upper bounds of the number of generated nodes.
§.§.§ Individual Repair
Most operators in Table <ref> can be applied to the input with any shape,
except for Sum, Mean, FFN, and Concat, which can only receive one-dimensional vectors as their inputs.
The first three operators are specially used to extract a high-level scalar feature from vectors, while Concat is specially used to concatenate and map two vectors to one vector.
Therefore, one generated individual is infeasible and needs repairing if its contained nodes are equipped with the above four operators but take scalar inputs (termed infeasible nodes).
To tackle this issue, we first execute the post-order traversal for each individual to check whether each node is feasible and then directly replace the operator of the infeasible node with other unary operators or other binary operators (e.g, replace Concat by Add, and replace Mean by Neg).
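A schematic sketch of this repair step is shown below; the tree-node interface (is_leaf, is_binary, output_is_scalar, left, right, op) and the replacement pools are illustrative assumptions.

```python
import random

VECTOR_ONLY = {"Sum", "Mean", "FFN", "Concat"}   # operators that only accept vector inputs
UNARY_POOL = ["Neg"]    # e.g., replace Mean by Neg; other unary operators from the operator table could be added
BINARY_POOL = ["Add"]   # e.g., replace Concat by Add

def repair(node):
    """Post-order traversal: replace vector-only operators whose inputs are scalars."""
    if node is None or node.is_leaf():
        return
    repair(node.left)
    repair(node.right)
    has_scalar_input = any(child is not None and child.output_is_scalar()
                           for child in (node.left, node.right))
    if node.op in VECTOR_ONLY and has_scalar_input:
        node.op = random.choice(BINARY_POOL if node.is_binary() else UNARY_POOL)
```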
§.§.§ Complexity Analysis
The time complexity of the proposed EMO-NAS-CD is mainly determined by two components, i.e., the training of each architecture and the optimization process of NSGA-II. Suppose the size of a training dataset is |D_train|,
the time complexity of training each architecture <cit.> is O(Num_E× |D_train| × D), and the time complexity of one generation of NSGA-II is O(Pop^2) <cit.>. Therefore, the overall time complexity of EMO-NAS-CD is
O(Pop× Gen × Num_E× |D_train| × D) +O(Pop^2× Gen). Since Num_E× |D_train| × D ≫ Pop× Gen, the time complexity of EMO-NAS-CD can be regarded as O(Pop× Gen × Num_E× |D_train| × D).
On the other hand, its space complexity is mainly determined by the population and the offspring population, and each population has Pop individuals encoded by trees.
Suppose the average number of computation nodes in the trees is AvgNum,
the space complexity of an individual is O(AvgNum*3) since each node needs three numbers to specify its operation and two subtrees.
As a result, the whole space complexity of EMO-NAS-CD is O(AvgNum*3× Pop × 2), i.e., O(AvgNum× Pop × 6).
In addition, during architecture evaluation, the memory consumption of EMO-NAS-CD is determined by the hidden vectors of size 1× D in each architecture, where the number of hidden vectors is determined by the number of contained computation nodes plus three leaf nodes.
Supposing the average number of computation nodes in each architecture is AvgNum_node,
this evaluation-time space complexity is O(Pop× Gen ×(AvgNum_node+3)× D).
§ EXPERIMENTS
§.§ Experimental Settings
§.§.§ Datasets
To verify the effectiveness of the proposed EMO-NAS-CD, we conducted experiments on two real-world education datasets, including ASSISTments <cit.> and SLP <cit.>.
We summarize the statistics of the two datasets in Table <ref> and describe them as follows:
* ASSISTments (ASSISTments 2009-2010 skill builder) <cit.> is an openly available dataset created in 2009 by the ASSISTments online tutoring service system.
Here we adopted the public corrected version that does not contain duplicate data.
As can be seen, there are more than 4 thousand students, nearly 18 thousand exercises, and over 300 thousand response logs in the dataset.
* SLP (Smart Learning Partner) <cit.> is another public education dataset published in 2021.
SLP collects the regularly captured academic performance data of learners during their three-year study on eight different subjects, including Chinese, mathematics, English, physics, chemistry, biology, history and geography.
The dataset contains nearly 58 thousand response logs of 1,499 students on 907 exercises.
According to the experiences of previous work <cit.>,
we filtered out students with fewer than 15 response logs for all datasets to ensure that
there are sufficient data to model each student for diagnosis.
§.§.§ Compared Approaches and Metrics
To validate the effectiveness of the proposed approach,
we compared the diagnostic function architectures found by the proposed EMO-NAS-CD with state-of-the-art CDMs,
including DINA <cit.>, IRT <cit.>, MIRT <cit.>, MF <cit.>, NCD <cit.>, RCD <cit.>, CDGK <cit.>, and KSCD <cit.>.
The detailed descriptions of these comparison CDMs can be found in Section <ref>.
The source codes of most compared approaches are available at <https://github.com/orgs/bigdata-ustc/repositories>.
Note that
the results of RCD on SLP are not reported since RCD needs extra manually enhanced inputs that SLP does not have.
To measure the performance obtained by all CDMs, three evaluation metrics including AUC, accuracy (ACC), and root mean square error (RMSE) are adopted.
§.§.§ Parameter Settings
* 1. Architecture Settings: The dimension D is equal to the number of knowledge concepts K;
H_1, H_2, and H_3 are set to 512, 256, and 1, respectively.
* 2. Search Settings: During the search process in the proposed EMO-NAS-CD,
each student's response logs in each dataset are randomly split into 70%, 10%, and 20% as training, validating, and testing datasets, respectively.
To train the architecture encoded by each individual,
the Adam optimizer with a learning rate of 0.001 is used to optimize the Cross-Entropy loss between the prediction results and the targets,
where the size of each batch is set to 128, and the number of training epochs Num_E is set to 30.
For the proposed EMO-NAS-CD, the population size Pop is set to 100, the maximal number of generation Gen is set to 100, and the initial node range Node_range is set to {2,4}.
* 3. Training Settings
For more convincing results,
we adopted multiple different settings to split the dataset into training and test datasets for evaluating the model performance,
where the settings contain 50%/50%, 60%/40%, 70%/30%, and 80%/20% as suggested in <cit.>.
Each found architecture needs to be retrained from scratch for 50 epochs;
the remaining settings are the same as those in the above search settings.
For a fair comparison, the parameter settings of all comparison CDMs are the same as those in their original papers to hold their best performance.
All experiments were conducted on an NVIDIA RTX 3090 GPU.
Our source code is available at <https://github.com/DevilYangS/EMO-NAS-CD>.
§.§ Effectiveness of The Proposed EMO-NAS-CD
Table <ref> summarizes the prediction performance comparison between the proposed EMO-NAS-CD and comparison CDMs in terms of ACC, RMSE, and AUC values that are averaged on 30 independent runs on the two datasets, where five different splitting settings are considered.
Here seven architectures (with different degrees of model interpretability) found by EMO-NAS-CD in a single run are selected for comparison,
where architectures A1 to A7 are found on the ASSISTments and architectures S1 to S7 are found on the SLP.
To this end, among architectures that have similar model interpretability,
the architecture with the best performance is selected for final comparison.
For more convincing explanations,
Table <ref> further shows averaged results corresponding to A1 (S1) to A7 (S7),
where the averaged counterpart of A1 refers to the results averaged over ten different runs of EMO-NAS-CD:
in each run, the architecture with interpretability similar to A1 is used for this average, and the same procedure is used to obtain the averaged counterparts of A2 to A7 and S1 to S7.
Besides, the Friedman test with Nemenyi procedure <cit.> (under significance level α=0.05) was conducted on the results of comparison CDMs and A1 (S1) to A7 (S7), which is a nonparametric statistical procedure to check whether a set of samples are statistically different.
Table <ref> summarizes the statistical results including significance analysis and rank of each method, where '1' indicates significant difference between two methods and '0' otherwise.
As can be observed from Table III and Table IV,
nearly all architectures found by EMO-NAS-CD (except for the simplest architectures A1 and S1)
exhibit significantly better performance than all comparison CDMs.
Take the results under the splitting setting of 80%/20% for analysis,
and the boxplots for AUC values (under this setting) of comparison CDMs and seven found architectures are further presented in Fig. <ref> for explicit observation.
As can be seen,
the most effective architecture A7 outperforms the current best CDM (RCD) by over 0.07 on the ASSISTments dataset in terms of the AUC value.
Even for the simplest architecture A1,
its performance is still superior to that of most CDMs:
it is competitive with KSCD and only worse than RCD, but both KSCD and RCD use extra input information to enhance their performance.
Therefore, compared to the CDMs that do not have such input information,
the performance difference between our best-found architectures and these CDMs is more significant:
the performance leading of A7 over the best of these CDMs is up to 0.08 in terms of AUC values, and architecture A1 also outperforms these CDMs.
It can be seen that the proposed approach achieves such a tremendous performance improvement by only designing more effective architectures without extra input information.
In addition, from the comparisons between A1 to A7 and their averaged counterparts,
we can find that the standard deviation of the proposed approach is very small.
We can make the same observations and conclusions based on the results on the SLP dataset.
For a deep insight into found architectures, we presented all non-dominated individuals found by the proposed approach on two datasets in Fig. <ref> and Fig. <ref>,
where the architectures corresponding to these individuals are further plotted in the right parts of two figures.
As can be observed,
A1 or S1 is the shallowest architecture, which holds the highest model interpretability but worse prediction performance,
while A7 or S7 is the deepest architecture, which holds the best performance but the worst interpretability among all selected architectures.
In addition, we can obtain some interesting and insightful observations from these best-found architectures on two datasets.
Firstly, from the comparisons of S1 and S2, A2 and A3, as well as A4 and A5, we can find that adding a proper activation such as Sigmoid or Softplus can enhance the model performance
without losing interpretability;
Secondly, in most shallower architectures, the exercise-related input 𝐡_E tends to be directly combined with the student-related input 𝐡_S by some binary operators,
while in most deeper architectures, 𝐡_E tends to be first combined with the knowledge concept-related input 𝐡_C and then combined with 𝐡_S.
Finally, all shallower architectures prefer FC layers as their second parts to output the final prediction,
while for the deeper architectures with better performance,
the Identity operation seems to be a more effective second part.
These deeper architectures commonly obtain the final prediction with the assistance of the Mean operator.
The above observations provide some valuable guidelines for manually designing novel CDMs.
§.§ Architecture Transferring Validation
As can be seen from Figs. <ref> and <ref>,
the two sets of best architectures selected on the two datasets differ somewhat from each other.
Only architecture A1 is the same as architecture S1 and similar to S2 and S4, and architectures A6 and A7 are similar to architecture S5.
To further investigate the transferability and generalization of the found architectures,
Table <ref> presents the performance of architectures A2 to A7 and
architectures S2 to S7 on the two datasets under the splitting setting of 80%/20%,
where the results of A1 and S1 are not contained since they have the same architecture.
As can be observed,
the architectures found on the ASSISTments still hold competitive performance on the SLP;
similarly, the architectures found on the SLP also hold comparable performance on the ASSISTments.
Note that architecture A5 and S2 have the best generalization to hold the most promising performance on both datasets.
§.§ Ablation Study
This section will validate the effectiveness of some devised strategies and analyze the parameter sensitivity.
In the following, only the results on the SLP dataset are presented due to the higher search cost on the ASSISTments.
To verify the effectiveness of the proposed initialization strategy,
we equipped the proposed approach with other two initialization strategies to form two variant approaches, EMO-NAS-CD (random) and EMO-NAS-CD (existing).
The initial population of the former is randomly generated, and the latter initializes its population purely from existing CDMs.
Besides, we also established another variant called EMO-NAS-CD (crossover+mutation), which generates offspring by first applying the Exchange operation and then randomly applying one of the other three operations.
As a result, Fig. <ref> presents the convergence profiles of hypervolume (HV) <cit.> obtained by
the proposed approach and its variants.
HV measures convergence and diversity of a population, and a large HV value indicates a good convergence and diversity.
The comparison between EMO-NAS-CD and other two variants indicates that the suggested population initialization strategy can indeed speed up the convergence and lead to better final convergence.
Besides, we can observe that the proposed genetic operation is significantly better for the proposed approach than the compared genetic operation.
The reason behind this is that successively employing two sub-genetic operations to generate offspring will cause major modifications between the generated individuals and their parent individuals,
which can promote the exploration of the algorithm but hinder the exploitation of the algorithm to some extent.
To sum up, the effectiveness of the proposed initialization and genetic operation can be demonstrated.
To validate the effectiveness of objective f_2(𝒜) of (<ref>) in assisting the proposed approach to search interpretable CDMs,
Fig. <ref> exhibits the non-dominated individuals found by two variants of the proposed approach and plots
their six representative architectures for observation.
Here, the first variant takes the model complexity as the second objective: f_2^com = 1 - (num_c + breadth)/30,
and the second variant computes the model complexity as f_2^com_dep = 1 - (num_c + breadth + depth - 1)/30.
f_2^com is measured by the sum of the numbers of computation nodes and leaf nodes,
and f_2^com_dep additionally considers the influence of the tree depth (30 is a parameter used for normalizing the objective value).
As can be seen from Fig. <ref>(a), compared to the architectures in the right part,
the architectures in the left part have better performance but at the expense of a much larger increase of depth.
Besides, the architectures (located in the upper left area) are much deeper compared to the architectures with similar performance in Fig. <ref>.
The reason behind this is that the objective of model complexity prefers adding a unary operator node, whereas adding a binary operator node would introduce an extra leaf node, leading to a worse objective value.
The same observation and conclusion can be drawn from Fig. <ref>(b),
where the found architectures are still very deep.
This is because f_2^com_dep is basically the same as f_2^com
yet implicitly assigns a smaller penalty to binary operator nodes,
where the assigned penalty is still larger than the penalty assigned to unary operator nodes by f_2^com_dep.
Finally, the effectiveness of the devised model interpretability objective can be validated.
To analyze the sensitivity of the proposed approach to the framework of MOEAs and the hyperparameters Pop and Node_range,
Fig. <ref> compares HV values on the SLP obtained by EMO-NAS-CD under different hyperparameter combinations of Pop and Node_range.
According to Taguchi method <cit.>, Pop is set from 10 to 120 with step equal to 10, while node_h2 in Node_range is set from 1 to 12 with step equal to 1 and node_h1 is fixed to 2.
The original EMO-NAS-CD is under NSGA-II, but EMO-NAS-CD[NSGA-III] and EMO-NAS-CD[VAEA] are EMO-NAS-CD under NSGA-III <cit.> and VAEA <cit.>, respectively.
As can be seen from Fig. <ref>, firstly, the proposed EMO-NAS-CD is robust to the framework of MOEAs;
secondly, the proposed EMO-NAS-CD can obtain relatively good performance when the population size is greater than 80,
and it is not necessary to set Pop to 120 for a slightly higher HV value at the expense of about 20% extra computational cost;
thirdly, the setting of node_h2 has a big influence on the result of the EMO-NAS-CD,
and EMO-NAS-CD can obtain relatively good performance when node_h2 lies from 3 to 5.
Therefore, current hyperparameter settings for EMO-NAS-CD are good enough to some extent.
§.§ Discussion
This section will discuss three guidelines for researchers in various domains after the experiments.
The first guideline is for researchers in NAS.
To design a task-specific NAS approach, researchers should make the best of their domain knowledge to create a search space. By doing so, the search space can include existing models for the target task and many other potential models. In addition, the search strategy should also be based on the search space's characteristics and the target task's domain knowledge.
The second guideline is for researchers in CD, inspiring them on how to design effective CDMs, where the detailed guideline can be found in the last paragraph of Section <ref>.
The third guideline is for researchers interested in NAS and intelligent education.
Considering the success made by our approach, it is promising for other tasks in intelligent education to employ the NAS technique to design effective neural architectures.
Besides, researchers can borrow experiences from this paper to design the objectives of model interpretability, generalization, and robustness, formulate their multiple objectives as a MOP, and then employ a suitable MOEA to solve the MOP.
§ CONCLUSION AND FUTURE WORK
In this paper, we proposed to design novel CDMs by leveraging evolutionary multi-objective NAS.
Specifically, we first proposed an expressive search space for CD,
which contains a large number of potential architectures and existing architectures.
Then, we proposed an effective MOGP to search high-performance architectures with high interpretability in the search space by optimizing the MOP having two objectives.
To avoid some optimization difficulties, each architecture is first transformed into its corresponding tree architecture and then encoded by tree-based representation for easy optimization.
Besides, in the proposed MOGP, an effective genetic operation is designed for offspring generation,
and a population initialization strategy is devised to accelerate the convergence.
Experimental results demonstrate the superiority of the architectures found by the proposed approach to existing CDMs in terms of performance and model interpretability.
This work has shown the promising prospect of leveraging NAS for CD, but there still exist some threats to the validity of the proposed approach, including internal, external, and construct threats.
Firstly, the devised model interpretability objective is the primary internal threat.
As seen from Fig. <ref>, Fig. <ref>, and Fig. <ref>,
the proposed approach will find relatively different architectures when different model interpretability objectives are adopted. Besides, the proposed model interpretability objective is empirically designed based on some experiences from decision trees, which may limit the emergence of novel architectures as well as the application of found architectures in the real world due to a somewhat arbitrary definition of model interpretability.
Therefore, we would like to design more reasonable model interpretability objectives in the future.
Secondly, the dataset utilized in the proposed approach is the main external threat.
We can find that the found architectures on different datasets are quite different, which indicates that the architectures found by the proposed approach on a single dataset are not general for the cognitive diagnosis task.
Besides, the size of the utilized dataset affects the search efficiency of the proposed approach,
which leads to an extremely high computation cost when a large-scale dataset is met, e.g., the search cost on ASSISTments is about 15 GPU days.
Therefore, in the future, we would like to design generalized CDMs and explore surrogate models <cit.> to reduce the search cost.
Finally, the proposed search space is the main construct threat since it is designed based on the summary of existing architectures and forces all architectures to be single-root trees. Despite high effectiveness, the current search space may limit the emergence of more potential architectures since CDMs should not always be single-root trees. Therefore, it is interesting to devise other types of search space to contain more effective CDMs.
|
http://arxiv.org/abs/2307.04536v1 | 20230710130127 | DADO -- Low-Cost Selection Strategies for Deep Active Design Optimization | [
"Jens Decke",
"Christian Gruhl",
"Lukas Rauch",
"Bernhard Sick"
] | cs.LG | [
"cs.LG",
"cs.CE"
] |
In this experience report, we apply deep active learning to the field of design optimization to reduce the number of computationally expensive numerical simulations.
We are interested in optimizing the design of structural components, where the shape is described by a set of parameters. If we can predict the performance based on these parameters and consider only the promising candidates for simulation, there is an enormous potential for saving computing power.
We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems. Our proposed methodology provides an intuitive approach that is easy to apply, offers significant improvements over random sampling, and circumvents the need for uncertainty estimation.
We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance.
Findings from our evaluation highlight the effectiveness of our selection strategies in accelerating design optimization.
We believe that the introduced method is easily transferable to other self-optimization problems.
Self-Optimization, Self-Supervised-Learning, Design-Optimization, Active-Learning, Numerical-Simulation
§ INTRODUCTION
High-performance computing (HPC) systems have a high energy demand, which results in significant carbon dioxide emissions contributing notably to climate change <cit.>. Numerical simulations, for instance, computational fluid dynamics (CFD) or finite element analysis (FEA), widely used in industry and research, often demand days to weeks of computing time on HPC systems, posing a particular concern in this regard. Examples include simulations for weather predictions, structural dynamics, and electrodynamics. Reducing the number of numerical simulations or accelerating them could lead to significant savings in energy consumption and the reduction of carbon dioxide emissions.
Design optimization (DO) aims to determine the optimal shape of components and typically involves many numerical simulations to identify the best design for pre-defined constraints. In recent years, deep learning methods are emerging in the field of DO to accelerate numerical simulations or to improve the overall performance <cit.>. Nevertheless, there is still the need for massive annotated training datasets. The annotations are acquired through numerical simulations, which are computationally expensive. To tackle this problem, we propose an approach to reduce the number of computer simulations required in DO processes with deep active learning (DAL) for regression. We refer to this as deep active design optimization (DADO).
In DAL, the objective is to train a deep learning model, while actively selecting and annotating the most useful samples from a non-annotated dataset <cit.>. The criteria for determining the most valuable sample depends on the specific application area and will be explicitly defined later for our use case. Unlike traditional passive learning approaches, which require a large amount of annotated data, DAL aims to reduce the annotation effort by iteratively selecting the most useful samples for annotation. In DAL, the selection is typically based on selection strategies, such as uncertainty or diversity sampling <cit.>. These strategies aim to identify samples that are expected to improve the model's performance the most. The selected samples are then annotated by an oracle, which could be a human expert or a computer simulation. The annotated samples are used to update the deep learning model, incorporating the newly annotated samples into the training process. This iterative cycle of self-selecting samples, annotating samples, and updating the model continues until a pre-defined stopping criterion is achieved. The main advantage of DAL for regression is the potential to achieve high performance with less annotated data compared to traditional supervised learning approaches <cit.>.
We conduct experiments on a real-world DO use case (cf. Figure <ref>) in the problem domain of fluid dynamics and thermodynamics, where flow deflections significantly contribute to efficiency losses in technical systems, such as piping systems for industrial heating and cooling. The objective is to discover a design that both reduces pumping power and ensures sufficient cooling capacity. This is a typical multi-objective and multi-physics optimization problem. Our approach employs DAL to reduce the number of computer simulations required for DO by selecting only the most valuable samples (i.e., those that are expected to yield the best performance gains), rather than accelerating individual simulations. We begin with a small number of randomly drawn annotated samples (i.e., designs) and a large data-pool of non-annotated samples (i.e., design candidates).
The selection strategy iteratively selects design candidates to be evaluated by the computer simulation (i.e., expert model) to provide the ground truth annotation. The objective of this approach is to maximize performance with as few requests to the expert model as possible by selecting only those design candidates that are expected to be the most valuable for the model's performance.
In typical DAL scenarios, the primary objective is to attain high predictive performance across the entire dataset.
In DADO, the primary objective is to find a multi-objective optimal solution with as few candidate evaluations as possible.
Since we are only interested in promising candidates, the predictive performance must only be high for these candidates, and it is not necessary to discriminate between mediocre and bad candidates.
Consequently, our interest lies in a prediction model that exhibits strong performance and generalization within the feature space (i.e., design space) where the optimal solution is likely to reside. Metaphorically, this concept can be linked to a shrouded mountain range, where the peaks of different mountains emerge above a dense layer of fog. Rather than focusing on the entirety of the mountain, we solely concentrate on the elevated summits. One challenge in DAL is that the selection of promising design candidates for annotating can be biased towards certain regions of the design space <cit.> which results in bad model generalization. In contrast, we deliberately induce a bias by exploiting only the most promising regions in the design space.
Thus, since conventional selection strategies are not well-suited to address our primary objective, we have developed two low-cost selection strategies that enable a model training within the relevant design space. They are characterized by their ease of implementation, low computational cost, and high effectiveness in finding promising design candidates. We refer to them as L2-Select and L2-Reject, as they select or reject design candidates based on the L2 norm. The proposed selection strategies are also applicable to other self-optimization problems and can be used to guide decision-making. Additionally, we propose two metrics tailored to the DAL regression problem to monitor and evaluate the model's performance at each iteration. Two scenarios with high and low annotation budgets with different DAL experiment parameters are investigated.
This experience report presents our proposal to address DO problems using DAL methods. In addition to the publicly available code [<https://git.ies.uni-kassel.de/jdecke/lcs4dado>] developed on an open access dataset <cit.> we provide reproducible experiments and the following contributions to the research area.
* We conduct initial research in applying DAL in the domain of DO as an optimization method to efficiently discover promising design candidates, therefore reducing the number of numerical simulations.
* We propose two novel low-cost selection strategies for multi-objective DADO. Additionally, we introduce two metrics to evaluate the model's performance.
* We make the first steps towards a deep generative active learning-based model for DO. The report also presents and discusses the challenges we encountered during this process.
The remainder of this article is structured as follows. Section <ref> briefly overviews related work, focusing specifically on deep learning in design optimization and active learning. Section <ref> delves into the considered problem domain, providing a concise discussion on design optimization and focusing on the domain of fluid dynamics. Furthermore, we introduce our dataset, outlining its relevance to our research. Moving on to Section <ref>, we present our methodology in detail, describing how we trained a deep neural network and highlighting the selection strategies employed. Section <ref> is dedicated to the experimental setup and its results, where we compare our newly developed selection strategies against random strategies, providing insightful analyses and statistical observations. In Section <ref>, we present an idea to extend the described method to include a variational autoencoder (VAE) for future research. Finally, the article is concluded in Section <ref>.
§ RELATED WORK
The optimization of design is a fundamental problem in engineering that has been extensively investigated over several decades <cit.>. Recently, there is a growing interest in employing machine learning methods to study DO problems. This interest is spurred by two factors: first, the emergence of new additive manufacturing techniques, which enable the production of free-form designs <cit.>; and second, the availability of computing power that allows the resolution of complex and relevant industrial problems <cit.>. For example, a current study shows the possibilities of combining DO and additive manufacturing of electromagnets <cit.>.
Nie et al. <cit.> proposed TopologyGAN in 2021. It is used to optimize the stress and strain in a simple truss structure by comparing it with a baseline conditional generative adversarial network. The authors generate a dataset comprising already optimized truss structures, which were dependent on the size and direction of the load. The model's generalization capability was evaluated by applying unknown load boundary conditions. Although TopologyGAN did not perform optimization, it was able to identify an optimal truss structure for changed boundary conditions.
The authors of <cit.> employed a graph neural network; with knowledge of the boundary conditions, they aim to generalize to previously unobserved or superimposed numerical meshes.
A study from 2022 investigates if anomaly detection algorithms can be used to solve DO problems <cit.>. A significant problem is the tradeoff between exploration and exploitation. The key finding is that anomaly detection can be used to explore the design space. Still, there is a great difficulty in exploitation because anomaly detection algorithms would consider a design candidate as already detected whose target value is only slightly better than an already known one. The methodology in this work seeks to focus on exploitation without compromising exploration.
Genetic algorithms (GA) such as the Non-dominated Sorting Genetic Algorithm 2 are well-established methods for solving DO problems; however, their convergence speed is rather slow <cit.>.
In 2022, Parekh et al. developed a generative model for electrical machines with multiple topologies by using VAE in conjunction with a prediction network <cit.>. They concatenated the design parameter spaces of two distinct machine topologies and trained a latent representation that was highly effective in reconstructing the input. The latent dimension employed was defined to be greater than the design parameter space of the more complex machine topology in the latent space. Consequently, the latent representation did not compress any information of the input, and we hypothesize that the network learned the identity of the input designs only. The prediction network extended the capabilities of the VAE to enable it to predict objective values in a supervised manner. The dataset of both machines used in their study included 28,278 designs, which is a considerable amount of data. In real-world scenarios, DO problems do not typically provide such a large dataset. So our approach aims to use a significantly smaller number of design candidate with the help of DAL without compromising the model's prediction performance. Unfortunately, it was not possible to reproduce and extend their ideas because the code and data were not publicly available.
To the best of our knowledge, DAL was not yet directly applied in DO. Nevertheless, Deng et al. introduce a comparable approach called Self-directed Online Learning Optimization for topology optimization in 2022 <cit.>. This approach integrates neural networks (NN) with numerical simulation data. The NN learns and substitutes the target as a function of design variables. At the same time, a small subset of training data is generated dynamically based on the NN prediction of the optimum. The NN fits the new training data and provides a better prediction in the region of interest until convergence. New design candidates selected for numerical evaluation are generated by adding a disturbance to the best design of the previous iteration, similar to mutation and crossover of GA. The main difference between the work of Deng et al. and this article is how the selection strategy performs. We focus on low-cost selection strategies, while they added disturbance to their design parameters. Furthermore, we have a vast dataset available to conduct our experiments offline. A request to the computer simulation can be answered instantaneously by drawing a design from the data-pool.
§ USE CASE
The DO methodology developed in this work is based on a use case from the field of fluid dynamics and thermodynamics, but can also be applied to other problems and domains such as aerospace engineering, automotive industry, civil engineering, architecture, and manufacturing. In aerospace engineering, DO is used to improve the performance and efficiency of aircraft components, such as wings, fuselage, and engines. In the automotive industry, DO is employed to enhance the performance and safety of vehicles, such as improving aerodynamics, reducing emissions, and increasing efficiency of electromagnets <cit.>. In civil engineering, DO is applied to optimize the design of structures such as bridges, buildings, and dams, in terms of strength, stability, and cost. In architecture, DO is used to improve building performance regarding energy efficiency, natural light, and structural integrity. In manufacturing, DO is employed to optimize the design of products, such as reducing material waste and improving production efficiency.
Our use case is a U-Bend flow channel. They can be found in various technical systems and applications, particularly those involving fluid transport or heat transfer. They are commonly employed in heat exchangers, such as condensers and evaporators, where they facilitate the transfer of heat between a fluid and its surroundings. U-bend flow channels can also be utilized in piping systems, refrigeration systems, air conditioning systems, and hydraulic systems to redirect or control the flow of fluids. The parameterization of the U-Bend is depicted in Figure <ref>. It is described with 28 design parameters and two target values.
The parameterized geometry utilizes six boundary points, illustrated in green, with each boundary point offering two design parameters that are allowed to vary within their respective dashed bounding boxes. Additionally, we incorporate 16 curve parameters to connect these boundary points. In Figure <ref>, we present exemplary the pressure distribution of a particular design candidate, obtained through numerical simulation using the expert model. In a subsequent post-processing analysis, the pressure loss is computed based on this simulated solution.
The design parameters determine the shape of the flow deflection, while the target values represent the pressure loss in [Pa] and the cooling capacity, which is quantified as the squared temperature difference between the heating surface and the cooling medium in [K^2m^2]. A small temperature difference corresponds to a high cooling capacity. The dataset comprises three distinct data formats for each design. However, for the purpose of this study, our focus lies solely on the parameter representation of the designs. This particular representation is chosen due to its streamlined and efficient nature, making it ideally suited for our methodology. The data is freely available and can be found in <cit.>, providing additional information on this specific use case and the numerical investigations to obtain the data.
§ METHODOLOGY
§.§ DAL Process
We present the methodology in Figure <ref>. The DAL process starts by randomly selecting initial_size design candidates for training X_train_0 (depicted as a grey box) from a data-pool (depicted as a blue box). Based on the design candidates X_train_{i}, the Expert Model determines the corresponding target values y_train_{i}, where i is the iteration loop count, indicating how many times the process has looped.
Subsequently, the design candidates and the target values are used to train the Meta Model in a supervised manner. After training, the Meta Model predicts the target values of draw_size design candidates X_draw_{i} (depicted as a green box), which are bootstrapped at random from the data-pool in every iteration. These predictions are passed to the Selector. Based on the selection strategy, the Selector chooses a subset of design candidates X_aq_{i} with the acquisition size aq_size. The Expert Model determines the true target values, and the iteration loop finishes by adding the newly acquired designs to the training dataset. Each training cycle starts with newly initialized weights of the Meta Model. This loop iterates until a defined number of iterations n_iter is reached.
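To make the loop explicit, a minimal pool-based Python sketch is shown below; all function arguments are assumed interfaces for illustration, not the authors' implementation.

```python
import numpy as np

def dado_loop(pool_X, expert_model, train_meta_model, select,
              initial_size, draw_size, aq_size, n_iter, seed=0):
    """Pool-based DADO loop: annotate only the candidates chosen by the selection strategy."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pool_X), size=initial_size, replace=False)
    X_train = pool_X[idx]
    y_train = expert_model(X_train)                 # expensive simulation (here: pool lookup)
    for i in range(n_iter):
        meta = train_meta_model(X_train, y_train)   # retrain the Meta Model from scratch
        draw_idx = rng.choice(len(pool_X), size=draw_size, replace=False)
        X_draw = pool_X[draw_idx]
        y_pred = meta(X_draw)                       # predicted objectives, shape (draw_size, 2)
        chosen = select(y_pred, aq_size)            # e.g., indices from L2-Select or L2-Reject
        X_aq = X_draw[chosen]
        y_aq = expert_model(X_aq)                   # annotate only the selected candidates
        X_train = np.concatenate([X_train, X_aq])
        y_train = np.concatenate([y_train, y_aq])
    return X_train, y_train
```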
Expert Model: The Expert Model is not directly needed in this work, since a large annotated data-pool is available. Therefore, the Expert Model can be simulated to ensure a simple and fast pool-based experimentation and evaluation. However, the introduced experimental procedure can be used in an online setting where the Meta Model has to generate annotations on-the-fly.
Meta Model: This study utilizes a multi-layer perceptron (MLP) as the Meta Model, with the first hidden layer consisting of 200 neurons and the second hidden layer comprising 100 neurons. A leakyReLU activation function and a dropout layer are applied to each hidden layer to enhance the model's generalization performance. The dropout rate is set to a constant value of 0.1. For the regression task, the output layer consists of one linear neuron for each of the two target values. Both the learning rate and the batch size are kept constant. The value for the learning rate is set to 0.0005 and the batch size to 4. An early stopping criterion in case the training error does not reduce further after 10 epochs is performed. The hyperparameters were determined based on results of preliminary studies. The weights of the best performing epoch are reloaded to evaluate the model's performance. At each process iteration, the model is trained from scratch to avoid potential bias to data selected in earlier iterations <cit.>.
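A minimal PyTorch sketch of the described Meta Model might look as follows; the class name and the fixed input dimension of 28 design parameters are our assumptions for the U-Bend use case.

```python
import torch.nn as nn

class MetaModel(nn.Module):
    """MLP surrogate: 28 design parameters -> 2 objectives (pressure loss, cooling term)."""
    def __init__(self, n_inputs=28, n_outputs=2, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 200), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(200, 100), nn.LeakyReLU(), nn.Dropout(p_drop),
            nn.Linear(100, n_outputs),   # linear output neurons for the regression targets
        )

    def forward(self, x):
        return self.net(x)
```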
§.§ Selection Strategies
We developed two simple but efficient selection strategies for DADO named L2-Select (L2S) and L2-Reject (L2R). These strategies can be characterized as simple in the sense that they are model-agnostic and they solely necessitate a point estimate for their targets. With these selection strategies, there is no need to rely on complex, computationally expensive and sensitive methods for uncertainty modeling.
First, draw_size design candidates are bootstrapped from the entirety of the non-annotated data-pool to prevent test data leakage and to ensure an unbiased test of the model after each iteration. Subsequently, the target values y⃗_⃗n⃗ of these design candidates are determined by the Meta Model.
The goal of the strategies is to choose aq_size candidates from the target value set y_draw_i:
y_draw_i = { y⃗_n | y⃗_n ∈ J_1 × J_2 }
with |y_draw_i| = draw_size, where J_1 and J_2 represent the objectives (for the use case J_1: pressure loss, J_2: cooling performance).
For the L2S selection strategy, the aq_size design candidates with the smallest magnitude (or L2-norm) of the target value vector y⃗_n are selected, cf. Equation (<ref>). The L2-norm is calculated as the square root of the sum of the squared elements y_n,j of the target vector, as shown in Equation (<ref>):
L2S(y⃗) = |y⃗| = √(∑_j=1^num_obj y_j^2)
{ L2S(y⃗_n) | y⃗_n ∈ y_draw_i, L2S(y⃗_n) ≤ L2S(y⃗_n+1) }
A graphical interpretation of this strategy is provided in Figure <ref>.
L2R uses an adapted variant of the L2-norm whose origin corresponds to the component-wise maximum predicted target values y_max,j over the currently considered design candidates in y_draw_i, cf. Equation (<ref>).
Equation (<ref>) shows the adapted L2-norm, and its graphical interpretation is given in Figure <ref>.
Instead of selecting the design candidates with the lowest values of L2R(y⃗_n), the first draw_size - aq_size design candidates are rejected, cf. Equation (<ref>). Therefore, we select the remaining aq_size design candidates, which are not rejected.
y_max,j = max{ y_n,j }, y⃗_n ∈ y_draw_i
L2R(y⃗) = √(∑_j=1^num_obj (y_j - y_max,j)^2)
{ L2R(y⃗_n) | y⃗_n ∈ y_draw_i, L2R(y⃗_n) ≤ L2R(y⃗_n+1) }
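Assuming the predicted (scaled) objective values of the drawn candidates are stored row-wise in a NumPy array, with smaller values being better for both objectives, the two strategies can be sketched as follows; the function names are ours.

```python
import numpy as np

def l2_select(y_pred: np.ndarray, aq_size: int) -> np.ndarray:
    """L2-Select: pick the aq_size candidates with the smallest L2-norm of the predicted targets."""
    scores = np.linalg.norm(y_pred, axis=1)          # distance to the origin (both objectives minimized)
    return np.argsort(scores)[:aq_size]

def l2_reject(y_pred: np.ndarray, aq_size: int) -> np.ndarray:
    """L2-Reject: reject the draw_size - aq_size candidates closest to the worst point y_max."""
    y_max = y_pred.max(axis=0)                       # component-wise worst predicted values
    scores = np.linalg.norm(y_pred - y_max, axis=1)  # distance to the worst point
    return np.argsort(scores)[-aq_size:]             # keep the candidates farthest from y_max
```

For example, `l2_select(y_pred, aq_size=25)` would return the indices of the candidates passed to the Expert Model in one S1 iteration.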
When comparing the two selection strategies in more detail, the differences in the choice of design candidates can be highlighted more clearly. In Figure <ref>, 400 design candidates are plotted following a multivariate Gaussian distribution. The selection strategy separates the aq_size selected design candidates from the unselected design candidates, which are shown in blue. Design candidates selected by both selection strategies are indicated in purple, and design candidates marked in red or green highlight the differences between the two selection strategies.
We propose L2R as an alternative to the L2S strategy because it may offer several advantages. Firstly, we assume that L2R effectively accounts for design candidates that reside at the edges of the target space, which are often overlooked by the L2S strategy. Additionally, the design candidates selected by L2R are more likely to correspond to a Pareto front, which is a key objective in multi-objective optimization. In contrast, design candidates drawn from the core of the distribution are less likely to offer diversity in the design space, which suggests that L2R may be the preferred selection strategy. We compare these selection strategies against each other and against a random selection.
§ EXPERIMENTS
§.§ Setup
We define two experimental scenarios. One is a low-budget (S1) experiment and the other is a high-budget (S2) one. The main difference between the two scenarios is the amount of initial design candidates X_train_0 for training and the number of design candidates X_aq_i added to the training dataset per iteration.
S1 holds an initial_size of 100 design candidates, its draw_size is set to 400 and the selected subset aq_size is variable in a set of {10, 20, 25, 50} design candidates per iteration until a budget of 500 design candidates is exhausted. We selected the experimental parameters for our DAL experiments based on the observation that datasets in the domain of DO are generally very small. Thus, the parameters for experiment S1 were chosen to represent a real-world scenario.
Scenario S2 consists of 500 initial design candidates X_train_0 and acquires {50, 100, 125, 200} X_aq_i per loop execution from its draw_size of 2000 design candidates until a budget of 1500 is reached. In S2, we show the process again with a larger budget as it is typically used for DO, but the amount of data can still be considered to be very small for deep learning applications.
All experiments for the multi-objective optimization are performed using the L2S, the L2R and a random selection strategy. In addition, each experiment is performed with 5 different random seeds to ensure a representative evaluation. The different DAL experiment parameters that are investigated are summarized in Table <ref>.
To evaluate the experiments presented, we employ various metrics, including the mean square error (MSE), the Spearman rank-order correlation coefficient (SROCC), as well as the mean rank (MR) and intersection metrics, which we introduce below. The results of the conducted experiments are summarized with the help of the area under the learning curve (AUC) metric in Table <ref>. This allows us to evaluate an entire optimization process in a single value per metric. To calculate the SROCC and MR metrics, it is necessary to sort the target values y⃗_n of the estimated y_draw_i and the true values y_draw_i_true according to their magnitude.
To do so, we sort the currently drawn candidates X_draw_i based on the current selection strategy S (i.e., L2S, L2R, random) and the true performance values, cf. Equation (<ref>).
The MR metric is then the average of the first aq_size indices of the set K.
Additionally, we normalize the MR to lie between 0 and 1, where 0 corresponds to its optimal value (which depends on aq_size) and 1 corresponds to the MR after the first process iteration, which is assumed to be the highest value of the process.
The optimal value of the unnormalized MR metric would therefore be aq_size · 1/2.
K = { k | x⃗_n = z⃗_k, x⃗_n ∈ X_aq, z⃗_k ∈ sort(X_draw_i, S(y_draw_i_true)) }
MR = 1/aq_size ∑_k ∈ K k
The SROCC is calculated using the first aq_size indices of both sorted lists as input and outputs a value between 0 and 1, where a value of 1 indicates that the aq_size design candidates of the predicted values match the correct sorting of the true values.
The intersections metric assesses the accuracy of the top-rated designs.
The metric is relatively simple. It compares the aq_size selected candidates X_aq_i
against X_aq_i_true ⊂ X_draw_i, the aq_size candidates selected based on the ground-truth performance.
The intersection of both sets can be used to directly calculate the accuracy which is based on the cardinality of the intersection, cf. Equation (<ref>).
The name intersection for the metric is based on the intersection operation.
intersection = |X_aq_i ∩ X_aq_i_true|/aq_size
We prioritize SROCC, MR, and intersections metrics over classic MSE for DADO, as accurate ranking of designs is more crucial than precise estimations of target values. With the true ranking of the designs, the true y_aq_{i} values are calculated using the Expert Model. Nevertheless, we assume that DAL will lead to an improvement of the MSE of the added designs X_aq_{i} after each iteration.
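One plausible implementation of these three metrics for a single iteration is sketched below; it reflects our reading of the definitions above (ranking candidates by a selection score such as the L2S norm), and the original code may compute them differently.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_metrics(score_pred, score_true, aq_size):
    """Intersection, unnormalized mean rank (MR), and SROCC for one DAL iteration.

    score_pred / score_true: 1-D arrays of selection scores (e.g., the L2S norm) computed from
    the predicted and the true objective values of the draw_size drawn candidates; lower is better.
    """
    pred_order = np.argsort(score_pred)            # ranking of X_draw_i according to the Meta Model
    true_order = np.argsort(score_true)            # ranking according to the Expert Model
    sel_pred, sel_true = pred_order[:aq_size], true_order[:aq_size]

    intersection = len(set(sel_pred) & set(sel_true)) / aq_size
    true_rank = np.empty(len(score_true), dtype=int)
    true_rank[true_order] = np.arange(1, len(score_true) + 1)   # 1-based true rank of each candidate
    mean_rank = true_rank[sel_pred].mean()                      # unnormalized MR
    rho, _ = spearmanr(score_pred[sel_pred], score_true[sel_pred])  # rank agreement on the selected subset
    return intersection, mean_rank, rho
```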
§.§ Results
In Table <ref>, we present the AUC and the final value at the end of the process for all experiments and metrics. S1 is highlighted in green and S2 in blue, while the three selection strategies are differentiated by varying shades of gray. The results indicate that L2S outperforms L2R in every single experiment and that the random strategy consistently yields the worst results, except in the case of rnd_MSE which is the MSE between y_draw_i and its true annotations. Additionally, the quality of the results does not simply increase with its aq_size, for S1. For S2, the experiment with the smallest aq_size based on the best_MSE, the MR and the SROCC provide predictions with the highest performance. The best_MSE is the MSE between y_aq_i and its true annotations.
The superiority of the random selection strategy in the rnd_MSE metric is attributable to the bias that we attempt to impose through our selection strategies, whereby the model's predictions in regions where the selection strategies assume promising values are expected to yield a higher performance. As such, it is reasonable that models trained using the design candidates suggested by L2S and L2R would perform worse in other regions. The best_MSE identifies the MSE that was evaluated on the selected design candidates. This metric monitors the predictive performance of our model. To look at the results more closely, we selected one experiment each for S1 and S2, which we discuss in more detail below. Figure <ref> shows the results for S1 with an aq_size of 25 design candidates per iteration. Since our initial_size is 100 design candidates and our total budget is 500 design candidates, we iterate 16 times.
In Figure <ref>, we present the intersections metric, the SROCC in Figure <ref>, the rnd_MSE in Figure <ref>, and the best_MSE in Figure <ref>. The lines represent the mean values obtained from the five runs conducted for each experiment. In addition to the mean values, the plot displays the standard-error intervals for each metric and selection strategy. Throughout the course of the experiment, it is evident that the intersections and the SROCC show an increasing trend, while the rnd_MSE and the best_MSE exhibit a decreasing trend. Although the random strategy shows good predictive performance on the randomly selected design candidates, it underperforms compared to the two selection strategies on the promising design candidates. The benefits of the low-cost selection strategies become apparent upon examining the following metrics. The intersection metric shows that the process develops a self-awareness in the course of the iterations and is increasingly able to select suitable design candidates for multi-objective DO. However, it becomes apparent that the intersection metric is too strict for random selection, and despite being able to improve, models trained on the basis of random selection are still unable to satisfy this metric. Therefore, the MR and the SROCC are introduced as alternative metrics. While the visualization of the MR has been omitted due to limited space, it is proven to be a useful metric for comparing experiments (see Table <ref>). The SROCC shows a similar qualitative trend as the intersection metric, with L2S outperforming L2R and the random strategy. However, it also reveals that the random strategy improves in sorting the draw_size design candidates by rank based on their target value over the iterations, which is not reflected by the intersection metric. Unexpectedly, the L2S strategy outperforms the L2R strategy, which may be attributed to the nature of the available data. The selection strategy was originally designed for a multivariate Gaussian distribution; however, as illustrated in Figure <ref>, the two scaled target values of the real data do not conform to a Gaussian distribution, hence the solution quality of L2S exceeds that of L2R in this use case.
As stated before, the S2 experiment with an aq_size of 50 produced the best results. Therefore, we will examine the results of this experiment more closely in the Figure <ref>. Since S2 had a budget of 1500 design candidates, this implies that 20 iterations were completed.
When examining the results for S2, it becomes evident that the standard-error intervals are considerably reduced due to the larger budget. Additionally, the metrics are notably improved compared to S1. The disparities between the selection strategies mentioned earlier are also more distinct for S2, but they are in line with the outcomes previously discussed for S1. Based on the rnd_MSE, hardly any noticeable variation in prediction quality can be detected for either L2S or L2R, and the best_MSE values exhibit almost identical patterns and trends. Nonetheless, differences in the performance of L2R and L2S can be observed with the aid of the intersection and SROCC metrics. Notably, the SROCC indicates a declining slope for random selection. Also noteworthy is the high fluctuation of the best_MSE under random selection, from which it can be concluded that the prediction performance on the selected design candidates is considerably lower.
In comparing the four metrics between the final iteration of S1 (cf. Figure <ref>) and the initial iteration of S2 (cf. Figure <ref>), a considerable performance improvement is observed in favor of S1, despite both scenarios having an equal budget at that stage. This finding supports the effectiveness and benefits of our methodology in DO.
§ TOWARDS GENERATIVE DEEP ACTIVE DESIGN OPTIMIZATION
Based on our confidence in the feasibility of performing self-optimizing multi-objective optimization using DAL, we aim to augment the Meta Model in the presented process with a VAE. Similar to Parekh et al. <cit.>, we extend the VAE with an additional prediction network, thereby obtaining a multi-task regression and reconstruction model. As described in Section <ref>, we believe that their VAE is exclusively learning the identity of the two motor topologies. The reasons for this are the chosen size of the latent space and the fact that the training set used does not represent a real-world DO scenario.
Our idea is to embed the VAE into the DAL process presented above. As a selection strategy, a clustering approach in the latent representation shall be applied to separate areas of promising design candidates from less well-performing ones. The generative properties of the VAE will then be used to specifically generate new design candidates that belong to the promising area of the latent space. The smaller the latent size, the easier it will be for clustering methods to separate these areas, but the more challenging the subsequent reconstruction of the design candidates might become. The prediction network operates on the latent space in parallel with the decoder of the VAE. With the help of this additional network, the latent space can be partitioned based on the predicted target values, enabling clustering.
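As an illustration of this architecture, the following sketch (in Python with PyTorch; the hidden width, the latent dimension, and the loss weights are placeholder values rather than our final configuration) shows a VAE whose latent code feeds both the decoder and a parallel prediction head for the two target values, with the 28 design parameters as input.

import torch
import torch.nn as nn

class DesignVAE(nn.Module):
    """Sketch: VAE with a prediction head operating on the latent space."""
    def __init__(self, n_params=28, latent_dim=2, n_targets=2, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_var = nn.Linear(hidden, latent_dim)
        # Decoder reconstructs the design parameters from the latent code.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_params))
        # Prediction head maps the same latent code to the target values.
        self.predictor = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_targets))

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.mu(h), self.log_var(h)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization trick
        return self.decoder(z), self.predictor(z), mu, log_var

def vae_loss(x, x_rec, y, y_pred, mu, log_var, beta=1.0, gamma=1.0):
    rec = nn.functional.mse_loss(x_rec, x)                            # reconstruction term
    kld = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())  # KL term, weighted by beta
    pred = nn.functional.mse_loss(y_pred, y)                          # target-prediction term
    return rec + beta * kld + gamma * pred

A latent-space clustering for the selection strategy would then operate on z (or mu) together with the predicted target values.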
Although numerous experiments have been conducted to optimize the structure and hyperparameters of the VAE, a suitable trade-off between a well-separable latent space and a reconstruction error that is not excessively large has yet to be found. If the reconstruction error is too large, a high deviation between the prediction of the Meta Model and the Expert Model is to be expected. On the other hand, if the prediction performance is too low, patterns cannot be detected in the latent space. This issue may be attributed to the weighting factor β, which determines the influence of the Kullback-Leibler divergence. Several approaches have been explored to address the reconstruction and separability trade-off, such as introducing a cyclical annealing schedule <cit.> for β. However, no clear trend can be observed in Figure <ref>, which displays the absolute deviation of the reconstruction of eight random test design candidates. Each of the 28 boxes, arranged in a 7x4 grid, represents a design parameter.
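For reference, the cyclical annealing idea can be sketched in a few lines (pure Python; the number of cycles, the ramp proportion, and beta_max are illustrative values, not the settings used in our experiments).

def cyclical_beta(step, total_steps, n_cycles=4, ramp=0.5, beta_max=1.0):
    # Within each cycle, beta rises linearly from 0 to beta_max during the
    # first `ramp` fraction of the cycle and then stays at beta_max.
    cycle_len = total_steps / n_cycles
    pos = (step % cycle_len) / cycle_len  # position within the current cycle, in [0, 1)
    return beta_max * min(pos / ramp, 1.0)

# Example: schedule for a 1000-step training run.
betas = [cyclical_beta(s, total_steps=1000) for s in range(1000)]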
In our next steps, we plan to introduce cyclic training for the reconstruction and prediction tasks. We believe that the VAE, as a Meta Model, will play a central role in DADO, and therefore, we consider it the focus of our future work.
§ CONCLUSION
In this experience report, we have demonstrated the feasibility of utilizing DAL to tackle DO, leveraging a large pool of non-annotated data. The developed DAL selection strategies for regression, applied to a multi-objective DO, have shown promising results, and these results remain consistent across all experiments. The experiments show that, based on the rnd_MSE metric, the random selection strategy surpasses both of our selection strategies. This outcome is, however, unsurprising, as the MSE of the prediction is computed on randomly selected design candidates from the entirety of our data-pool. Our objective, in contrast, is to bias the model to perform well on the self-selected design candidates, which benefits self-optimization because only promising design candidates are proposed. Our assumption that the developed L2R selection strategy would outperform the L2S strategy due to its more Pareto-like selection was not confirmed by the experiments. The reason is that the method was developed under the assumption of a multivariate Gaussian distribution, which our dataset does not fulfill, especially for small draw_sizes. Both strategies presented in this article rely on the L2-norm, which selects a sample circularly around an origin. To improve the robustness of the selection to differently scaled target values, one possibility is to replace the circular selection with an ellipsoid, as sketched below. Our study demonstrated that the selection strategies provide promising results with two target values; an extension to higher-dimensional multi-objective optimization should be straightforward.
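To illustrate this point, the following NumPy sketch (purely illustrative; it assumes minimization with the origin of the scaled target space as the ideal point, and axis_scales is a hypothetical per-objective scaling) contrasts the circular L2-norm selection with an ellipsoidal variant that rescales each objective before measuring the distance.

import numpy as np

def select_l2(y_scaled, aq_size):
    # Circular selection: smallest L2 distance of the scaled targets to the origin.
    dist = np.linalg.norm(y_scaled, axis=1)
    return np.argsort(dist)[:aq_size]

def select_ellipsoid(y_scaled, aq_size, axis_scales):
    # Ellipsoidal selection: each objective is divided by its own scale first,
    # making the selection robust to differently scaled target values.
    dist = np.linalg.norm(y_scaled / np.asarray(axis_scales), axis=1)
    return np.argsort(dist)[:aq_size]

# Example with two objectives whose ranges differ by a factor of ten.
y = np.random.rand(500, 2) * np.array([1.0, 10.0])
idx_circle = select_l2(y, aq_size=25)
idx_ellipse = select_ellipsoid(y, aq_size=25, axis_scales=[1.0, 10.0])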
Further, we have shown the limitations of incorporating a generative model into the DADO process. We plan to develop a selection strategy based on a clustering procedure in the latent space, once we have achieved a good balance between reconstruction and disentanglement in the latent space.
Subsequently, we propose integrating the complex numerical simulations into the process described in this work, enabling real-time generation of annotations for design candidates outside the existing data-pool. Moreover, there is clear potential in exploring the Meta Model and its hyperparameters to enhance prediction quality and accelerate DO, which was not the focus of this study. The raw data available in numerical simulations, i.e., the numerical meshes, could be investigated using graph neural networks in order to determine whether another data representation is advantageous for performance predictions in DADO.
With our research endeavors, we seek to contribute to the reduction of CFD and FEA simulations on HPC systems. By reducing the number of such simulations, we aim to lower the associated energy costs and mitigate climate-damaging emissions, thus promoting a more sustainable and environmentally conscious approach to future computational simulations.
§ ACKNOWLEDGMENT
We express our gratitude to Dr. Franz Götz-Hahn for the insightful discussions.
|
http://arxiv.org/abs/2307.06081v1 | 20230712110519 | Navigating the Complexity of Generative AI Adoption in Software Engineering | [
"Daniel Russo"
] | cs.SE | [
"cs.SE"
] |
[email protected]
0000-0001-7253-101X
Department of Computer Science, Aalborg University
A.C. Meyers Vaenge, 15, 2450
Copenhagen
Denmark
This paper explores the adoption of Generative Artificial Intelligence (AI) tools within the domain of software engineering, focusing on the influencing factors at the individual, technological, and social levels. We applied a convergent mixed-methods approach to offer a comprehensive understanding of AI adoption dynamics. We initially conducted a structured interview study with 100 software engineers, drawing upon the Technology Acceptance Model (TAM), the Diffusion of Innovations theory (DOI), and the Social Cognitive Theory (SCT) as guiding theoretical frameworks. Employing the Gioia Methodology, we derived a theoretical model of AI adoption in software engineering: the Human-AI Collaboration and Adaptation Framework (HACAF). This model was then validated using Partial Least Squares – Structural Equation Modeling (PLS-SEM) based on data from 183 software professionals.
Findings indicate that at this early stage of AI integration, the compatibility of AI tools within existing development workflows predominantly drives their adoption, challenging conventional technology acceptance theories. The impact of perceived usefulness, social factors, and personal innovativeness seems less pronounced than expected. The study provides crucial insights for future AI tool design and offers a framework for developing effective organizational implementation strategies.
[300]Social and professional topics Computing industry
[500]Social and professional topics Management of computing and information systems
[500]Social and professional topics Project and people management
Navigating the Complexity of Generative AI Adoption in Software Engineering
Daniel Russo
Received: 2 February 2023 / Accepted: 4 July 2023
===========================================================================
§ INTRODUCTION
The transformational promise of Artificial Intelligence (AI) is becoming increasingly evident across various sectors, with AI models demonstrating human-like competencies in areas as diverse as natural language understanding and image recognition <cit.>. One domain where this potential is particularly salient is software engineering, a critical function within contemporary organizations. This significance is underscored by the increasing pervasiveness of software in a broad range of products and services, with digital features enhancing their value <cit.>. Recent technological advancements are leading to a reframing of computer languages as just another language, thereby opening new vistas of opportunity in the form of Generative AI and Large Language Models (LLMs) across the entire software development lifecycle.
For instance, AI tools have the capacity to serve as valuable allies to software developers and product managers in the initial phases of new feature development. Through the use of Generative AI, these tools can examine, distill, and label voluminous data sources, such as user responses, market trends, and system logs.
In the domain of systems analysis and design, AI-based tools can generate a myriad of IT architectural designs and rapidly adjust potential configurations, thereby streamlining the system design process and accelerating product launches.
During the coding phase, AI tools can autonomously generate code, reducing development time by assisting in the crafting of initial drafts, swiftly detecting cues, and serving as a readily accessible knowledge bank.
In parallel, during the testing phase, AI tools can generate test cases and automate specific testing functions.
Lastly, in system maintenance, software engineers can leverage AI-derived insights from system logs, user feedback, and performance metrics to assist in identifying problems, suggesting solutions, and predicting other areas necessitating substantial improvement.
The implications of AI for the field of software development could be momentous, with predictions indicating a surge in productivity ranging from 20 to 45 percent <cit.>. This substantial increase could be achieved by streamlining traditional tasks like crafting preliminary code drafts, refining existing code structures (refactoring), or conducting thorough root-cause analyses. The integration of AI not only reduces the time commitment for these activities but also enhances the overall efficiency and effectiveness of the software development process <cit.>.
Nonetheless, despite the prospective advantages, the incorporation of language models into software engineering appears to be intricate and fraught with challenges. Indeed, there are even indications that usage of Large Language Models is on the decline, possibly as a result of end-user experimentation that found them to be ill-suited to their requirements <cit.>.
Consequently, a pressing need exists to unravel the core determinants influencing the adoption of Generative AI tools, such as LLMs. A diverse range of elements shapes how and why software engineers decide to employ language models. These include both technical components, such as model quality and performance, and non-technical components, including perceived utility and ease of use <cit.>.
Yet, there has been limited empirical research on the factors that influence language model adoption in software engineering.
Hence, we formulate our research question as follows:
Research Question:
What influences the adoption of Generative AI tools in software engineering?
In our endeavor to explore our research question, we have applied a convergent mixed-methods approach, investigating the adoption of Generative AI and Large Language Models within the sphere of software engineering.
To frame our understanding of AI adoption, we referenced three principal theoretical frameworks examining individual, technological, and social-level influences. These frameworks included the Technology Acceptance Model (TAM) <cit.>, the Diffusion of Innovations theory (DOI) <cit.>, and the Social Cognitive Theory (SCT) <cit.>. By incorporating these well-validated theories, we could thoroughly comprehend the determinants of language model adoption and investigate the distinct ways these variables are operationalized in the software engineering domain.
We initiated our research by conducting a structured interview study with a cohort of 100 software engineers. The design of these interviews was influenced by the main dimensions of our selected theories.
The collected data was analyzed using the Gioia Methodology <cit.>, facilitating the development of our preliminary theoretical model.
This provisional theoretical model was then validated using Partial Least Squares – Structural Equation Modeling (PLS-SEM), supported by data collected from 183 software professionals. The convergence of insights derived from this comprehensive and multifaceted investigation enhances our understanding of AI adoption within software engineering. Moreover, by understanding the adoption dynamics and impact of these disruptive technologies, this research holds potential to guide the design of future AI tools and offer pertinent recommendations for organization-wide implementation strategies.
The structure of this article unfolds as follows. In Section <ref>, we delve into a comprehensive review of related works. Our mixed-methods investigation commences with an initial theory induction, a process we detail thoroughly in Section <ref>. We then analyze the results of this process in Section <ref>. From these findings, we craft our theoretical framework in Section <ref>, elucidating its hypotheses. Our model undergoes a rigorous validation process using Partial Least Squares - Structural Equation Modeling, as reported in Section <ref>. In the concluding sections, we reflect on the broader implications and potential limitations of our study in Section <ref>, and sketch out future research trajectories in Section <ref>.
§ RELATED WORK
Large Language Models, such as the Generative Pre-trained Transformer (GPT) family developed by OpenAI <cit.> and its subsequent architectures, have revolutionized the field of natural language processing. They have the capacity to generate human-like text and have a broad understanding of language and facts about the world, yet they are essentially complex mathematical models trained on large amounts of data. The genesis of LLMs as Generative AI emerged from the capacity of these models to generate new content that is highly coherent, contextually relevant, and linguistically sophisticated <cit.>. A primary reason behind their evolution as Generative AI is the integration of Transformer architectures <cit.>, which significantly improved their understanding and generation of long-range context. Additionally, the availability of massive amounts of text data and computational power has enabled these models to learn from a broad context, thereby making them efficient text interpreters and generators <cit.>.
This is why we are using Generative AI and Large Language Models interchangeably throughout this paper.
A multitude of academic disciplines are currently exploring the potential implications of such advanced technologies within their respective fields. This section will delve into the particular context of software engineering, discussing the prevalent themes and concerns associated with AI tools.
§.§ Assessing Generated Code: Correctness and Quality
A prominent strand of inquiry involves assessing the accuracy and quality of code generated by AI systems such as Copilot. Studies by Nguyen and Nadi, Dakhel et al., and Yetistiren all conducted empirical assessments to evaluate the correctness of the code generated by Copilot, and found varying degrees of success depending on the programming language and the complexity of the task <cit.>. This shared focus on evaluation signifies the importance of assessing the functional integrity of the code generated by AI tools, which is a fundamental concern in software engineering.
§.§ Evaluation Criteria: Diverse Approaches
While there were commonalities in evaluating Copilot's performance, the specific aspects of evaluation varied among the studies. Nguyen and Nadi focused on the performance of Copilot across different programming languages <cit.>, while Mastropaolo et al. investigated the robustness of Copilot in relation to semantic-preserving changes in the natural language description <cit.>. Yetistiren conducted a comprehensive assessment of the generated code in terms of validity, correctness, and efficiency <cit.>. These differences underscore the multifaceted nature of AI code generation and the various dimensions that need assessment.
§.§ Enhancing Productivity
Productivity in software development is positively impacted by the integration of AI tools, with tools like Copilot significantly increasing the speed of code production <cit.>. While these tools are lauded for their productivity-enhancing capabilities, understanding their performance and limitations is vital to leveraging their potential effectively.
Studies by Tian et al. <cit.> and Camara et al. <cit.> shed light on the abilities and constraints of Large Language Models, such as ChatGPT. Empirical evaluation suggests that while these models demonstrate aptitude in simpler, well-structured tasks, they tend to struggle with complex tasks involving semantic nuance <cit.>. Additionally, a significant relationship was identified between the length of the input sequence and the quality of the output, with longer inputs often leading to poorer results <cit.>.
These findings highlight the nuanced role of AI in software development productivity. While they enhance speed, the complexity of tasks and the length of input sequences can serve as limiting factors, pointing to areas for further improvement and optimization in these models.
§.§ Comparing Methods
Distinctly, Sobania et al. embarked on a comparative study between Copilot and genetic programming, another approach in automatic program synthesis. They concluded that, despite comparable performances, genetic programming was not as mature for practical software development as Copilot <cit.>. This comparative analysis provides a unique perspective on the landscape of automatic programming methodologies.
§.§ Pedagogical Concerns
Both Wermelinger's and Dakhel et al.'s studies touched upon the implications of AI tools like Copilot in educational settings. While Wermelinger explored the implications of Copilot on teaching and assessment methods in programming courses <cit.>, Dakhel et al. discussed the potential challenges for novice developers who might fail to filter Copilot's non-optimal solutions due to lack of expertise <cit.>. These similarities highlight the significant pedagogical implications of integrating AI tools in education.
§.§ AI's Influence on Software Development Process
Concerns around the integration and security of AI tools like GitHub Copilot are a shared finding between Jaworski and Piotrkowski <cit.> and Zhang et al. <cit.>. Both studies illustrate that despite the potential benefits of these tools, developers express hesitation and face challenges when incorporating them into their workflows, due to integration difficulties and security worries.
However, these studies diverge when examining developer interactions. Zhang et al. detail more practical aspects like programming languages and IDEs used with Copilot, whereas Jaworski and Piotrkowski focus on the developers' sentiment towards AI tools.
Ernst and Bavota <cit.>, although also discussing the complexities of AI integration, differ by highlighting additional challenges related to legal compliance and bias. This broadens the conversation on AI's impact on software development beyond technical aspects to include ethical and legal considerations.
Another commonality, albeit from a different angle, is found in Bird et al. <cit.> and Mozannar et al. <cit.>. Both studies touch on the evolving role of developers as AI tools become more pervasive. Bird et al. suggest a shift towards developers spending more time reviewing AI-generated code, whereas Mozannar et al. provide a structured analysis of developer interactions with AI tools, revealing inefficiencies and time costs.
Thus, while the studies largely converge on the transformative potential and challenges of AI tools in software development, they also bring unique perspectives to the table, expanding the discourse to include aspects like legal concerns, workflow changes, and time costs.
§.§ Community Influence and Trust in AI Tools
The role of the community in shaping developers' trust in AI tools is investigated by Cheng et al. <cit.>. They present a detailed analysis of how online communities, such as forums and discussion groups, influence developers' perception of AI tools. Their research indicates that shared experiences and collective discussions play a significant role in shaping developers' trust in AI assistance.
§.§ Usability of AI Programming Assistants
The usability of AI programming assistants has been a focal point in the research, with the key motivations for usage identified as reduction in keystrokes, quick task completion, and syntax recall <cit.>. However, developers often encounter challenges with tool control, output alignment with requirements, and difficulties with understanding, editing, and debugging generated code snippets <cit.>.
For novice programmers, cognitive and metacognitive issues arise while using these tools for assignments, indicating a need for better supportive design <cit.>. Also, developers exhibit distinct interaction modes, each requiring different forms of tool support <cit.>.
This signifies a necessity for usability improvements in AI programming assistants, focusing on user control, cognitive effort minimization, and support for interaction modes.
In sum, the review of related work underscores the transformative potential and multifaceted challenges posed by Large Language Models in the realm of software engineering. The corpus of research spans areas such as the evaluation of generated code's accuracy and quality, the augmentation of productivity, contrasting methodologies, pedagogical implications, AI's influence on software development processes, the role of community in fostering trust, and the usability of AI programming assistants. Each study contributes uniquely to our understanding of AI's role in software engineering, highlighting the complexity of the issues at hand. While the field has made significant strides in leveraging AI's potential, the need for robust evaluation, tailored usability, and mindful integration into educational and professional settings is a recurring theme.
More specifically, while existing research thoroughly investigates the performance, usability, and impact of Generative AI tools in software engineering, it primarily focuses on the tools themselves, largely overlooking the factors influencing their adoption. A notable exception is Cheng et al.'s <cit.> exploration of community influence on developers' trust. Yet, this is only one facet of the broader adoption landscape, which includes individual, organizational, technological, and environmental factors. Our research question addresses this clear gap in the literature, aiming to provide a comprehensive understanding of the factors driving or hindering the adoption of these tools.
§ THEORY GENERATION
§.§ Theoretical foundation
The multifaceted nature of technology adoption demands the application of comprehensive theoretical frameworks that can sufficiently capture and explain the influencing factors. The Technology Acceptance Model (TAM), Diffusion of Innovations Theory (DOI), and Social Cognitive Theory (SCT) together offer a robust approach towards understanding the complexity of language model adoption in software engineering.
§.§.§ Technology Acceptance Model (TAM)
TAM has been widely acclaimed for its relevance and efficiency in predicting and explaining the acceptance of various forms of technology <cit.>. Its core constructs—perceived usefulness and perceived ease of use—serve as an excellent starting point for understanding adoption behavior. For instance, if language models are perceived as beneficial and easy to use, software engineers are more likely to embrace them. Given its robustness and simplicity, TAM provides a foundation for understanding the fundamental determinants of technology adoption and aids in diagnosing the basic barriers to language model adoption.
§.§.§ Diffusion of Innovations Theory (DOI)
While TAM primarily focuses on user perceptions, DOI complements TAM by addressing the technological characteristics influencing adoption. Rogers <cit.> identified key attributes of innovations—relative advantage, compatibility, complexity, trialability, and observability—that significantly affect their adoption rates. As an innovation in software engineering, the acceptance of language models could be shaped by these attributes. For example, the relative advantage of language models over traditional programming methods could be a strong motivator for adoption. The compatibility of language models with existing practices and the complexity of these models might also play crucial roles. Thus, DOI adds depth to our understanding of technology-specific factors influencing language model adoption.
§.§.§ Social Cognitive Theory (SCT)
The decision to adopt new technologies does not occur in a vacuum. It is influenced by the social milieu within which individuals operate. SCT comes into play here by emphasizing the social and environmental factors that influence individual behaviors <cit.>. Software engineering, like any profession, has its own culture, norms, and shared beliefs that could significantly shape the adoption of language models. For instance, the prevailing norm or the extent of peer usage could encourage or discourage language model use. Moreover, the self-efficacy of individuals—shaped in part by their social environment—might affect their willingness to engage with such a new tool. SCT, therefore, adds a social layer to our understanding of language model adoption.
In summary, the triadic theoretical framework of TAM, DOI, and SCT provides a comprehensive lens to examine the adoption of language models in software engineering. By addressing the individual perceptions (TAM), technology characteristics (DOI), and social aspects (SCT), this combined framework provides a well-rounded perspective, ensuring we cover the principal aspects influencing the decision to adopt language models. This choice of theories allows us to glean insightful details that not only offer a rich understanding of the current adoption scenario but also inform strategies to expedite future adoption.
§.§ Interview Guideline
Our investigation covers three units of analysis: individual-level factors, technology-level factors, and social-level factors, inspired by the Technology Acceptance Model (TAM), the Diffusion of Innovations, and Social Cognitive Theory. We aim to reveal a comprehensive understanding of the acceptance and use of LLMs in the software engineering context. Here, we detail the final interview questions designed to capture these constructs effectively.
Individual-level factors, derived from the Technology Acceptance Model (TAM), focus on perceived usefulness, perceived ease of use, and behavioral intention:
* Perceived Usefulness: “To what extent do you think using language models increases your efficiency as a software engineer?” This question is designed to gauge how software engineers perceive the potential productivity gains from using LLMs.
* Perceived Ease of Use: “How easy do you think it is to learn how to use a new language model effectively?” This question aims to capture the perceived cognitive effort required to learn and adapt to LLMs.
* Behavioral Intention: “How likely are you to use a language model in your work in the next six months? And for which tasks?” These questions aim to evaluate the intention of software engineers to adopt LLMs in their near-future tasks.
Technology-level factors, drawing from the Diffusion of Innovations, include compatibility, relative advantage, and complexity:
* Compatibility: “How is using a language model different from your current software engineering practices?” This question assesses the perceived fit of LLMs with existing practices and workflows.
* Relative Advantage: “What potential benefits do you think language models offer over your current methods?” This question helps identify the perceived benefits of LLMs compared to traditional methods.
* Complexity: “What concerns do you have about using a language model in your work?” This question aims to highlight any perceived barriers or challenges associated with LLM adoption.
Social-level factors, grounded in the Social Cognitive Theory, encompass social influence, environmental factors, and self-efficacy:
* Social Influence: “How much do your colleagues or peers influence your decisions to use language models?” This question examines the impact of social norms and colleagues' opinions on the acceptance of LLMs.
* Environmental Factors: “In your opinion, to what extent do you feel your organization is supportive of adopting language models as a standard technology?” This question explores the role of organizational support in fostering LLM adoption.
* Self-Efficacy: “How important is it to you to be seen as someone who uses cutting-edge technology in your work?” This question aims to capture an individual's self-confidence in their ability to use advanced technologies like LLMs effectively.
Through these interview questions, we strive to understand the complex interplay of individual, technological, and social factors that contribute to the adoption and usage of LLMs among software engineers.
§.§ Participants
The data collection was executed via Prolific Academic <cit.>, a well-regarded academic data collection platform often utilized by the software engineering community <cit.>. We solicited the input of 100 software engineers, who were asked to answer a series of nine open-ended questions founded on theoretical principles. The compensation for participants exceeded the US federal minimum wage[We consistently adhered to the suggested compensation by Prolific. For your reference, participants were paid £9.00 per hour for their time.]. The survey was conducted on the Qualtrics platform.
Participants were meticulously chosen through a two-step screening process. Initially, a pre-screening phase was conducted where participants were filtered based on specific self-reported characteristics, including proficiency in computer programming, full-time employment in the software industry, not being a student, and a 100% approval rate. Following this, a competence screening was performed, as per the methods described by Danilova et al. <cit.>. This second screening involved assessing participants' knowledge and understanding in key areas, including compilers, programming aid websites, and recursive functions. Furthermore, professionals affirmed their familiarity with Generative AI tools and confirmed that they use them to some extent.
Our data collection methodology complied strictly with the ethical guidelines of the Declaration of Helsinki <cit.>. The Research Ethics Committee at Aalborg University approved this research project in March 2023. All participants were older than 18, gave informed consent prior to participating in the study, and were notified of their right to withdraw their participation at any point. Additionally, the author has completed formal training in research ethics for engineering and behavioral sciences.
In terms of participant demographics, men made up 76% of the sample, women accounted for 23%, and non-binary individuals represented 1%. Geographically, the participants came from various regions: Portugal (23%), South Africa (15%), Italy (10%), United Kingdom (9%), Poland (9%), and other countries (34%).
The professional experience of the participants ranged across various stages in the software industry: 26% had 0-1 years of experience, 54% had 2-3 years, 9% had 4-5 years, 9% had 6-10 years, and 2% had over 10 years of experience.
As for their roles in the industry, the majority were software developers or programmers (83%). This was followed by testers or QA engineers (7%), data analysts, data engineers, or data scientists (5%), team leads (3%), and UX/UI designers (2%).
§.§ Analysis of the Qualitative Data
Data analysis was implemented within the context of the naturalistic inquiry paradigm <cit.>, complemented by the constant comparison method <cit.>. These strategies play a crucial role in qualitative data acquisition and examination. The iterative process facilitates initial theory development by identifying patterns and broader dimensions <cit.> derived from continual data comparison and analysis, and by refining the emerging theory in accordance with the participants' input <cit.>.
The Thematic Analysis approach was utilized to process the data. Thematic Analysis is a commonly employed method in qualitative research, which involves identifying, analysing, and reporting patterns or themes within the data, while providing a rich, detailed, and complex account of the data <cit.>. The structured methodology proposed by Gioia et al. <cit.> served as the analytical framework. Recent trends within the Management Science community have seen the adoption of this methodology, emphasizing its potential in reinforcing scientific rigour. The approach is structured and dedicated to encouraging comprehensive theoretical progression <cit.>.
The Gioia methodology segments data processing into three stages. The inaugural stage revolves around recognizing first-order concepts, or in-vivo codes <cit.>, which align closely with the participants' own words, with minimal researcher-imposed categorization. These codes were then collated into broader themes, a process known as open coding <cit.>.
In the subsequent stage, similarities and differences are identified, and emergent themes from these comparisons contribute to the explanation and depiction of the phenomena under investigation. We explored the associations between the concepts to create our high-level themes, employing axial coding. These are the second-order themes.
The final stage amalgamates similar second-order themes into aggregate dimensions, representing the apex of theoretical contribution. This process was iterative and process-oriented <cit.>, and was perpetuated until theoretical saturation was accomplished <cit.>.
The outcome of this investigation is the data structure, which encapsulates first-order terms, second-order themes, and aggregate dimensions for each of the nine theoretical dimensions of our investigation. Notably, the aggregate dimensions were not preconceived categories defined prior to the analysis; rather, they are the end product of a refined and iterative analytical process.
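As a minor aside, the frequencies reported in the tables of the following section can be reproduced with a few lines of code once every response has been coded with its aggregate dimensions (the mapping below is an illustrative placeholder, not our actual coding).

from collections import Counter

# Each respondent is mapped to the aggregate dimensions coded for their answer.
coded_responses = {
    "R-15": ["Efficiency Improvement"],
    "R-30": ["Task-Specific Benefits"],
    "R-21": ["Limited Applicability"],
    # ... one entry per respondent
}

counts = Counter(dim for dims in coded_responses.values() for dim in dims)
n_respondents = len(coded_responses)
for dimension, count in counts.most_common():
    print(f"{dimension}: {count / n_respondents:.0%} of respondents")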
§ RESULTS
§.§ Perceived Usefulness of LLMs in Software Engineering
The Technology Acceptance Model (TAM) has been widely used in the study of technology adoption, focusing on two key predictors of acceptance: perceived usefulness and perceived ease of use <cit.>. In the context of software engineering, the perceived usefulness of Large Language Models can be examined by evaluating how they contribute to efficiency, productivity, and performance enhancement. Table <ref> provides a summary of software engineers' perceptions of LLMs in their work.
§.§.§ Efficiency Improvement
One of the main perceived benefits of LLMs is their ability to improve efficiency. As shown in Table <ref>, efficiency improvement is the most frequently mentioned aggregate dimension (55%). Engineers recognize that LLMs can automate certain tasks, reduce time and effort, and simplify monotonous tasks. For example, R-15 highlighted that LLMs can “increase my efficiency by automating certain tasks and reducing the time and effort it takes for manual coding and documentation.” This finding aligns with the TAM's emphasis on perceived usefulness, which posits that users will adopt technology if they perceive it to be useful in enhancing their performance <cit.>.
§.§.§ Task-Specific Benefits
Another aspect of perceived usefulness is the task-specific benefits LLMs provide, such as debugging, learning new features, and generating code snippets. As R-30 mentioned, LLMs have significantly increased their efficiency by helping them “debug code faster, learn about new features without scanning the whole documentation, and providing useful code snippets for work.” This category represents 26% of the aggregate dimensions and supports the notion that perceived usefulness is an important predictor of LLM adoption <cit.>.
§.§.§ Complementary Tool
LLMs are also viewed as a complementary tool to human expertise and judgment. R-17 pointed out that “language models have the potential to enhance productivity and efficiency in software engineering, but they should be used as a tool alongside human expertise and judgment.” This perception highlights the importance of balancing the benefits of LLMs with the need for human oversight, an aspect that may influence the overall perceived usefulness of the technology.
§.§.§ Limited Applicability and Quality Concerns
While many respondents reported positive perceptions of LLMs, some expressed concerns about their limited applicability (15%) and quality concerns (9%). For instance, R-21 mentioned that LLMs are “nice for generic tasks, but the models have zero knowledge about our internal APIs so they're really hard to apply.” R-55 also noted that while LLMs may help, “you have to review and understand the code anyway. So I don't know if it makes you more efficient.” These concerns suggest that while LLMs can offer benefits in certain situations, their usefulness may be limited by the need for review and adaptation to specific contexts. This finding is consistent with the TAM literature, which highlights that the perceived usefulness of a technology is not only determined by its benefits but also by its limitations <cit.>.
The perceived usefulness of LLMs in software engineering, as reflected in the efficiency improvement, task-specific benefits, and complementary nature of the technology, supports the potential for widespread adoption. However, the concerns related to limited applicability and quality highlight the importance of addressing these limitations to enhance the perceived usefulness and, consequently, the acceptance of LLMs. This analysis aligns with the TAM framework, which emphasizes that perceived usefulness is a critical determinant of technology acceptance <cit.>.
§.§ Perceived Ease of Use of LLMs in Software Engineering
In this subsection, we present the results of our qualitative analysis of the interview statements, highlighting the key factors that influence the perceived ease of use of Large Language Models in software engineering. Our analysis draws upon the Technology Acceptance Model (TAM) framework <cit.>, which posits that the perceived ease of use and perceived usefulness are essential determinants of technology adoption. We have identified several aggregate dimensions that explain the perceived ease of use of LLMs and present them in separate subsubsections, providing empirical evidence from the interview statements (Table <ref>).
§.§.§ Learning Process
Our analysis reveals that the learning process is a crucial factor influencing the perceived ease of use of LLMs in software engineering. As shown in Table <ref>, R-54 reported that “once I got familiar with the technology, it became much easier to use.” This finding aligns with the Technology Acceptance Model (TAM) proposed by <cit.>, which suggests that the perceived ease of use of a technology is directly related to its adoption. Moreover, prior research has emphasized the role of learning in the adoption of new technologies <cit.>. In this context, the learning curve associated with LLMs appears to be an essential determinant of their perceived ease of use.
§.§.§ Prior Experience
The interview data also underscore the importance of prior experience in shaping the perceived ease of use of LLMs. For example, R-20 stated that “I think it is easy when you know the different concepts.” This observation is consistent with the literature on technology adoption, which suggests that individuals with prior experience in related technologies are more likely to perceive a new technology as easy to use <cit.>. In the case of LLMs, having a background in programming languages or natural language processing (NLP) could facilitate their adoption in software engineering.
§.§.§ Individual Differences
Another key theme that emerged from our analysis is the role of individual differences in shaping the perceived ease of use of LLMs. As R-9 noted, “it depends on the person and how they are used to work.” This finding supports the notion that individual characteristics, such as cognitive style and personal innovativeness, can influence the perceived ease of use of a technology <cit.>. In the context of LLMs, the extent to which software engineers perceive them as easy to use may depend on their unique preferences, learning styles, and problem-solving approaches.
§.§.§ Intuitiveness and User Interface
The intuitiveness of LLMs and their user interface design also emerged as important factors in our analysis. For instance, R-89 mentioned that “they were pretty much made for ease of use by the average consumer.” This observation aligns with the work of <cit.>, who argued that a well-designed user interface can significantly enhance the perceived ease of use of a technology. In the case of LLMs, an intuitive and user-friendly interface could facilitate their adoption among software engineers.
§.§.§ Task Complexity
Finally, the complexity of the tasks that LLMs are used for in software engineering appears to influence their perceived ease of use. As R-53 noted, “the difficulty to learn how to use them effectively can vary as it depends on how you're using it, but for the most part, it ranges from not too hard to very hard.” This finding is consistent with the Task-Technology Fit model <cit.>, which posits that the fit between the technology and the task it is intended for affects the technology's perceived ease of use and, ultimately, its adoption. In the context of LLMs, it seems that software engineers may find them easier to use for certain tasks, while others might require a higher level of expertise and knowledge.
In summary, our analysis identified several aggregate dimensions that explain the perceived ease of use of LLMs in software engineering, including the learning process, prior experience, individual differences, intuitiveness and user interface, and task complexity. These factors provide a nuanced understanding of the adoption of LLMs in software engineering and their connection to the theoretical framework of the Technology Acceptance Model. By incorporating the empirical evidence from the interview statements and drawing on relevant literature, our findings contribute to the ongoing conversation about the role of LLMs in software engineering and the factors that influence their adoption.
§.§ Behavioral Intention of LLMs in Software Engineering
The Technology Acceptance Model (TAM) has been widely used to understand the factors influencing the adoption of new technologies in various contexts, such as software engineering <cit.>. According to the TAM, behavioral intention, which reflects the likelihood of an individual to use a specific technology, is influenced by two main factors: perceived usefulness and perceived ease of use <cit.>. In this subsection, we explore the behavioral intention of software engineers in relation to the adoption of Large Language Models, focusing on the aggregate dimensions emerged from the performed analysis (Table <ref>).
§.§.§ Code Improvement and Maintenance
A significant portion of software engineers indicated their intention to use LLMs to improve and maintain their codebase. This finding aligns with the TAM's concept of perceived usefulness, as using LLMs for code refactoring, adherence to design patterns, and implementation of SOLID principles can enhance software quality and maintainability <cit.>. For instance, R-8 mentioned, “I am considering purchasing a ChatGPT-4 subscription, mainly to refactor (legacy) code or to make it adhere to certain design patterns. It could also help refactoring code to make it more SOLID.” This quote highlights the potential value of LLMs in addressing common software engineering challenges.
§.§.§ Efficiency and Automation
LLMs were perceived to be useful for automating repetitive tasks and increasing efficiency. This perception corresponds to both perceived usefulness and perceived ease of use in the TAM, as task automation can lead to time savings and streamline the development process <cit.>. R-35 stated, “I believe I may start using a language model more and more, especially to automate tasks which can be performed by a language model and which take a significant amount of time.” The adoption of LLMs for task automation can potentially improve software engineers' productivity.
§.§.§ Learning and Problem Solving
The use of LLMs for learning and problem solving was another theme identified, which is in line with the TAM's perceived usefulness. Software engineers expressed the intention to use LLMs for tasks such as finding documentation, clarifying confusing code, and seeking information on programming-related questions. As R-25 stated, “Very likely. Mostly to find documentation for libraries, refactor and clarify confusing code.” LLMs can serve as a valuable learning and problem-solving tool for software engineers, supporting continuous professional development.
§.§.§ Specialized Applications
Respondents mentioned the potential use of LLMs for specific tasks, such as writing basic functionalities, defining tasks, and composing emails. This theme is related to the perceived usefulness of LLMs in addressing particular software engineering needs. R-62, for example, mentioned, “Very likely. For writing basic functionalities, defining tasks, for emails.” The adoption of LLMs for specialized applications can provide targeted benefits to software engineers in their daily work.
§.§.§ Adoption Barriers and Concerns
Despite the potential benefits of LLMs, some software engineers expressed concerns and barriers to adoption, such as cost, dependency on third-party services, and potential ethical issues. These concerns align with the TAM's concept of perceived ease of use, as they can hinder the adoption of LLMs <cit.>. For instance, R-60 stated, “The cost and dependency on a third-party service might be a concern.” Understanding and addressing these concerns is essential for promoting LLM adoption in software engineering.
In conclusion, our analysis reveals several factors that influence the behavioral intention of software engineers to adopt LLMs, aligning with the theoretical framework of the TAM. LLMs are perceived as useful for code improvement and maintenance, efficiency and automation, learning and problem-solving, and specialized applications, while some adoption barriers and concerns persist. By understanding these factors, we can better support the integration of LLMs into software engineering practices and promote their adoption to enhance productivity and software quality.
§.§ Compatibility of LLMs in Software Engineering
The compatibility of Large Language Models in software engineering is a crucial factor in understanding their adoption and impact on software development practices. Compatibility refers to the degree to which an innovation is perceived as being consistent with existing values, past experiences, and the needs of potential adopters <cit.>. This subsection presents a thematic analysis of the compatibility of LLMs in software engineering based on the responses of software engineers, which are summarized in Table <ref>. We have identified four aggregate dimensions, as detailed in the following sub-subsections: Improved Efficiency, Assistance and Support, Similarity to Current Practices, and Adaptation and Learning.
§.§.§ Improved Efficiency
Improved Efficiency was the most frequently occurring theme in the data, with 39% of the responses reflecting this aspect of compatibility. The use of LLMs in software engineering tasks is perceived to speed up the development process, automate mundane tasks, and ultimately improve overall efficiency. One respondent (R-19) highlighted that LLMs “can be used to automate certain tasks in software engineering,” thus reducing the time and effort spent on repetitive tasks. Similarly, R-35 noted that using a language model “will speed up the tasks of going to look for specific pieces of code from multiple websites.” These findings align with the literature, where compatibility is a key factor for technology adoption <cit.>.
§.§.§ Assistance and Support
Assistance and Support emerged as the second most frequent theme in the data, representing 28% of the responses. Respondents highlighted the value of LLMs in providing help and support, particularly in situations where traditional search methods fail to deliver desired results. R-1 mentioned that LLMs can provide an “extra hand and assistance for things I don't know and can't find with a traditional search.” This demonstrates that LLMs' ability to offer contextually relevant and targeted assistance is seen as an important aspect of their compatibility with software engineering practices.
§.§.§ Similarity to Current Practices
Similarity to Current Practices was reported by 16% of the respondents. This theme suggests that the adoption of LLMs in software engineering is facilitated by their perceived similarities to existing tools and practices. R-15 noted that there is “not much of a difference as the language model is just used to assist my current software engineering practices.” R-5 similarly stated that using LLMs is “like Googling but I don't need to filter as much information.” The perceived similarity to current practices can influence the adoption of LLMs as it reduces the barriers to their integration into existing workflows <cit.>.
§.§.§ Adaptation and Learning
Adaptation and Learning was reported by 11% of the respondents. This theme highlights the importance of learning and adapting to new tools and techniques in the software engineering domain. R-46 expressed that using LLMs is “something new: you have to learn how to use it.” Similarly, R-87 mentioned that using a language model “represents a new paradigm for me.” This theme indicates that the compatibility of LLMs in software engineering can be enhanced by promoting learning and adaptation among software engineers. The adoption of LLMs can thus be facilitated by providing resources and training to help engineers understand and integrate these tools into their daily work.
In conclusion, this subsection has explored the compatibility of LLMs in software engineering by analyzing the aggregate dimensions derived from the responses of software engineers. These dimensions—Improved Efficiency, Assistance and Support, Similarity to Current Practices, and Adaptation and Learning—provide a comprehensive understanding of the factors that influence the compatibility of LLMs in software engineering. By linking these dimensions to the Diffusion of Innovation theory <cit.>, this analysis offers valuable insights into the factors that contribute to the adoption and integration of LLMs into software engineering practices. This understanding can help inform the development of LLMs that are more compatible with existing workflows and practices, ultimately leading to more widespread adoption and use in the software engineering domain.
§.§ Complexity of LLMs in Software Engineering
The complexity of adopting Large Language Models in software engineering is a critical aspect of understanding their diffusion and impact on the industry. According to the Diffusion of Innovation theory, the complexity of an innovation influences its adoption rate, with more complex innovations facing a slower adoption <cit.>. The thematic analysis of the interview data (Table <ref>) reveals several aggregate dimensions that contribute to the perceived complexity of LLMs in software engineering. In this subsection, we discuss each of these dimensions and their implications for the adoption of LLMs, linking them to the relevant literature and the Diffusion of Innovation theory.
§.§.§ Job Security Concerns
The fear of job loss and skill devaluation emerged as a significant concern among respondents (25% frequency). Respondent R-15 stated, “I'm concerned that it can automate a lot of tasks and make most of my work obsolete.” This perspective aligns with research on the potential disruptive effects of AI and automation on the job market <cit.>. According to the Diffusion of Innovation theory, innovations that are perceived to threaten job security are likely to face resistance <cit.>. To mitigate this challenge, organizations should communicate the benefits of LLMs and focus on upskilling and reskilling employees <cit.>.
§.§.§ Dependence and Complacency
Some respondents (16% frequency) expressed concerns about junior programmers relying too much on LLMs, leading to a decline in code understanding and increased reliance on these models. Respondent R-34 explained, “My concern is other junior programmers using it without understanding the code and causing bugs (more work for me).” This challenge can be addressed by promoting the responsible use of LLMs and ensuring that programmers have a strong foundation in coding concepts.
§.§.§ Data Security and Privacy
Data security and privacy concerns were identified by 15% of respondents. They expressed concerns about LLMs being trained on sensitive data, potentially leading to privacy breaches. Respondent R-17 mentioned, “In terms of privacy, as language models can be trained on sensitive or personal data, such as emails, messages, or documents. This may raise privacy and data protection concerns.” To address this issue, developers should ensure that LLMs are trained on secure, anonymized datasets and that privacy regulations are followed.
§.§.§ Quality and Accuracy of Generated Code
Respondents also raised concerns about the quality and accuracy of code generated by LLMs (13% frequency). Respondent R-49 remarked, “Language models aren't perfect, so I would be afraid that they would cause errors.” Ensuring the reliability and accuracy of generated code is essential for LLM adoption <cit.>. To address this issue, developers should establish best practices for code review and validation, as well as invest in improving the models' performance <cit.>.
§.§.§ Ethical and Legal Considerations
Ethical and legal considerations, such as authorship and intellectual property rights, were identified by 8% of respondents. Respondent R-28 simply stated, “Author rights are tricky to attribute.” Organizations should consider the ethical implications of using LLMs and establish guidelines for their use to ensure compliance with existing laws and regulations <cit.>.
§.§.§ Bias and Explainability
Some respondents (7% frequency) highlighted concerns about the potential biases in outputs and the lack of explainability in their decision-making processes. Respondent R-62 expressed, “The biases in the model can have unintended consequences.” It is essential to address these issues to ensure the responsible use of LLMs in software engineering. Organizations can invest in research to reduce biases and improve the explainability of LLMs to enhance their trustworthiness and adoption <cit.>.
§.§.§ Integration and Compatibility
Integration and compatibility issues were mentioned by 6% of respondents, who expressed concerns about the ability of LLMs to work seamlessly with existing software development tools and practices. Respondent R-21 stated, “Integrating the model into the current workflow might be challenging.” To facilitate the adoption of LLMs, developers should ensure that these models are compatible with existing tools and can be easily integrated into the software development process.
In conclusion, the complexity of LLM adoption in software engineering is multifaceted, encompassing various concerns, such as job security, dependence, data security, code quality, ethical issues, bias, and integration challenges. By addressing these concerns, organizations can facilitate the adoption of LLMs and leverage their potential benefits in software engineering. This discussion, grounded in the thematic analysis of the interview data and the Diffusion of Innovation theory, contributes to the understanding of the factors that affect LLM adoption in the software engineering context.
§.§ Relative Advantage of LLMs in Software Engineering
The relative advantage of Large Language Models in software engineering is a key factor in their adoption, as postulated by the Diffusion of Innovation theory <cit.>. Our qualitative data analysis, based on Gioia's Methodology, highlights the various aspects of LLMs that contribute to their perceived benefits over current methods. The following sub-subsections present the Aggregate Dimensions derived from our analysis and discuss their implications in relation to the relative advantage construct (Table <ref>).
§.§.§ Time Efficiency
A recurring theme in our analysis was the time efficiency provided by LLMs, as reported by 42% of respondents. Respondents appreciated the ability of LLMs to quickly complete tasks, search for relevant information, and provide solutions to coding issues. For example, R-25 noted that LLMs significantly reduced time spent searching for information: “I believe the best thing is the time spent searching for certain things is way lower than before.” This time efficiency can be attributed to the natural language processing capabilities of LLMs, which enable users to communicate their needs more effectively and rapidly obtain tailored solutions <cit.>.
§.§.§ Code Quality
Improvements in code quality emerged as another key advantage of LLMs, with 14% of respondents highlighting the positive impact on their work. Respondents reported that LLMs provided clearer, more understandable code, which reduced errors and improved overall code robustness. R-37 stated: “It generates a simpler and more understandable code, working in a more organized way and reducing errors.” This improvement in code quality is a result of LLMs' ability to analyze and learn from vast amounts of source code, allowing them to provide optimal solutions based on best practices <cit.>.
§.§.§ User Experience
LLMs were noted to enhance the overall user experience, with 11% of respondents mentioning the ease of use and communication with the models. R-67 commented on the human-like interaction: “Language models are easier to use and faster because you can send messages like for human.” The improved user experience can be attributed to the natural language understanding capabilities of LLMs, which allow them to interpret and respond to user inputs more effectively than traditional methods <cit.>.
§.§.§ Learning and Skill Development
LLMs were also found to facilitate learning and skill development, as reported by 9% of respondents. Respondents appreciated the ability of LLMs to simplify the learning process and reduce the time spent on mastering technical concepts. R-23 explained: “I think with language models you don't have to spend that much time learning `technical' things.” This can be linked to the contextual understanding and knowledge retention capabilities of LLMs, which allow them to provide tailored guidance and support for users with varying levels of expertise.
§.§.§ Customization and Personalization
Finally, customization and personalization were highlighted as advantages by 8% of respondents. LLMs were praised for their ability to provide more digestible information and adapt responses to user preferences. R-94 described the flexibility of LLMs: “It gives more digested information, and we can 'mold' the information how we want (e.g., asking the language model to respond using short sentences, or to explain in detail certain topics, etc).” This aspect of LLMs can be attributed to their capacity for understanding context and user preferences, allowing them to generate more relevant and personalized responses <cit.>.
In summary, our thematic analysis revealed several key aspects of LLMs that contribute to their perceived relative advantage in software engineering, including time efficiency, code quality, user experience, learning and skill development, and customization and personalization. These findings align with the Diffusion of Innovation theory, suggesting that the adoption of LLMs in software engineering can be facilitated by their ability to provide clear benefits over existing methods <cit.>. Moreover, our results highlight the potential of LLMs to revolutionize the field of software engineering by streamlining processes, enhancing user experience, and fostering continuous learning and improvement.
§.§ Social Influence of LLMs in Software Engineering
The adoption of Large Language Models in software engineering is influenced by various factors, one of which is the social influence of peers and colleagues, as suggested by the Social Cognitive Theory <cit.>. The current analysis examines how the data explain “social influence” in relation to the adoption of LLMs in software engineering and how it links to the theoretical framework of the Social Cognitive Theory. Our analysis (Table <ref>) revealed four aggregate dimensions representing the range of social influences in the adoption of LLMs: No Influence, Low Influence, Moderate Influence, and High Influence.
§.§.§ No Influence
Our analysis revealed that 29% of the respondents reported no influence from their colleagues or peers on their decision to use LLMs (e.g., R-66, R-24, R-58). This finding suggests that a considerable proportion of software engineers make independent decisions about whether to adopt LLMs. This aligns with the literature on individual agency and self-efficacy in the Social Cognitive Theory <cit.>. These software engineers may rely on their own evaluation of the technology and personal preferences, rather than the opinions or experiences of their colleagues.
§.§.§ Low Influence
Low influence was reported by 24% of the respondents (e.g., R-45, R-99). This indicates that some software engineers may be slightly influenced by their peers, but ultimately retain a high degree of autonomy in their decision-making. This finding suggests that while social influence may play a role in the adoption of LLMs, individual factors, such as personal interest and perceived utility, may also significantly contribute to the decision-making process <cit.>.
§.§.§ Moderate Influence
Our analysis revealed that 21% of the respondents reported moderate influence from their peers and colleagues (e.g., R-9, R-82). This suggests that a significant proportion of software engineers value the input and feedback of their peers when deciding whether to adopt LLMs. This finding is consistent with the Social Cognitive Theory, which emphasizes the role of observational learning and vicarious experiences in shaping individual behavior <cit.>. In the context of LLM adoption, this moderate influence may result from a combination of individual factors and the experiences of colleagues.
§.§.§ High Influence
Finally, 26% of the respondents reported high influence from their peers and colleagues (e.g., R-97, R-79, R-27). This finding suggests that a considerable proportion of software engineers are strongly influenced by the collective use and enthusiasm for LLMs within their professional circles. This result aligns with the Social Cognitive Theory's focus on the reciprocal interactions between individuals and their social environment, as well as the literature on technology adoption in organizations <cit.>. In this case, the high level of influence may stem from the perceived benefits of LLMs, collaboration, and shared enthusiasm for exploring the technology's possibilities.
In summary, our analysis revealed a diverse range of social influence levels in the adoption of LLMs in software engineering. While some software engineers reported no influence from their colleagues or peers, others indicated low, moderate, or high levels of influence. These findings highlight the complex interplay between individual factors, such as self-efficacy and personal interest, and social influences, as posited by the Social Cognitive Theory. The understanding of these various levels of social influence can inform future research on the adoption of LLMs and other emerging technologies in software engineering, as well as organizational strategies for encouraging their appropriate use.
§.§ Self-efficacy of LLMs in Software Engineering
Self-efficacy, an integral construct within the Social Cognitive Theory, refers to an individual's belief in their ability to perform specific tasks and achieve desired outcomes <cit.>. In the context of software engineering, the self-efficacy of developers using Large Language Models can impact their adoption and effective utilization. Based on our thematic analysis, we identified three aggregate dimensions that contribute to understanding self-efficacy in relation to LLMs adoption: (1) Importance of being seen as cutting-edge, (2) Focus on practicality and efficiency, and (3) Low importance of being seen as cutting-edge. These dimensions will be discussed in detail in the following subsubsections, highlighting their role in shaping developers' self-efficacy and adoption behavior of LLMs in software engineering. The data structure can be found in Table <ref>.
§.§.§ Importance of Being Seen as Cutting-edge
As shown in the Table, 57% of respondents emphasized the importance of being seen as someone who uses cutting-edge technology in their work. This perception aligns with the notion of self-efficacy, as developers who consider themselves proficient in the latest technologies tend to have higher confidence in their capabilities <cit.>. For example, R-2 expressed that being up-to-date with cutting-edge technology is crucial in their line of work, and R-30 mentioned the fear of being replaced by someone perceived as more proficient in cutting-edge technology. This focus on staying current with technological advancements may encourage developers to adopt LLMs to showcase their expertise and maintain a competitive edge in the industry <cit.>.
§.§.§ Focus on Practicality and Efficiency
Our analysis revealed that 35% of respondents highlighted the importance of practicality and efficiency in their work, demonstrating a preference for using technologies that effectively solve problems or meet client needs, rather than simply being cutting-edge. This focus reflects a more task-oriented self-efficacy, where developers concentrate on finding the most appropriate tools for the job. R-10, for instance, emphasized the importance of staying ahead of the curve and delivering innovative solutions to meet evolving customer needs, while R-19 and R-100 mentioned the balance between adopting cutting-edge technologies and ensuring efficiency in their work. This suggests that developers with a focus on practicality and efficiency will adopt LLMs only when they perceive these models to provide tangible benefits to their work.
§.§.§ Low Importance of Being Seen as Cutting-edge
A smaller group of respondents (8%) assigned low importance to being seen as using cutting-edge technology in their work. These developers prioritize stability, effectiveness, or other factors over adopting the latest technologies, which may influence their self-efficacy in terms of LLM adoption <cit.>. For instance, R-21 expressed a preference for stability over implementing bleeding-edge technology, and R-66 stated that their focus is on using the most effective tools for the task at hand, regardless of whether they are cutting-edge. This finding suggests that developers with a low emphasis on cutting-edge technology may be less inclined to adopt LLMs unless they demonstrate clear benefits over existing tools.
In conclusion, our thematic analysis of self-efficacy in relation to LLM adoption in software engineering has identified three aggregate dimensions that provide valuable insights into developers' perceptions and behaviors. These dimensions suggest that the importance assigned to cutting-edge technology, along with the focus on practicality and efficiency, plays a critical role in shaping developers' self-efficacy and their likelihood of adopting LLMs.
§.§ Environmental Factors of LLMs in Software Engineering
The adoption of Large Language Models in software engineering is influenced by various environmental factors that shape organizational behaviors and decision-making processes. Drawing on the Social Cognitive Theory (SCT) framework <cit.>, this study investigates how environmental factors affect the extent to which organizations are supportive of adopting LLMs as a standard technology. The following sections present the findings of our thematic analysis, which revealed five aggregate dimensions related to environmental factors: Supportive Attitude, Neutral Stance, Conditional Support, Limited Support, and Lack of Support. Each dimension is discussed in detail with references to the data presented in Table <ref>.
§.§.§ Supportive Attitude
The most prevalent aggregate dimension in our data, Supportive Attitude, captures the positive and proactive stance of organizations in promoting LLM adoption (42% frequency). This dimension encompasses strong organizational encouragement and investment in the technology to facilitate its integration into software engineering practices <cit.>. Respondent R-5, for instance, highlights the active promotion of LLMs by their organization, stating that they “pay for it and encourage us to use it.” Similarly, R-45 reports a “very supportive” attitude, illustrating the extent to which some organizations prioritize the adoption of LLMs.
§.§.§ Neutral Stance
The Neutral Stance dimension reflects organizations that neither actively support nor oppose the adoption of LLMs (20% frequency). This stance may be attributed to the lack of awareness or knowledge about LLMs, or a wait-and-see approach to gauge the potential benefits and drawbacks of the technology <cit.>. Respondent R-84 describes their organization's position as “neutral, up to the employee,” implying that the decision to use LLMs is left to individual discretion rather than being guided by organizational policy.
§.§.§ Conditional Support
Some organizations in our sample exhibit a Conditional Support dimension (19% frequency), characterized by their willingness to adopt LLMs provided certain criteria are met, such as the technology demonstrating clear benefits or aligning with specific organizational objectives <cit.>. In this context, R-26 notes that their organization supports LLM adoption “if it brings more advantages to the team and the way we work, and make things faster.”
§.§.§ Limited Support
The Limited Support dimension (12% frequency) represents organizations that only support the use of LLMs in specific contexts or for certain tasks. This selective approach may stem from concerns related to security, privacy, or ethical considerations. For example, R-64 explains that their organization “opposes them when working on new features but for debugging, they can be quite helpful.”
§.§.§ Lack of Support
Finally, the Lack of Support dimension (15% frequency) captures organizations that actively oppose or discourage the use of LLMs in software engineering. This stance may be driven by various factors, such as ethical concerns, fear of job displacement, or skepticism about the technology's effectiveness. Respondent R-74 reveals that their organization offers limited support for LLMs, mainly due to “copyright and security concerns about proprietary intellectual property.”
In summary, our thematic analysis of environmental factors affecting LLM adoption in software engineering has identified five aggregate dimensions, ranging from strong support to active opposition. These dimensions provide valuable insights into the diverse organizational attitudes and contexts that shape the integration of LLMs into software engineering practices, contributing to a deeper understanding of the role of environmental factors in the SCT framework.
§.§ Key Insights of the Qualitative Study
Our findings demonstrate the potential benefits of LLMs in software engineering, highlighting their impact on automating repetitive tasks, enhancing problem-solving abilities, facilitating learning and understanding, improving code quality, assisting in debugging and optimization, and increasing overall efficiency. However, concerns about the limitations of LLMs, such as their lack of knowledge about internal APIs and the need for human oversight, emphasize the importance of human expertise in utilizing these tools effectively.
The perceived ease of use of LLMs in software engineering is influenced by factors such as integration with existing tools and workflows, accessibility and comprehensibility of documentation, customizability and adaptability, and support from the developer community. These factors are crucial in facilitating the adoption of LLMs and their seamless integration into the software engineering domain.
The behavioral intention to adopt LLMs in software engineering is shaped by the perceived usefulness, perceived ease of use, social influence, and facilitating conditions. These factors collectively contribute to creating an environment conducive to the successful integration of LLMs into software engineering practices.
Regarding the Diffusion of Innovation theory, the compatibility of LLMs in software engineering is influenced by dimensions such as improved efficiency, assistance and support, similarity to current practices, and adaptation and learning. However, concerns about dependency and skill degradation, privacy, security and data protection, job displacement, and labor market implications, and accuracy, reliability, and explainability underline the complexity of LLM adoption in this domain.
Our findings also reveal the relative advantage of LLMs over traditional methods in software engineering. Factors such as time efficiency, code quality, user experience, learning and skill development, and customization and personalization contribute to the perceived benefits of LLMs in this domain.
From a Social Cognitive Theory perspective, the influence of peers and colleagues on LLM adoption in software engineering varies from no influence to high influence. Self-efficacy is influenced by factors such as the importance of being seen as cutting-edge, a focus on practicality and efficiency, and the low importance of being seen as cutting-edge. Environmental factors, such as supportive organizational culture, uncertainty, security concerns, neutral organizational culture, resistance to change, and limited or marginal support, also play a role in LLM adoption.
In conclusion, the adoption of LLMs in software engineering is a multifaceted phenomenon influenced by a variety of factors and theoretical perspectives. Our study provides valuable insights into the key dimensions and concerns that shape the integration of LLMs in the software engineering domain, paving the way for future research and practice in this area.
§ THE HUMAN-AI COLLABORATION AND ADAPTATION FRAMEWORK (HACAF)
In this section, we introduce the Human-AI Collaboration and Adaptation Framework (HACAF), an innovative theoretical model designed to understand and predict the adoption of Generative AI tools in software engineering. The HACAF derives its components from several established theories including the Technology Acceptance Model (TAM), Diffusion of Innovations Theory (DOI), Social Cognitive Theory (SCT), the Unified Theory of Acceptance and Use of Technology (UTAUT), and the personal innovativeness construct.
While the original TAM, DOI, and SCT theories provide robust theoretical foundations for understanding technology acceptance and adoption, our qualitative investigation indicated the necessity for a more nuanced model. The HACAF is, therefore, not merely an amalgamation of these theories, but also an evolution, as it incorporates additional facets revealed in our research.
The inclusion of constructs from UTAUT addresses the need for a greater focus on social influence and facilitating conditions, elements that emerged as significantly influential in our qualitative findings. The additional integration of the personal innovativeness construct into HACAF is motivated by the observed variability in adoption behaviors among software engineers, even within the same contextual environment, implying a role for individual differences in innovative tendencies.
By merging these four main components into the HACAF, we not only leverage the collective strengths of these prominent theories but also account for the additional complexities of technology adoption that surfaced in our qualitative investigation. Consequently, the HACAF represents a tailored approach that reflects the multifaceted nature of LLM adoption in software engineering. This comprehensive framework aims to provide a deeper understanding of the complex dynamics involved in the adoption of LLMs and act as a guide for future research and practice in this rapidly evolving domain.
Perceptions about the technology are a cornerstone of HACAF, rooted in TAM. This construct denotes a software engineer's evaluation of the usefulness, ease of use, and relative advantage of LLMs. The qualitative data substantiates this construct by revealing that software engineers assess LLMs based on their potential to streamline coding processes, enhance code quality, and expedite project timelines. Furthermore, the focus on practicality and efficiency found in our study underscores the importance of this construct in influencing the adoption of LLMs.
Compatibility factors, informed by the Diffusion of Innovation theory, illustrate the degree to which LLMs align with the existing values, experiences, and needs of potential adopters. Our qualitative investigation underpins this construct by highlighting the importance of the fit of LLMs within current software development workflows. Developers who perceive a high degree of fit, irrespective of the technology's cutting-edge nature, are more likely to adopt LLMs, reinforcing the relevance of this construct.
Social factors, drawing on UTAUT and the concept of computer self-efficacy, emphasize the influence of the social environment and an individual's belief in their abilities to use LLMs. The investigation's findings lend weight to this construct by indicating that the importance of being seen as cutting-edge and the developer's self-efficacy could drive LLM adoption. The role of peer approval and the belief in one's competence to master LLMs surfaced as significant influencers, further establishing this construct's salience in the HACAF model.
Personal and environmental factors bring into play the role of personal innovativeness and organizational support. Personal innovativeness represents an individual's predisposition towards new technologies, and organizational support captures the perceived facilitating conditions within an organization for the use of LLMs. Both elements emerged as significant in our investigation. Our data evidenced how individual readiness to experiment with LLMs and the organization's stance towards LLMs, ranging from active support to outright opposition, directly influence the likelihood of LLM adoption.
In conclusion, the HACAF model, informed and justified by our qualitative investigation and underpinned by established theoretical constructs, offers a nuanced understanding of the interplay of individual and organizational factors influencing the adoption of LLMs in software engineering. By grounding this framework in both empirical evidence and theory, we aim to provide a solid foundation for further empirical scrutiny and contribute to the understanding of AI technology adoption dynamics.
§.§ Theoretical Model and Hypotheses
The previous section's theoretical foundation, bolstered by the qualitative investigation, serves as the basis for operationalizing the Human-AI Collaboration and Adaptation Framework (HACAF). We now translate these theoretical constructs into four main determinants of the intention to use LLMs (perceptions about the technology, compatibility factors, social factors, and personal and environmental factors) and formulate hypotheses based on them, as represented graphically in Figure <ref>.
Perceptions about the technology encapsulate the perceived usefulness, ease of use, and relative advantage of LLMs. As our qualitative data revealed, LLMs' perceived usefulness and relative advantage, such as streamlining coding processes and enhancing code quality, have a significant bearing on their adoption. The ease of use was another critical factor, as intuitive and user-friendly LLMs are more likely to be adopted by software engineers. Thus, drawing upon the Technology Acceptance Model (TAM), we propose:
H1: Positive perceptions about the technology (PT), encompassing perceived usefulness, ease of use, and relative advantage, will increase the Intention to Use (IU) LLMs in a software engineering context.
Compatibility factors address the extent to which LLMs align with the existing values, experiences, and needs of potential adopters. As the qualitative investigation underlines, LLMs' compatibility with current software development workflows significantly influences adoption decisions. Software engineers are more likely to adopt LLMs when they perceive a high degree of fit with their work practices. Following this and the Diffusion of Innovation theory, we hypothesize:
H2: Positive perceptions about the technology (PT) will enhance the Compatibility Factors (CF).
H3: Enhanced compatibility factors (CF) will in turn increase the Intention to Use (IU) LLMs.
Social factors, which include social influence and self-efficacy, play an important role in adoption decisions. Our qualitative study emphasized that peer approval and an individual's belief in their ability to use LLMs could significantly influence the intention to use these technologies. Based on this and the Unified Theory of Acceptance and Use of Technology (UTAUT) and the concept of computer self-efficacy, we posit:
H4: Positive perceptions about the technology (PT) will increase Social Factors (SF).
H5: Increased social factors (SF) will enhance the Intention to Use (IU) LLMs.
Finally, Personal and environmental factors, including personal innovativeness and organizational support, contribute to the complexity of the model. As underscored by our investigation, an individual's willingness to experiment with LLMs and the perceived supportiveness of the organization can be decisive for LLM adoption. Thus, we hypothesize:
H6: Personal and Environmental Factors (PEF), specifically personal innovativeness and organizational support, will moderate the relationship between Perceptions about the technology and IU LLMs, strengthening the positive effect of Perceptions about the technology on Intention to Use LLMs.
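For exposition, the hypothesized structure can be summarized as a system of standardized equations, with the moderation in H6 represented, as is conventional, by an interaction term; the coefficient symbols below are introduced here for illustration only and do not appear in the survey instrument:

CF = \beta_2\, PT + \varepsilon_{CF} \quad (H2)

SF = \beta_4\, PT + \varepsilon_{SF} \quad (H4)

IU = \beta_1\, PT + \beta_3\, CF + \beta_5\, SF + \beta_6\,(PT \times PEF) + \varepsilon_{IU} \quad (H1, H3, H5, H6)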
§ THEORY VALIDATION
In the next part of this study, we performed an empirical verification of our research hypotheses through a sample study, the most appropriate method for extending a research model. The purpose was to corroborate our qualitative conclusions through a quantitative examination.
§.§ Partial Least Squares – Structural Equation Modeling
Partial Least Squares – Structural Equation Modeling (PLS-SEM) is a multivariate statistical technique designed to estimate latent (unobserved) variables, or constructs, through multiple observable indicators. This method is particularly useful for theory development studies and is increasingly adopted in empirical software engineering <cit.>. PLS-SEM can address multiple interconnected research questions in one comprehensive analysis, making it a popular choice in fields such as Management, Information Systems Research, and Organizational Behavior. As Gefen et al. suggest, SEM is commonly employed to validate instruments and verify relationships between constructs <cit.>. The subsequent evaluation and analysis of the PLS-SEM model adhere to the latest guidelines and recommendations for research in software engineering by Russo & Stol <cit.>.
§.§.§ Scale Development
The survey was grounded in established theory: we structured it by adapting instruments from previous research. All items utilized to define each construct and the references used to shape the questions are summarized in Table <ref>. Each construct was assessed through uni-dimensional items on a 7-point Likert scale indicating levels of agreement.
Initially, a pre-test was conducted with three potential respondents (software professionals) to assess the survey's usability, reasoning, and phrasing. The usability received positive feedback, and some minor issues with the reasoning and phrasing were identified and subsequently addressed.
§.§.§ Survey Data Collection
The minimum sample size was determined by conducting an a priori power analysis with G*Power. With an effect size of 0.15, a significance level of 5%, and a power of 95%, the smallest sample size required for seven predictors is 153.
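For transparency, this a priori computation can be approximated outside G*Power. The sketch below assumes the reported effect size refers to Cohen's f^2 = 0.15 and uses G*Power's noncentrality parameterization (\lambda = f^2 N) for the fixed-effects multiple regression F-test; it is an illustration, not the original analysis script.

from scipy.stats import f as f_dist, ncf

def achieved_power(n, predictors=7, f2=0.15, alpha=0.05):
    # Power of the F-test for R^2 deviation from zero with n observations.
    df1, df2 = predictors, n - predictors - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)        # critical F under the null
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)    # noncentral F under the alternative

n = 10
while achieved_power(n) < 0.95:                   # smallest N reaching 95% power
    n += 1
print(n)  # should reproduce the required sample size of 153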
A cluster sampling strategy was utilized for data collection, facilitated through the academic data collection platform, Prolific, which boasts more than 120,000 active users. Compared to options like mailing lists, Prolific offers superior reliability, replicability, and data quality, making it a preferred choice in the field of computer science. The survey was delivered via Qualtrics, randomizing the order of questions within their blocks to reduce response bias.
A multi-phase screening process was implemented to ensure the integrity of the collected data. Data collection was carried out between April and June 2023.
Pre-screening. In the initial phase, participants were selected based on self-reported characteristics: proficiency in computer programming, full-time employment in the software industry, non-student status, and an approval rate of 100%. The latter signifies that we only included participants previously rated as reliable and high-quality by other researchers using the platform. A total of 831 potential participants were included.
Competence Screening. To verify the accuracy of our pool, participants were asked to complete a questionnaire containing three competency-based questions about software design and programming, and to indicate that they were at least somewhat familiar with, and had used, Generative AI tools. This helped confirm the reliability of their self-reported skills and reduced the pool to 606 participants.
Quality & Competence Screening. To further ensure the accuracy of our pool, we added a single-item screening question from Danilova et al. <cit.>, asking “Which of these websites do you most frequently use as aid when programming?”, with `Stack Overflow' as the correct answer. Additionally, we added three random attention checks to safeguard data quality. A total of 220 completed questionnaires were received, but 36 were discarded due to failure on at least one attention check. This left us with 184 valid and complete responses, surpassing the minimum sample size.
§.§.§ Participant Demographics
Our survey encompassed a diverse set of 184 respondents, comprising 80% males, 18% females, 1% non-binary individuals, and 1% who preferred not to disclose their gender. The respondents were drawn from a broad geographic pool spanning 27 unique countries, with the most populous responses originating from the UK (24%), South Africa (13%), Poland (11%), Germany (11%), and the United States of America (7%).
In terms of work tenure, the median experience among participants was three years. The majority, 125 respondents, were relatively early in their careers, with 1 to 5 years of experience. Forty participants reported a more substantial work experience ranging between 6 to 15 years. An additional 14 participants had an extensive work experience of 16 to 30 years, and a handful of respondents, 5 in total, possessed more than 30 years of experience.
Our sample prominently featured individuals from the software development sector, making up 66% of all respondents. Additionally, 12% of respondents held data analysis, engineering, or science roles. A smaller segment, 8%, held leadership roles such as Team Leads or CIOs. The remaining respondents included Tester / QA Engineers (6%), DevOps/Infrastructure Engineers (3%), Architects (2%), UX/UI Designers (2%), and other roles (2%).
§.§ Evaluation of the Measurement Model
In order to ensure the validity and reliability of our structural model, it is paramount to first evaluate the reliability of the latent variables. Consequently, we begin with discriminant validity, internal consistency reliability, and convergent validity.
§.§.§ Discriminant Validity
In this context, discriminant validity refers to the distinctness or uniqueness of one latent variable compared to another. It is an essential check of whether two constructs intended to capture different facets of knowledge are in fact empirically distinct rather than essentially the same. For its evaluation, we utilized the Heterotrait-Monotrait ratio of correlations (HTMT), which is recognized for its superior performance over other tests such as the Fornell-Larcker criterion. HTMT values should ideally be below 0.90.
Table <ref> reveals that all coefficients fall beneath the predefined threshold, which suggests that every construct in the model represents a distinct phenomenon.
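For reference, the HTMT statistic for two constructs i and j is the ratio of the mean correlation between their items (heterotrait-heteromethod) to the geometric mean of the average within-construct item correlations (monotrait-heteromethod):

\mathrm{HTMT}_{ij} = \frac{\bar{r}_{ij}}{\sqrt{\bar{r}_{ii}\,\bar{r}_{jj}}} < 0.90,

where \bar{r}_{ij} denotes the average correlation between the items of constructs i and j, and \bar{r}_{ii}, \bar{r}_{jj} denote the average correlations among the items within each construct.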
§.§.§ Internal Consistency Reliability
This test seeks to confirm that the items are gauging the latent variables in a consistent and reliable manner. As such, we refer to the Cronbach's Alpha, rho_a, and rho_c values showcased in Table <ref>, all of which should exceed 0.60 <cit.>.
We can therefore conclude that our constructs meet the reliability criteria.
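For completeness, the two bounding reliability statistics follow their standard definitions, with k items, item variances \sigma_i^2, total score variance \sigma_t^2, and standardized outer loadings \lambda_i; rho_a, which typically lies between the two, follows Dijkstra and Henseler's consistent reliability formula and is omitted here:

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_t^2}\right), \qquad \rho_c = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^2}{\left(\sum_{i=1}^{k}\lambda_i\right)^2 + \sum_{i=1}^{k}\left(1-\lambda_i^2\right)}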
§.§.§ Convergent Validity
The final validity assessment examines the extent of correlations between various items and their corresponding construct. It is noteworthy that our latent variables are reflectively measured (Mode A)[For a comprehensive comparison between reflective and formative measures, see Russo & Stol (2021).]. As a result, these indicators should demonstrate a substantial variance proportion by converging on their latent variables. Two tests were employed to verify this assumption.
The first test involves the average variance extracted (AVE), which should register a value exceeding 0.5 <cit.>. The second test requires that each indicator's outer loading accounts for at least 50% of that indicator's variance; indicator reliability is therefore established when the loading exceeds the square root of 0.5, i.e., approximately 0.7.
Table <ref> encapsulates the results of the indicators' reliability using cross-loadings. Items that did not contribute significantly to the variance were excluded from our model during the analysis phase (a complete list of excluded items can be found in Table <ref>). Consequently, an improvement in the AVE was noted, thereby reinforcing the model's robustness.
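Concretely, for a construct measured by k standardized indicators with outer loadings \lambda_i, the two convergent validity criteria reduce to:

\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^2 > 0.5, \qquad \lambda_i^2 \ge 0.5 \;\Leftrightarrow\; \lambda_i \ge \sqrt{0.5} \approx 0.707,

which is why indicators with loadings below roughly 0.7 were candidates for removal.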
§.§ Evaluation of the Structural Model
After ensuring the reliability of all our constructs through our Measurement Model assessment, we can now shift our focus towards evaluating the Structural Model, graphically represented in Figure <ref>. This evaluation is pivotal in discussing the predictive power of our model and validating our research hypotheses.
§.§.§ Collinearity Analysis
Initially, we analyze the correlation between the exogenous variable (Personal and environmental factors) and the other endogenous variables. These should be independent to prevent any potential bias in the path estimations. The Variance Inflation Factor (VIF) test, which detects multicollinearity (i.e., an extreme degree of collinearity), should yield values under five <cit.>. Our VIF values are below this threshold, ranging from 2.034 (for the PEF_4 item) to 4.710 (IU_3). Consequently, we conclude that our model does not suffer from multicollinearity issues.
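As an illustration, the VIF check can be reproduced with standard tooling; the file name and column names below are placeholders, and the snippet is a sketch rather than our analysis script (SmartPLS reports these values directly).

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical export of standardized indicator or latent variable scores.
scores = pd.read_csv("construct_scores.csv")
X = sm.add_constant(scores[["PT", "CF", "SF", "PEF"]])
vif = {name: variance_inflation_factor(X.values, i)
       for i, name in enumerate(X.columns) if name != "const"}
print(vif)  # values below 5 suggest no problematic collinearity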
§.§.§ Path Relations: Significance and Relevance
Path coefficients represent the hypothesized relationships between latent variables. They are standardized, meaning their values range from -1 to +1. Because PLS-SEM makes no distributional assumptions, parametric tests cannot be used to assess significance. As a workaround, we use a two-tailed bootstrapping procedure with 5,000 subsamples drawn with replacement.
Details of the bootstrapping results are presented in Table <ref>, which includes the bootstrapping coefficients, mean, standard deviation, T statistics, and p-values corresponding to each of our seven hypotheses.
Our analysis reveals that the majority of the hypothesized relationships are statistically significant. Specifically, four relationships have p-values below 0.05 and T statistics above 1.96, indicative of significance at the 5% level, as noted by Hair et al.
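The resampling logic behind these tests can be conveyed in a few lines. The sketch below uses a standardized OLS slope as a stand-in for a PLS path coefficient; it illustrates case resampling with replacement and the two-tailed decision rule, and does not replicate the SmartPLS algorithm.

import numpy as np

def bootstrap_path(x, y, n_boot=5000, seed=1):
    """Bootstrap a standardized slope (stand-in for a PLS path coefficient)."""
    rng = np.random.default_rng(seed)
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    estimate = np.polyfit(x, y, 1)[0]
    draws = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(x), len(x))   # resample cases with replacement
        draws[b] = np.polyfit(x[idx], y[idx], 1)[0]
    se = draws.std(ddof=1)
    t_stat = estimate / se                      # |t| > 1.96 indicates significance at the 5% level
    return estimate, se, t_stat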
§.§.§ Evaluation of Determination Coefficients and Effect Sizes
Having established the significance of the majority of our hypotheses, we now proceed to the final phase of our study, which centers on the predictive power of the endogenous constructs, shown in Table <ref>. Predictive capacity is quantified through the variance explained (R^2) by the endogenous constructs; R^2 signifies the proportion of the variance in the dependent variable that can be predicted from the independent variables. Because R^2 mechanically increases with the number of predictors, it is prudent to also consider the adjusted R^2, which accounts for the number of predictors in the model. The values of both metrics lie between 0 and 1. Determining a standard for R^2 can be challenging, as its interpretation heavily depends on the topic at hand <cit.>, but it is generally accepted that it should exceed 0.19 <cit.>.
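For n observations and k predictors, the adjustment applied is the standard one:

R^2_{\mathrm{adj}} = 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1}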
§.§.§ Evaluating Predictive Efficacy
Given that our study is oriented towards prediction rather than causal explanation, we evaluated predictive performance employing PLSpredict <cit.>. This approach tests whether a model developed on a training sample can predict the outcomes of a holdout (test) sample. Our sample was segmented into ten folds, and ten repetitions were used to derive the PLSpredict statistics. The results were interpreted based on the guidelines proposed by Shmueli et al. <cit.>. Table <ref> portrays the latent variables' predictive accuracy. Notably, all variables display a strongly positive Q^2_predict, indicating robust predictive performance of the model.
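For reference, the statistic reported by PLSpredict is the usual out-of-sample Q^2, computed from the holdout prediction errors against a naive benchmark that predicts the training-sample mean:

Q^2_{\mathrm{predict}} = 1 - \frac{\sum_{i}\left(y_i - \hat{y}_i\right)^2}{\sum_{i}\left(y_i - \bar{y}_{\mathrm{train}}\right)^2},

so any value above zero indicates that the model predicts better than the naive benchmark.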
§.§.§ Assessing Predictive Consistency
Our final examination focuses on the predictive consistency of our model, for which we scrutinize the effect sizes (f^2) as illustrated in Table <ref>. This assessment involves understanding the impacts of the various relationships within the model. The threshold values for effect sizes are set at 0.02, 0.15, and 0.35, corresponding to small, medium, and large effects, respectively <cit.>. Here, the relationship with the strongest effect is that between compatibility factors and the intention to use LLMs[Although larger effect sizes are not inherently problematic, they can occasionally suggest a potential risk of overfitting. However, we have performed a comprehensive examination of this potential issue in Appendix <ref> and concluded that overfitting is not present in our model.].
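The effect size of a predictor is obtained by comparing the explained variance of the endogenous construct when the predictor is included and when it is omitted:

f^2 = \frac{R^2_{\mathrm{included}} - R^2_{\mathrm{excluded}}}{1 - R^2_{\mathrm{included}}}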
§.§ Determining Key Factors for Adoption: An Analysis using Importance-Performance Map
This focused study specifically explores the elements influencing the adoption and intended use of Generative AI tools in the field of software engineering. Using the Importance-Performance Map Analysis (IPMA) methodology, we have combined the analysis of both the importance and performance dimensions derived from the PLS-SEM investigation <cit.>. This approach allows us to determine the extent to which various constructs contribute to the enhancement of the target construct - in this case, the Intention to Use (IU) and Adoption of Generative AI tools. It provides strategic insights by identifying which constructs are most significant and which ones demand improvements in performance.
Table <ref> shows that all identified constructs, namely Compatibility Factors (CF), Personal and Environmental Factors (PEF), Perceptions about the Technology (PT), and Social Factors (SF), demonstrate robust performance, all exceeding 65%, with PT and CF even surpassing the 83% mark. This result is noteworthy, especially considering that established models such as the technology acceptance model typically show construct performance in the range of 50% to 70% <cit.>.
The importance of the individual constructs, as shown in Table <ref>, is consistent with that observed in mature models, with values between 0.110 and 0.767. In particular, PT and CF emerge as the most significant constructs influencing IU. This finding emphasizes the role of perceptions about the technology and compatibility factors in the intention to use and the eventual adoption of Generative AI tools in software engineering.
Figure <ref> provides a visualization of the interplay between the importance and performance of these constructs. For instance, a one-unit increase in the performance of PT (from, say, 85.138 to 86.138) would improve IU by the total effect of PT on IU, which is 0.540. This suggests that if the goal is to increase IU and adoption, emphasis should be placed on enhancing PT, given its high importance. Similarly, CF plays a crucial role in supporting AI adoption. On the other hand, constructs such as SF and PEF, despite their role, appear less critical to the intention to use and adoption of Generative AI tools.
§ DISCUSSION
In our investigation of Generative AI adoption in software engineering, we developed the Human-AI Collaboration and Adaptation Framework (HACAF) to dissect the interplay between perceptions about the technology, compatibility factors, social factors, and personal and environmental factors. The results revealed a complex landscape, with surprising deviations from traditional technology acceptance theories.
Table <ref> summarizes our key findings and implications.
Our qualitative study was foundational in the development of our framework, which stresses the impact of perceptions about the technology and compatibility factors in shaping the intention to use LLMs. However, the subsequent quantitative study upended the expectation that these perceptions directly influenced intention to use, thus contradicting the traditional Technology Acceptance Model (TAM) <cit.>. This suggests that, when adopting LLMs, the compatibility with existing work processes significantly impacts perceptions about the technology.
The central role of compatibility factors aligns with previous findings <cit.> and reaffirms the essential need for technological alignment with current practices. Our study revealed that software engineers are more inclined to adopt LLMs when the technology fits seamlessly into their existing workflows, a finding in line with prior research <cit.>.
In contrast to expectations, the quantitative investigation indicated that social factors did not significantly contribute to the intention to use LLMs. This deviation from previous studies <cit.> underlines the nuanced influences of social aspects and self-efficacy on the adoption process.
Personal and environmental factors, while influential in shaping perceptions about the technology, did not directly impact the intention to use. This observation supports the assertion by <cit.> that personal innovativeness might mold perceptions about a technology but does not necessarily guarantee adoption.
The insights derived from our exploration of LLM adoption using the HACAF theory both diverge from and extend beyond established theoretical frameworks like TAM, DOI, and SCT, shedding light on the complex dynamics at play. These findings carry both academic and practical implications, underlining the necessity for holistic strategies that consider individual perceptions and compatibility factors for promoting LLM adoption.
Our research places a clear emphasis on Compatibility Factors as primary drivers in the adoption process of Generative AI technologies. Compatibility factors, in essence, measure how well a new technology fits into an individual or organization's existing framework—both in terms of practical workflow and broader values. They are essential in determining the successful adoption of technologies, such as Generative AI.
Seamless integration within existing workflows is the core driver of AI adoption. Workflows refer to the defined sequence of tasks that a software engineering team performs regularly, such as coding, debugging, testing, and deployment. If an LLM can integrate smoothly into these steps, for example by automating certain coding tasks or improving debugging efficiency, it is seen as highly compatible. Conversely, if the integration of an LLM disrupts these established procedures or necessitates significant changes to existing operations, it might be deemed less compatible, and its adoption may face resistance.
This idea extends beyond workflows to the broader technological environment. If the LLM requires operating systems or hardware that are not in use, or if it relies on knowledge or skills that the team does not possess, the tool becomes less compatible. Technological compatibility and alignment with existing skills are therefore vital considerations.
In other words, even if a tool holds significant potential utility, its adoption becomes less likely when an individual or organization cannot see how it fits within their existing framework, that is, their workflow, technical environment, or value system. This insight brings into focus the critical role of compatibility in designing and promoting new technology. For successful integration within the software engineering industry, Generative AI tools should be designed with a deep understanding of existing systems, practices, and values in mind.
Another noteworthy aspect of our findings is the non-significant relationship between Social Factors and Intention to Use, despite the positive influence of perceptions about the technology on Social Factors. This unexpected finding could be contextualized by considering the nascent stage of the Generative AI transformation within the industry. Despite the recognition of the potential benefits and enhancements to self-efficacy offered by LLMs, this does not necessarily catalyze immediate widespread adoption, suggesting that mere perceptions about a technology's potential advantages may not be sufficient to prompt its adoption.
Interestingly, we found that Personal and Environmental Factors did not directly lead to the Intention to Use, stressing the early stage of LLM adoption. In this initial phase, personal innovativeness and organizational support appear to exert limited influence on adoption, placing the emphasis squarely on Compatibility Factors. This suggests that as AI tools are more seamlessly integrated within existing workflows, their likelihood of adoption increases.
It is critical to acknowledge that we are currently in the nascent stages of the Generative AI transformation. As the utilization of these tools gains wider traction over time, we anticipate that the relationships within HACAF will be further corroborated. The current non-validation of all relationships within our framework model does not detract from its utility but offers initial insights into the factors shaping the adoption of Generative AI tools at this stage.
This study's significance lies not only in its immediate findings but also in establishing a foundation for understanding software developers' perceptions of AI during this early phase. This insight is indispensable for guiding the design and implementation of these tools to align with user needs and expectations.
Lastly, while this study offers an important snapshot of the current state of Generative AI adoption in software development, a comprehensive assessment of the HACAF model necessitates long-term and longitudinal studies. Such investigations would facilitate a more profound understanding of the evolution and interplay of the factors affecting adoption as the technology matures and gains wider adoption. Longitudinal studies will provide nuanced insights into changing adoption patterns, further refining the HACAF model over time, and contributing to a more sophisticated understanding of technology adoption phenomena.
§.§ Implications
The findings of this study have far-reaching implications for practitioners and researchers in software engineering and artificial intelligence, offering new insights from the perspective of the Human-AI Collaboration and Adaptation Framework (HACAF). The integration of AI, specifically Generative AI tools like Large Language Models (LLMs), into software development processes has shown promise, leading to potential advancements in various aspects of the field.
A key implication of our study is the need for organizations to consider investing in AI-driven tools that fit well within existing development workflows. Such alignment can lead to improved decision-making, better resource allocation, and process optimization. It can potentially yield outcomes like reduced development time, enhanced software quality, and increased user satisfaction. Organizations, therefore, need to deliberate on the benefits of adopting AI-driven solutions, especially those tailored to their specific needs and contexts.
Given our findings on the importance of Compatibility Factors in driving AI adoption, there is an urgent need for continuous education and training in AI as it becomes more prevalent in software engineering. As AI matures and evolves, organizations should prioritize ongoing training to keep their employees updated with the latest AI technologies and their applications in software engineering. Concurrently, academic institutions need to ensure that their software engineering curricula reflect these changes, adequately preparing future professionals for the rapidly evolving industry landscape.
Our study also underscores the significance of the human-in-the-loop approach when incorporating AI in software engineering. AI systems should be designed to enhance human capabilities and foster effective collaboration between developers and AI tools. As such, a transparent decision-making process, allowing developers to calibrate AI systems based on their expertise and experience, is pivotal. This accentuates the necessity for research and development aimed at creating AI solutions that complement human capabilities, as opposed to replacing them.
Importantly, the successful deployment of AI in software engineering relies heavily on aligning AI techniques with the unique needs and contexts of each organization. A careful assessment of requirements, resources, and constraints should precede the adoption of any AI-driven solution. This guarantees that the selected technologies can be effectively harnessed for process improvement and the attainment of desired outcomes. A context-driven approach to selecting AI techniques and tools is recommended, considering the unique characteristics of each software development environment.
In summary, the integration of AI techniques, especially Generative AI tools like LLMs, in software engineering holds immense potential for enhancing the development process and improving overall software quality. By acknowledging and addressing the challenges associated with AI adoption—taking into consideration our findings on the importance of Compatibility Factors—organizations can effectively leverage AI-driven tools and methodologies to realize superior outcomes in their software development pursuits.
§.§ Limitations
We discuss threats to validity in relation to both qualitative and quantitative validity frameworks, as advised by earlier research <cit.>. We begin with credibility, transferability, and confirmability for the qualitative examination <cit.>.
Credibility. The principal variables of our investigation were informed by the three core theoretical models that assess individual, technological, and social-level influences. These models are the Technology Acceptance Model (TAM), the Diffusion of Innovations theory (DOI), and the Social Cognitive Theory (SCT). By integrating these thoroughly validated theories into our study, we could deeply understand the factors influencing language model adoption, and further probe how these variables are implemented within the software engineering domain. Moreover, to ensure the credibility of our data, informants underwent a rigorous multistage selection process, verifying their roles as software engineers actively working with Generative AI tools. This meticulous selection process fortified the integrity of our study and the resulting insights.
Transferability and Confirmability. Our qualitative data analysis was performed by a single researcher, applying the Gioia Methodology to the collected data. The utilization of a single researcher for data analysis could be perceived as a limitation due to potential biases and subjectivity. However, employing the Gioia Methodology significantly mitigates these biases, contributing to the trustworthiness of our findings. The Gioia methodology provides a structured approach that emphasizes transparency, iterative categorization, and systematic data processing, which lends itself to limiting subjective bias <cit.>.
Moreover, our research merged qualitative data with a sample study to bolster the transferability of our findings. Transferability, as defined in qualitative research, refers to the extent to which the results can be applied in other contexts or with other respondents <cit.>. By incorporating a sample study, we provided a broader base from which parallels could be drawn, thereby enhancing the applicability of our research to varied contexts.
Despite surveying a broad group of participants, limitations were present as we were unable to conduct follow-up inquiries. This may potentially affect the depth and comprehensiveness of our data. Yet, our utilization of the Gioia Methodology and the combination of qualitative and sample study data have fortified the structure and interpretative robustness of our data.
Furthermore, we delve into statistical conclusion, internal, construct, and external validity to assess the quantitative investigation <cit.>.
Internal. Our research model was validated using a cluster-randomized probability sampling strategy <cit.>. Because of feasibility issues, we selected a cluster of the global population (i.e., the Prolific community) instead of the entire population. Although less accurate than random sampling, cluster sampling is more cost-efficient. In response to Baltes and Ralph's call, which noted that less than 10% of software engineering studies in top venues utilize probability sampling <cit.>, we designed our study accordingly. Our data quality was boosted by a multi-stage process in which only 184 carefully selected professionals were chosen out of the 831 potential candidates identified (approximately 22% of the initial candidates). Nevertheless, our sample is not representative of the software engineering population.
External. The generalization of our findings was a significant concern during the PLS-SEM analysis as sample studies are best suited for theory generalization <cit.>. We gathered 184 responses, an ample size considering the a priori power study we conducted prior to data collection.
Construct. Constructs were gauged via a single-informant approach, embodying a software engineer's viewpoint. Additionally, we employed self-reported measures, asking participants to express their agreement level on literature-derived indicators. However, these questions might not have been accurately answered. To counteract these limitations, we introduced three random attention checks, which eleven candidates failed. Furthermore, we adjusted our measurement instrument based on pre-existing ones. Lastly, we randomized the questionnaire and tested it for clarity and consistency to manage potential accuracy biases.
Statistical conclusion. We processed the survey results using Partial Least Squares – Structural Equation Modelling with the renowned statistical software SmartPLS (4.0.9.5), which has been utilized in over 1,000 peer-reviewed articles <cit.>. All statistical methods and tests employed for the PLS-SEM analysis are in line with the most recent guidelines in our field <cit.>.
§ CONCLUSION
This study presents a mixed-methods investigation about the adoption of Generative AI tools within the field of software engineering. By developing the Human-AI Collaboration and Adaptation Framework (HACAF), our research provides a nuanced understanding of the complexities involved in the adoption of such technologies, particularly during these nascent stages of the Generative AI transformation.
Our findings shed light on the pivotal role of Compatibility Factors, emphasizing the need for AI tools to fit within existing development workflows to enhance their adoption. This points towards the understanding that adoption is not driven solely by the perceived benefits of the technology, but by its seamless integration into the user's pre-established work processes.
Beyond its theoretical contributions, this study has significant practical implications. It offers early insights into software developers' perceptions of AI, providing valuable pointers for the design and refinement of user-focused AI tools. These insights can help foster a more widespread adoption of AI tools by addressing developer concerns and optimizing tool compatibility with existing workflows.
As we look forward, future research in the nascent field of AI and software engineering can build upon the foundation laid by this study.
While this study delivers an important snapshot of the current state of Generative AI adoption in software development, it is crucial to recognize that a comprehensive assessment of the HACAF model necessitates long-term and longitudinal studies. Such investigations would permit a deeper understanding of the evolution and interplay of factors influencing adoption as the technology matures and gains broader acceptance. These longitudinal studies would offer nuanced insights into the changing adoption patterns, thus allowing the continuous refinement of the HACAF model and contributing to a more sophisticated understanding of technology adoption phenomena.
§ SUPPLEMENTARY MATERIALS
The PLS-SEM computational tables, raw data, the survey instruments, and the overfitting analysis are openly available under a CC BY 4.0 license on Zenodo, DOI: <https://doi.org/10.5281/zenodo.8124332>.
§ ACKNOWLEDGMENT
This work was supported by the Danish Industry Foundation with the Sb3D project — Security by Design in Digital Denmark.
ChatGPT-4 has been used to ensure linguistic accuracy and enhance the readability of this article.
§ APPENDIX A (SURVEY INSTRUMENT)
§ APPENDIX B (OVERFITTING ANALYSIS)
The notably high effect size observed between `Perception about the Technology' and `Compatibility Factors' might raise concerns of potential overfitting, a scenario where the model inadvertently learns noise in the training data, thereby compromising its ability to accurately predict unseen data <cit.>. Given the potential implications of overfitting on the reliability of our model, it is crucial to thoroughly investigate this issue.
The details of this analysis, along with the associated code and data, can be found in the online supplementary materials hosted on Zenodo[Link to the replication package: https://doi.org/10.5281/zenodo.8124332.].
First, we analyzed the residuals of our model, which can provide insights into the appropriateness of the model fit. The residuals represent the difference between the observed and predicted values for the dependent variable. In a well-specified model, we would expect the residuals to be randomly scattered around zero, with no apparent pattern. This would suggest that the model's errors are random, and that the model is correctly specified.
We plotted the residuals against the predicted values for the 'Intention to Use' construct and found that they were indeed mostly randomly scattered, suggesting that our model's errors are random. Figure <ref> shows this residuals plot.
However, to evaluate the risk of overfitting more thoroughly, we employed two key methodologies: a train-test split and cross-validation <cit.>. In the train-test split, we partitioned our data into a training set (80% of the data) and a testing set (20% of the data). Our model was trained on the training data and then evaluated on the unseen testing data.
The model's performance was assessed using the mean squared error (MSE), a metric that calculates the average squared difference between the observed and predicted values. A lower MSE indicates a better fit to the data. The MSE for the test set was approximately 0.523.
To further examine overfitting, we implemented k-fold cross-validation with k set to 5, a common choice for this parameter <cit.>. In this approach, the data was divided into 5 subsets, and the model was trained and tested 5 times, each time on a different subset of the data. The performance of the model was again assessed using the MSE. The mean MSE from the cross-validation was approximately 0.676, reasonably close to the test MSE, suggesting that our model is not overfitting the data.
In conclusion, both the train-test split and cross-validation results suggest that our model is generalizing effectively to unseen data and is not overfitting.
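For readers who wish to replicate these checks outside of SmartPLS, a minimal sketch is given below. The synthetic data, the variable names, and the use of an ordinary least-squares regression as a stand-in for the structural model are illustrative assumptions; the actual analysis in the replication package operates on the latent-variable scores of the HACAF model.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold, cross_val_score, train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(184, 3))              # stand-in predictor scores (e.g., compatibility factors)
y = X @ np.array([0.6, 0.2, 0.1]) + rng.normal(scale=0.7, size=184)   # stand-in 'Intention to Use'

# 80/20 train-test split: fit on the training set, score on the held-out set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
test_mse = mean_squared_error(y_te, LinearRegression().fit(X_tr, y_tr).predict(X_te))

# 5-fold cross-validation: average the MSE over the five held-out folds.
cv_mse = -cross_val_score(LinearRegression(), X, y,
                          cv=KFold(n_splits=5, shuffle=True, random_state=42),
                          scoring="neg_mean_squared_error").mean()

print(f"test MSE = {test_mse:.3f}, mean CV MSE = {cv_mse:.3f}")

A test MSE close to the mean cross-validation MSE, as in the figures reported above, is the pattern one expects when the model is not overfitting.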
|
http://arxiv.org/abs/2307.04589v1 | 20230710143040 | Harnessing the Power of Swarm Satellite Networks with Wideband Distributed Beamforming | [
"Juan Carlos Merlano Duncan",
"Vu Nguyen Ha",
"Jevgenij Krivochiza",
"Rakesh Palisetty",
"Geoffrey Eappen",
"Juan Andres Vasquez",
"Wallace Alves Martins",
"Symeon Chatzinotas",
"Björn Ottersten"
] | eess.SP | [
"eess.SP"
] | |
http://arxiv.org/abs/2307.04326v1 | 20230710033943 | Automotive Radar Mutual Interference Mitigation Based on Hough Transform in Time-Frequency Domain | [
"Yanbing Li",
"Weichuan Zhang",
"Lianying Ji"
] | eess.SP | [
"eess.SP"
] |
Automotive Radar Mutual Interference Mitigation Based on Hough Transform in Time-Frequency Domain
Yanbing Li, Member, IEEE,
Weichuan Zhang, Member, IEEE,
and Lianying Ji
This work was supported by the Fundamental Research Funds for the Central Universities 2022RC008.
Yanbing Li is with the School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China (e-mail: [email protected]).
Weichuan Zhang is with the Institute for Integrated and Intelligent Systems, Griffith University, QLD, Australia. (e-mail: [email protected]).
Lianying Ji is with the Beijing Muniu Linghang Technology Company, Beijing, 100192, China (e-mail: [email protected]).
August 12, 2023
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
With the development of autonomous driving technology, automotive radar has received unprecedented attention
due to its day-and-night and all-weather working capability. It is worthwhile to note that more and more vehicles are equipped with automotive radars, resulting in mutual interference between radars. The interference reduces radar target detection performance, making perception information unreliable. In this paper, a novel interference mitigation method based on the power-weighted Hough transform is proposed for suppressing radar mutual interference and improving the safety of autonomous driving systems. Firstly, the frequency modulation characteristics of interference signals and target echo signals are analyzed, and the differences between the two signals are introduced. Secondly, based on a straight-line detection technique, the power of the mutual interference signal in the time-frequency domain is accumulated, and the accurate position of the interference is located. Finally, the target echo is recovered by an autoregressive model. Compared with existing state-of-the-art methods, the proposed method retains more useful signals after the interference mitigation and achieves better interference detection robustness under low signal-to-noise ratio conditions. Simulation experiments and real scenario experiments verify the effectiveness of the proposed
method and show its superiority.
Automotive radar, Hough transform, interference mitigation, millimeter-wave radar, time-frequency spectrogram.
§ INTRODUCTION
Radar, as an environmental sensing technology, has been introduced in more and more civil fields such as automotive radar, traffic radar, and security radar. On one hand, this is due to the development of chip technology, especially millimeter-wave chip technology. These advances have made it possible to reduce radar design cost and difficulty, which allows radar manufacturers to iterate their products rapidly <cit.>. On the other hand, the trend of intelligence has led to an unprecedented emphasis on perception technology in many aspects of people’s life, which provides necessary and reliable perception data for the post-processing stage.
One of the most representative civilian applications is automotive radar. From low-level assisted driving to high-level autonomous driving, radars are included as an important sensor in autonomous driving solutions <cit.>. It is well known that no single sensor can acquire all desired information well in real-world scenarios under all conditions. Along this way, multi-sensor fusion techniques are increasingly being used for autonomous driving. As one of the three mainstream sensors, i.e., cameras, radars, and lidars, radars are able to work day-and-night and in all weather, a capability not well demonstrated by the other sensors. Meanwhile, radars have an advantage in the radial distance and velocity measurement of targets, which is complementary to the information of the other sensors. A typical autonomous driving solution is equipped with seven millimeter-wave radars in a vehicle, which contains one long-range radar for forward looking, two mid-range radars for both forward and rearward looking, and four short-range radars in the four corners for 360 degree coverage <cit.>. This configuration allows radar sensors on a single vehicle to radiate in all directions on roads. In this case, with the development of autonomous driving, the deployment rate of automotive radars will increase rapidly in the future. As a result, mutual influence among radars becomes inevitable <cit.>. Interference among radars may lead to target detection degradation and increase the likelihood of target loss in severe cases, which is unacceptable for traffic safety <cit.>.
Generally, there are two main categories of radar interference. One category is caused by radar devices interfering with each other, and the other category is spoofing attacks performed by jamming devices. The latter is similar to electronic warfare in military applications and is usually introduced in malicious attacking <cit.>. A research on the suppression of malicious jamming such as digital radio frequency memory (DRFM) jamming is discussed in <cit.>. Compared with spoofing attack jamming, the problem of mutual interference between radars is more common in practical scenarios, especially in high-density traffic flow scenarios.
Much research analyzing the mutual interference of automotive radars can be found in <cit.>. These sources discuss the occurrence probability of mutual interference between radars, calculate the theoretical value of the interference power, and illustrate the interference signal in the time domain, the frequency domain and the time-frequency (TF) domain, respectively. These studies deepen our understanding of the mutual interference of automotive radars and indicate that mutual interference will worsen the signal quality and signal-to-noise ratio (SNR), thereby affecting the target detection ability of radars <cit.>.
Methods used for solving the aforementioned issues can be categorized into two groups according to the degree of dependence on radar system architecture. The first group is coupled with the radar system, and its implementation usually requires specific software and hardware architectures. Approaches based on transmit waveforms such as orthogonal noise and phase-coded are proposed in <cit.>. Another waveform optimization approach is proposed in <cit.>. These methods suppress interference based on the special structure of waveforms. Digital beamforming methods based on radar antenna array structure are discussed in <cit.>, in which interference in specific directions can be suppressed by the directivity of a formed beam. Because interference sources and targets are usually in the same or close direction in a traffic scene, the digital beamforming methods face angle resolution challenges. Inspired by the biological behavior of bats, a heuristic frequency hopping technique is introduced in <cit.>. When interference occurs, a radar with higher frequency shifts its frequency upwards, while a radar with lower frequency shifts its frequency downwards. This strategy has a higher success rate for interference mitigation than random hopping way. Alternately, radar and communication cooperation is employed for solving mutual interference <cit.>. A distributed network protocol that enables the radar and communication systems to work together is designed, then the avoidance of mutual interference among radars can be achieved due to information sharing. The above-mentioned methods can realize interference mitigation by designing specific system functions, which achieves good effect in designated situations. However, these methods require constraints on radar system design, thereby increasing the development cost and difficulty of radar products.
Another group of methods does not customize the radar software and hardware, but uses signal processing techniques, i.e., signal detection and reconstruction, for suppressing interference on the existing radar system architecture, which has good versatility in practice. In terms of the acquisition domain of interference information, these methods can generally be divided into time domain, frequency domain, and TF domain methods. An adaptive noise canceller that uses interference information in negative frequencies to cancel the interference in positive frequencies is proposed in <cit.>. This is a typical implementation of interference mitigation in the frequency domain. Besides frequency domain methods, most of current interference mitigation methods are implemented in the time domain and the TF domain. Zeroing or adding a raised cosine window for the disturbed part of a received signal is adopted in <cit.>. These two ways achieve the attenuation of interference power, yet lose useful signals in the overlapped part with the interference. Wavelet decomposition and denoising is used in <cit.> for removing the interference. Due to the decomposition characteristics of the wavelet transform, useful signals in the undisturbed components can be well retained. Signal reconstruction by autoregressive (AR) model is proposed in <cit.>, which has ability to extrapolate useful signals in the interfered part and retrieve more target information than the zeroing and the windowed methods, however, reconstruction quality will be degraded when a interfered segment is wide. Another signal reconstruction method named iterative method with adaptive thresholding (IMAT) is proposed in <cit.> for overcoming the signal gap introduce by zeroing. The IMAT method is a sparse reconstruction technique from main frequency components essentially. All the methods mentioned above obtain interference information from the time domain, and suppress the interference accordingly.
More recently, research in <cit.> has shown that more structural information of the interference can be observed in the TF domain. In this case, more differences between the target echo and the interference can be extracted in the TF domain than in the time domain. TF analysis of a received signal in an interference scenario is performed for locating the interference time-span region in <cit.>, followed by beat-frequency interpolation for recovering the target echo. Another TF analysis based method is introduced in <cit.>. Here the interference is located by a constant false alarm rate (CFAR) detector, followed by a reconstruction process consisting of zeroing, amplitude correction, and Burg-based signal extrapolation, respectively. Experimental results demonstrate that the methods based on TF analysis are superior in interference mitigation performance to the time domain methods.
Although the existing TF domain methods <cit.> have shown superior performance to the time domain methods <cit.>, it remains to be resolved whether the characteristic information of the interference in the TF domain is fully exploited. For instance, the CFAR based method <cit.> detects and suppresses interference in frequency slices along the TF spectrogram, without considering the time-frequency variation characteristics of the interference. In this case, interference detection is based on the ratio of the interference power at a certain point to the noise level. Good interference detection performance can be obtained under high interference-to-noise ratio (INR) conditions. However, when the interference power is weak, e.g., when the interferer radar is far from the victim radar, the projection of the interference power onto each frequency slice may not be enough to support accurate interference detection in the TF domain. In this way, degraded interference mitigation performance may occur in low INR conditions for the CFAR based method. Based on the aforementioned facts, our main questions are: (1) Is there a joint time and frequency characteristic of the interference in the TF domain, and can this characteristic be effectively extracted for detecting and mitigating the interference? (2) Can the INR be improved for enhancing the interference detection performance? Focusing on these two questions, our research demonstrates that the interference has obvious joint time and frequency structural characteristics on the TF plane, that is, it appears as a straight line with a large slope. In addition, inspired by the incoherent integration method in radar target detection <cit.>, the line structure characteristics of the interference can be used to accumulate the interference power in the TF domain, thus achieving good interference detection performance.
In this paper, the mutual interference of frequency modulated continuous wave (FMCW) automotive radars based on the TF domain is discussed, and a robust interference detection and mitigation approach by power-weighted Hough transform is proposed. To the best of our knowledge, so far there is no research that considers the structure information in the TF domain to robustly detect and locate interference, especially in weak interference and low SNR conditions. Compared with the existing interference mitigation methods based on signal processing technology, the contributions of this paper are as follows:
* The first mutual interference detection method for automotive radar in terms of structure information in the TF domain is proposed. By analyzing interference signals in a radar receiver, we conclude that the interference in baseband has a linear frequency modulation (LFM) characteristic, i.e., it behaves as a straight line in the TF domain. Based on this structure feature, the Hough transform is used to locate the accurate position of the interference in the TF domain.
* For the first time, the way of power accumulation is introduced into the problem of the interference detection in the TF domain. The classical Hough transform is modified for the TF spectrogram of an FMCW radar signal, namely intensity information is introduced into the Hough transform for the power accumulation. After achieving the interference power accumulation in the Hough parameter space, the INR increases, hence improving the stability of the interference detection
* Compared with the interference mitigation methods based on the time domain, the proposed method has the ability to handle the case of multiple interference. Furthermore, the proposed method is also effective when interference duty cycle is large.
The rest of the paper is organized as follows. Section <ref> introduces the signal models of the FMCW radar signal and the mutual interference. Then an interference mitigation algorithm based on power-weighted Hough transform is presented in Section <ref>. Numerical simulations and experimental data based results are shown and discussed in Sections <ref> and <ref>, respectively, to evaluate the interference mitigation performance of the proposed method. Finally, Section <ref> concludes this paper.
§ LINEAR FMCW SIGNAL MODEL IN RADAR MUTUAL INTERFERENCE CASES
§.§ Linear FMCW Signal Model without Interference
An LFM signal, also named a chirp signal, is the most common waveform used in an FMCW radar system in real applications <cit.>. Usually, a set of LFM signal sequences is transmitted from a radar antenna to sense the environment. The single transmitted LFM signal is
s_t(t) = √(2 P_t) cos[2π φ(t)]
       = √(2 P_t) cos[2π (f_c + 1/2 k t) t],
where f_c is the central carrier frequency, P_t is the transmitted power, and k is the chirp rate which equals the ratio of the chirp sweep bandwidth B to the chirp sweep time T, i.e., k=B/T. The frequency of the transmitted signal is
f_t(t) = dφ(t)/dt
       = f_c + k t.
Thus a frequency modulation direction is defined as up-chirp when k>0 and down-chirp when k<0, respectively.
An echo scattered by a target contains added amplitude and Doppler information related to the target’s radar cross section (RCS) and velocity, respectively. For a single-target scenario, the power of the target echo related to free space attenuation is
P_e = P_t G^2 λ^2 σ / ((4π)^3 R^4),
where λ is the wavelength of the transmitted signal, G is the antenna gain on the line of sight (LOS), σ is the target RCS representing the ability to scatter the power of electromagnetic waves, and R is the distance between the radar and the target on the LOS. The target distance causes a delay between the target echo and the radar reference signal, which is
τ = 2(R + v t)/c,
where c is the light speed, v is the relative velocity between the target and the radar on the LOS which causes the Doppler frequency shift. From (<ref>) and (<ref>), the echo with one target is
s_e(t) = √(2 P_e) cos[2π φ(t-τ)].
When multiple targets exist, the echo signal is the superposition of the individual target echoes.
§.§ Linear FMCW Signal Model with Interference
When there is interference, the target echo and the interference are superimposed and then received by the receiver antenna. For a single-interference scenario, without loss of generality, assuming an interferer radar has the same radio frequency (RF) and antenna specifications as the victim radar, i.e., the two radars have the same transmitted power P_t, wavelength λ, and antenna gain G, the interference power at the receiver of the victim radar is
P_i = P_t G^2 λ^2 / ((4π)^2 R_i^2),
where R_i is the distance between the interferer radar and the victim radar on the LOS. It is worthwhile to note that R_i will be equal to R when the interferer radar is installed on the target. Accordingly, the interference signal is
s_i(t) = √(2 P_i) cos[2π φ_i(t-τ_i)]
       = √(2 P_i) cos[2π (f_ci(t-τ_i) + 1/2 k_i(t-τ_i)^2)],
where f_ci and k_i are the central carrier frequency and the chirp rate of the interferer radar, respectively, and τ_i is the time delay between the interference and the reference signal. When multiple interferer radars exist, the total interference signal is the superposition of each interference term represented in (<ref>). According to (<ref>) and (<ref>), the signal-to-interference ratio (SIR) at the victim radar receiver is
SIR = P_e/P_i = σ R_i^2 / (4π R^4).
From (<ref>) and (<ref>), the total signal received by the radar receiver is
s_r(t)=s_e(t)+s_i(t)+g(t),
where g(t) is the receiver noise. Dechirp processing of the received signal is achieved by using a low noise amplifier (LNA) and mixing with the reference signal. From (<ref>), (<ref>), (<ref>), and (<ref>), a beat-frequency signal in baseband can be derived as (<ref>), where φ_b and φ_bi are the constant phase terms. Accordingly, the beat frequency introduced by the target is <cit.>
f_b = k τ,
and the beat frequency introduced by the interference is
f_bi = f_c - f_ci + k_i τ_i + 1/2 (k - k_i) t,
which is an LFM signal. Substituting (<ref>) and (<ref>) into (<ref>), the beat-frequency signal can be rewritten as
s_b(t) = A_b cos(2π f_b t + φ_b) + A_bi cos(2π f_bi t + φ_bi) + g(t),
where A_b = 2√(P_t P_e) and A_bi = 2√(P_t P_i) are the amplitudes of the beat-frequency components of the target and the interference, respectively. Thus, the total beat-frequency signal consists of three parts, namely the target, the interference, and the noise.
After the dechirp processing, the beat frequency signal is filtered by a low-pass filter (LPF) whose function is to prevent signal aliasing during subsequent analog-to-digital sampling by an analog-to-digital converter (ADC). Then three fast Fourier transform (FFT) processes, i.e., range FFT, Doppler FFT, and spatial FFT, are applied to the digital signal for estimating the distance, the velocity, and the angle information of the target <cit.>. A schematic diagram of the FMCW radar system is shown in Fig. <ref>.
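As a concrete illustration of the dechirped signal structure described above, the following sketch simulates one baseband chirp: the point target contributes a constant beat tone at f_b = kτ, while an interferer with a different chirp rate contributes an LFM component that survives only while its instantaneous beat frequency sweeps through the receiver passband. All numerical values (sampling rate, sweep time, ranges, chirp rates, interference delay and carrier offset) are assumptions chosen for readability, not the parameter settings used later in the paper.

import numpy as np

c = 3e8
fs, T = 20e6, 40e-6                         # ADC rate and chirp sweep time (assumed)
t = np.arange(0, T, 1 / fs)
k = 300e6 / T                               # victim chirp rate: 300 MHz swept over T
k_i = -300e6 / T                            # interferer chirp rate (down-chirp)

R = 100.0                                   # target range in metres
tau = 2 * R / c
target = np.cos(2 * np.pi * k * tau * t)    # constant beat tone at f_b = k * tau

df, tau_i = -2e6, 5e-6                      # interferer carrier offset and delay (assumed)
f_bi = df + k_i * tau_i + 0.5 * (k - k_i) * t        # instantaneous interference beat frequency
interference = 3.0 * np.cos(2 * np.pi * np.cumsum(f_bi) / fs)  # integrate frequency to phase

in_band = np.abs(f_bi) < fs / 2             # crude stand-in for the anti-aliasing LPF
s_b = target + interference * in_band       # beat signal: target tone plus time-limited interference

Plotting the spectrogram of s_b reproduces the qualitative picture described above: a horizontal line for the target and a short, steep line for the interference.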
§ INTRODUCTION TO INTERFERENCE MITIGATION METHOD
§.§ Signal Characteristics and Method Motivation
Car detection in a typical mutual interference scenario is shown in Table <ref>. A car with an interferer radar is present at 100m, and another interferer radar is present at a distance of 2000m. In this case, for an ego radar, the SIRs between the car echo and the interference produced by the mounted radar and the distant radar are -41dB and -15dB according to (<ref>), respectively. These SIR levels indicate the interference power is greater than that of the target echo due to the one-way propagation effect shown in (<ref>) and (<ref>). As a result, an interferer radar may have an impact on target detection even if it is far away from the ego radar.
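The SIR levels quoted above follow directly from the expression SIR = σ R_i^2 / (4π R^4) derived earlier. In the check below, the car RCS σ = 10 m^2 is an assumed value; only the resulting SIRs are stated in the text.

import numpy as np

def sir_db(sigma, R, R_i):
    # SIR = sigma * R_i^2 / (4 * pi * R^4), expressed in dB
    return 10 * np.log10(sigma * R_i ** 2 / (4 * np.pi * R ** 4))

print(sir_db(sigma=10.0, R=100.0, R_i=100.0))   # radar mounted on the car at 100 m  -> about -41 dB
print(sir_db(sigma=10.0, R=100.0, R_i=2000.0))  # distant interferer at 2000 m       -> about -15 dB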
In addition, the TF features of the target echo and the interference before and after the dechirp processing are shown in Fig. <ref>. As a result of the dechirp processing, it can be seen from (<ref>), (<ref>), and (<ref>) that the target echo consists of a single-frequency signal, while the interference shows the characteristic of an LFM signal. After low-pass filtering, only signals in the passband, which is represented by the yellow area in Fig. <ref>, are retained. In this case, the target echo exists in the entire time domain as a single beat-frequency signal if its beat frequency is smaller than the cut-off frequency of the LPF. However, since the frequency sweep range of the interference is greater than the LPF passband, the interference is intercepted by the LPF, which makes the interference exhibit a finite extent in time, as shown in the second row of Fig. <ref>.
In summary, the target echo and the interference have following characteristics in the automotive radar mutual interference case:
* The target echo in baseband is a single-frequency signal, which appears as a straight line parallel to the time axis in the TF domain.
* The interference in baseband is an LFM and time-limited signal, which appears as a straight line with a large slope in the TF domain.
* The interference power is usually greater than that of the target echo due to the difference in signal propagation paths. This indicates that an automotive radar may be interfered with by other radars within a range of kilometers. In this case, the dynamic range of the interference power is large, i.e., both strong and weak interference exist in the received signal of the victim radar.
Based on the signal characteristic analysis in the automotive radar mutual interference scenario, the dynamic range of the interference power is large in a practical scenario, which brings difficulties to interference detection. The existing interference mitigation methods, such as the wavelet and the CFAR based methods, all take advantage of the larger interference power with respect to that of the target echo for interference
detecting. However, the detection performance of the existing methods decreases with lower interference power. Inspired by the noncoherent integration processing in radar target detection applications <cit.>, the interference detection can be performed by exploiting the line feature of the interference in the TF domain, and accumulating the interference power. Based on this motivation, we propose a Hough transform based interference detection approach in a power accumulation sense. In this way, the interference detection performance can be improved by using the accumulation effect on straight line points in the Hough parameter space.
§.§ Interference Detection and Localization Based on Power-Weighted Hough Transform
The characteristics of the interference and the target echo can be obtained by TF analysis technique. STFT is a widely used TF analysis technique due to its good linearity and computational simplicity <cit.>. In this paper, the TF analysis of the received signal is obtained using STFT and the discrete version implemented in practice is <cit.>
S_r(μ, m) = ∑_n=-∞^∞ s_r(n) w(n-μ D) e^-j 2π m n/N ,
where w(n) is an analysis window (e.g., a Hamming window <cit.>), N is the number of frequency samples, D is the hop size between successive DFTs, and μ denotes the time index.
P(μ, m) = |S_r(μ, m)|^2
        = S_r(μ, m) · conj(S_r(μ, m)),
where conj(·) is a conjugate operation. As long as the power spectrogram P(μ, m) is considered as a special image, it can be used as the input of the subsequent Hough transform.
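A possible implementation of the STFT S_r(μ, m) and the power spectrogram P(μ, m) with SciPy is sketched below. The Hamming window follows the reference above, and the window length of 32, hop of 4 and 128-point FFT match the settings reported later for the real-data experiment; for other data these remain free choices.

import numpy as np
from scipy.signal import stft

def power_spectrogram(s_r, fs, win_len=32, hop=4, nfft=128):
    # Two-sided STFT: rows index the frequency bin m, columns index the time bin mu.
    f, t_bins, S = stft(s_r, fs=fs, window="hamming", nperseg=win_len,
                        noverlap=win_len - hop, nfft=nfft,
                        return_onesided=False, boundary=None)
    P = np.abs(S) ** 2                      # power spectrogram, i.e. S * conj(S)
    return f, t_bins, S, P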
As an effective geometric shape detection method <cit.>,<cit.>, Hough transform has been widely used in many fields such as image processing <cit.> and lane detection based on radar images <cit.>. The classical Hough transform detects straight lines in binary images. It projects straight lines into a parameter space to accumulate the scores of points and the straight lines can be obtained by threshold detection in the parameter space. The line function in the Hough transform is defined as
ρ = x cos(θ) + y sin(θ),
where ρ and θ are the distance from the line to the origin and the angle of the line respectively. The coordinate (x, y) is used to describe pixel position for the input image, while each point (ρ, θ) in the Hough parameter space represents a line in the image. If the line exists in the image, the score of the corresponding point in the parameter space can be measured as
H(ρ, θ) = ∑_(x, y) δ(x, y),
with δ(x, y) = { 1, if (x, y) is on L; 0, otherwise },
where L denotes that the line satisfies (<ref>).
Unlike ordinary images, the intensity value of each pixel in the power spectrogram represents the distribution of signal power in the TF domain. From (<ref>), if a signal has a certain power and chirp characteristics at the same time, it appears as a straight line with the corresponding power in the power
spectrogram P(μ, m). Due to this TF feature, accumulating power information in the Hough parameter space is utilized for improving the performance of the interference detection. The power-weighted score in the Hough parameter space is
H_P(ρ, θ) = ∑_(μ, m) ∈ P P(μ, m) δ(μ, m).
In addition, considering that the slope of the line corresponding to the target echo in the TF spectrogram is close to 0, only lines with large slopes are detected to ensure they correspond to the interference. When the Hough parameter matrix is obtained, the lines can be extracted by threshold detection. With some prior information, the threshold can be determined in a feasible way as follows. In real scenarios, we set a maximum RCS value related to an interested target as σ_max and calculate the theoretical value of the target echo power according to (<ref>); then the detection threshold is determined as
Thd = α P_t G^2 λ^2 σ_max / ((4π)^3 R^4),
where α is the threshold factor, which can be determined in a radar test stage. After obtaining the detection threshold, the lines corresponding to the interference can be extracted if H_P(ρ, θ) > Thd, and the interference locations in the spectrogram are found according to (<ref>) as
{ μ = ρ, if sin(θ) = 0;  m = -cot(θ) μ + ρ/sin(θ), otherwise }.
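The sketch below illustrates the power-weighted accumulation H_P(ρ, θ) and the mapping of detected (ρ, θ) peaks back to TF cells. Here P is assumed to be arranged as (time index μ) × (frequency index m), i.e. the transpose of the SciPy spectrogram above; the angle grid, the number of ρ bins and the margin used to skip near-horizontal lines (which would correspond to the single-frequency target echo) are illustrative choices rather than values prescribed by the paper.

import numpy as np

def power_weighted_hough(P, thetas, n_rho=256, horiz_margin_deg=10.0):
    # Accumulate spectrogram power along candidate lines rho = mu*cos(theta) + m*sin(theta).
    n_t, n_f = P.shape
    diag = float(np.hypot(n_t, n_f))
    rhos = np.linspace(-diag, diag, n_rho)
    d_rho = rhos[1] - rhos[0]
    H = np.zeros((n_rho, len(thetas)))
    mu, m = np.meshgrid(np.arange(n_t), np.arange(n_f), indexing="ij")
    for j, th in enumerate(thetas):
        # theta near +/-90 deg corresponds to a near-horizontal line (the target echo): skip it
        if abs(abs(np.degrees(th)) - 90.0) < horiz_margin_deg:
            continue
        rho = mu * np.cos(th) + m * np.sin(th)
        idx = np.clip(np.round((rho - rhos[0]) / d_rho).astype(int).ravel(), 0, n_rho - 1)
        np.add.at(H[:, j], idx, P.ravel())        # power-weighted score H_P(rho, theta)
    return H, rhos

def interference_cells(H, rhos, thetas, shape, thd):
    # Map every (rho, theta) peak above the threshold Thd back to TF cells (mu, m).
    cells = set()
    for i, j in zip(*np.where(H > thd)):
        rho, th = rhos[i], thetas[j]
        if abs(np.sin(th)) < 1e-6:                # vertical line: one time bin, all frequencies
            mu = int(round(rho / np.cos(th)))
            if 0 <= mu < shape[0]:
                cells.update((mu, m) for m in range(shape[1]))
        else:
            for mu in range(shape[0]):
                m = int(round((rho - mu * np.cos(th)) / np.sin(th)))
                if 0 <= m < shape[1]:
                    cells.add((mu, m))
    return cells

With, e.g., thetas = np.radians(np.arange(-80, 81, 2.0)) the near-horizontal target line is excluded from the search by construction, and thd plays the role of the threshold Thd above.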
§.§ Interference Mitigation and Target Echo Recovery
Based on the detection results of interference lines, values at interference locations are discarded to achieve interference suppression. Meanwhile, a signal recovery process can be realized by interpolating the discarded locations using neighborhood samples.
For each specific frequency bin slice of the spectrogram, an AR model along the time axis of the spectrogram <cit.> is used for realizing a signal interpolation. The AR model is defined as
S_rec(μ, m) = ∑_τ_n=1^q a_τ_n S_r(μ-τ_n, m) + ε,
where q is the number of neighboring samples, a_τ_n is the prediction coefficient, and ε is the residual. AR coefficients can be obtained by least squares. Therefore, the prediction values are obtained in terms of the solved AR model and the gaps at the corresponding interference locations are filled by the predicted signals for achieving the interference mitigation. After traversing all frequency slices by the interference mitigation process, an interference-free spectrogram is obtained.
Finally, a reconstructed target echo without interference is obtained by applying inverse STFT (ISTFT) to the
interference-free spectrogram. The ISTFT is computed by taking the inverse Fourier transform of each time bin slice of the interference-free spectrogram and overlap-adding the inverted signals <cit.> as
s_rec(n) = ∑_μ=-∞^∞ w_i(n-μ D) 1/N ∑_m=0^N-1 S_rec(μ, m) e^j 2π m n/N,
where w_i(n) defines a synthesis window which is the inverse of the analysis window w(n).
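A minimal sketch of the slice-wise gap filling and the final inversion is given below. The AR order q, the length of the history window used for the least-squares fit, and the fallback for gaps near the start of a slice are assumptions made for the illustration; in the experiments the paper instead selects the order with the Akaike information criterion.

import numpy as np
from scipy.signal import istft

def ar_fill_slice(x, bad, q=8, hist_len=64):
    # Fill the time samples listed in `bad` of one complex frequency slice by AR(q) prediction.
    x = np.asarray(x, dtype=complex).copy()
    for mu in sorted(bad):                  # earlier gaps are filled first and reused as history
        if mu <= q:
            x[mu] = x[max(mu - 1, 0)]       # not enough history: hold the previous value
            continue
        hist = x[max(0, mu - hist_len):mu]
        L = len(hist)
        A = np.column_stack([hist[q - 1 - k:L - 1 - k] for k in range(q)])  # regressors x[n-1..n-q]
        b = hist[q:]                                                        # targets x[n]
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)                        # least-squares AR fit
        x[mu] = x[mu - 1:mu - q - 1:-1] @ coef                              # one-step prediction
    return x

def repair_and_invert(S, cells, fs, win_len=32, hop=4, nfft=128):
    # S: two-sided spectrogram (n_freq x n_time); cells: TF cells flagged by the Hough detector.
    S = S.copy()
    for m in range(S.shape[0]):
        bad_mu = sorted({mu for (mu, mm) in cells if mm == m})
        if bad_mu:
            S[m, :] = ar_fill_slice(S[m, :], bad_mu)
    _, s_rec = istft(S, fs=fs, window="hamming", nperseg=win_len, noverlap=win_len - hop,
                     nfft=nfft, input_onesided=False, boundary=False)
    return np.real(s_rec)

The analysis and synthesis settings must match those used to build the spectrogram so that the overlap-add inversion reconstructs the interference-free time signal.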
In summary, the proposed interference mitigation flow is shown in Algorithm <ref>.
§ NUMERICAL SIMULATION RESULTS
§.§ Simulation Description and Evaluation Metrics
Numerical simulation is one of the two approaches used for evaluating the performance of the proposed interference mitigation method. An FMCW radar signal flow simulation based on Fig. <ref> is carried out, which includes waveform generation, amplification, and emission, free space propagation, target back scattering and interference superposition, low-noise amplification, dechirp processing, lowpass filtering and ADC sampling. Two sampling frequencies are utilized for simulating analog and digital signals respectively, i.e., a large analog frequency (AF) such as 2GHz, is applied for analog signal simulation while an intermediate frequency (IF) is used for analog-to-digital sampling. The main radar parameter settings used in the simulation are shown in Table <ref>. In the simulation scenario, the interferer radar one and the interferer radar two are set at 30m and 150m from the victim radar respectively.
Two types of targets are set for evaluating different methods as follows:
* A stationary target is presumed to be located at 150m from the ego radar. It is mainly used for evaluating interference mitigation effects in a single chirp signal in Section <ref>, and the influence of different SNRs on interference mitigation performance in Section <ref>.
* A moving target with a speed of 11m/s is presumed to be located at 100m for evaluating velocity measurement performance in Section <ref>.
The performance of the proposed method is compared with seven state-of-the-art methods, which include five time domain methods and two TF domain methods. Among them, the zeroing, raised cosine window (CW) <cit.>, time domain AR (T-AR) <cit.>, wavelet decomposition <cit.> and IMAT <cit.> methods are implemented in the time domain, while the STFT beat-frequency interpolation by AR model (STFT-AR) and CFAR-Burg methods are implemented in the TF domain <cit.>. In order to ensure the comparability of each method, a CFAR detector is used for detecting interference positions for all the time domain methods.
The interference mitigation performance in both the time and the frequency domains is evaluated using two time domain metrics and two frequency domain metrics. The first metric is the cosine similarity (CS), defined as
CS = s_rec^* s_e / (‖s_rec‖_2 × ‖s_e‖_2),
where s_e is the target echo, s_rec is the recovered signal of s_e, and * denotes the conjugate transposition. The CS is a metric of an angle between two signal vectors, which can be used to represent the correlation of the two vectors. The closer the CS is to 1, the more correlated the two vectors are. The second time domain metric is error vector magnitude (EVM) defined as
EVM = ‖s_rec - s_e‖_2 / ‖s_e‖_2.
The EVM is employed to describe the difference between an ideal signal and a recovered signal. A small EVM value means an accurate reconstruction.
The last two metrics, namely peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR) <cit.> are employed to evaluate interference mitigation performance in the frequency domain via range profiles. The PSLR is defined as
PSLR=10 ×log 10(max_m∉ [a,b]F^2(m)/max_m∈ [a,b] F^2(m)),
where F is the spectrum of s_rec and the interval [a, b] bounds the main lobe of the spectrum. The PSLR is employed to describe a ratio between the power of the max sidelobes point with respect to that of the main lobe. The ISLR is defined as
ISLR=10 ×log10(∑_m=1^a F^2(m)+∑_m=b^M F^2(m)/∑_m=a^b F^2(m)),
The ISLR describes a ratio between the energy of all sidelobes with respect to that of the main lobe. For both the PSLR and the ISLR, smaller values indicate lower sidelobe levels, which represents good interference mitigation performance in our applications.
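The four metrics defined above translate directly into code. In the sketch below the CS is returned as a magnitude so that it is real-valued for complex signals, and the half-open index range [a, b) is assumed to delimit the main-lobe bins of the spectrum F.

import numpy as np

def cosine_similarity(s_rec, s_e):
    # |s_rec^* s_e| / (||s_rec||_2 * ||s_e||_2)
    return np.abs(np.vdot(s_rec, s_e)) / (np.linalg.norm(s_rec) * np.linalg.norm(s_e))

def evm(s_rec, s_e):
    # ||s_rec - s_e||_2 / ||s_e||_2
    return np.linalg.norm(np.asarray(s_rec) - np.asarray(s_e)) / np.linalg.norm(s_e)

def pslr_db(F, a, b):
    # peak sidelobe power over peak main-lobe power, in dB
    P = np.abs(F) ** 2
    side_peak = max(np.max(P[:a], initial=0.0), np.max(P[b:], initial=0.0))
    return 10 * np.log10(side_peak / np.max(P[a:b]))

def islr_db(F, a, b):
    # total sidelobe energy over main-lobe energy, in dB
    P = np.abs(F) ** 2
    return 10 * np.log10((P[:a].sum() + P[b:].sum()) / P[a:b].sum())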
§.§ Noise-Free Simulation Results
Noise-free simulation is used for evaluating the interference mitigation performance firstly. In this case, there are only the target echo and the interference in the received signal, which allows us to quantitatively evaluate the effects of the different interference mitigation methods. The noise-free
signals are shown in Fig. <ref>. The TF distributions of the target echo and the two types of interference in the analog domain are shown in Fig. <ref> (a); the frequencies of the interference and the target echo cross at different times. The interference signals that fall into the LPF passband near these intersections are retained and then received by the radar receiver. The outputs of the LPF and the ADC are shown in Fig. <ref> (b) and Fig. <ref> (c), respectively.
The interference mitigation effects of the proposed method are shown in Fig. <ref>. The spectrogram of the received signal with interference is shown in Fig. <ref> (a). The power accumulation and the peak detection result in the Hough parameter space is demonstrated in Fig. <ref> (b). Three peaks corresponding to interference lines are detected, and the interference lines in the TF domain are well indicated as shown in Fig. <ref> (c). Based on these locations, the interference is finely mitigated by the AR model reconstruction process. In this process, the order of the AR model is determined by Akaike information criterion <cit.>. The interference mitigation result in the TF domain is shown in Fig. <ref> (d). Compared with the spectrogram of the received signal with interference as shown in Fig. <ref> (a), the target echo is retained and the interference contaminated areas are reconstructed effectively after the interference mitigation by the proposed method.
The results of the eight methods on the four performance metrics are summarized in Table <ref>. It can be observed from Table <ref> that the four time domain methods (i.e., zeroing, CW, T-AR, and IMAT) contain large reconstruction error as shown in the EVM. Furthermore, their correlations with the target echo are poor as shown in the CS. The reason is that the amount of interference information that can be extracted from the time domain is limited. Compared with the four time domain methods, the wavelet method uses the wavelet coefficients of different resolution layers to suppress interference. In this case, the interference power is decomposed into different components, and the useful signals in those components with less interference are preserved. However, the wavelet method still works in the time domain, and does not make full use of the TF characteristics. Unlike the time domain methods, the STFT-AR, the CFAR-Burg, and the proposed methods perform interference mitigation in the TF domain and, therefore, have the ability to exploit more information for accurately locating the interference and retaining more useful signals. Thus the large CS, the small EVM and ISLR are obtained from the three TF domain methods in the noise-free experiment.
The frequency spectrum, i.e., the range profile, of the recovered signal is shown in Fig. <ref>. All eight methods can effectively suppress the interference. Among the five time domain methods, the wavelet method has the best sidelobe levels. Although the IMAT method has a lower sidelobe at the target location of 150m, its sidelobe level deteriorates more rapidly at distant ranges. The three TF domain methods have better sidelobe levels than those of the time domain methods and are very close to the signal without interference.
§.§ Simulation Results in Different SNR Levels
In this experiment, the interference power is reduced to the same level as the target echo power. This is used for simulating a distant interference scenario and verifying the mitigation performance of the different tested methods for weak interference. Moreover, simulations at different SNR levels are implemented for evaluating the robustness of the tested methods. Gaussian white noise with different SNR levels is added to the received signal, and Monte Carlo simulations are repeated for evaluating the statistical performance under each SNR level. There are a total of 256 independent noise-adding experiments for each SNR level. After all the SNR experiments, the results of the four metrics versus the SNR are shown in Fig. <ref>.
When the SNR is greater than -5dB, for the zeroing, the CW, the T-AR and the IMAT methods, the CS, the EVM, the PSLR and the ISLR of the recovered signal are about 0.8, 0.7, -18, and -3, respectively. It can be found from Fig. <ref> that the wavelet method achieves better results than the other four time domain methods. The three TF domain methods, namely the STFT-AR, the CFAR-Burg and the proposed methods, achieve the best performance. In this case, the CS is greater than 0.95, the EVM is less than 0.25, the PSLR is less than -32, and the ISLR is less than -15. From these results, the TF domain methods outperform the time domain methods because more interference information can be utilized and more accurate interference locations can be detected.
When the SNR is low, i.e., smaller than -15dB, the signal recovery performance of all the tested methods is degraded. However, the TF domain methods still maintain an advantage over the time domain methods. As shown in Fig. <ref>, in the case of -15dB SNR, the performance of the TF domain methods is still better than that of the time domain methods at high SNR on all four metrics. In addition, as the SNR decreases, the proposed method shows superior interference mitigation performance and robustness compared with the STFT-AR and the CFAR-Burg methods. For example, when the SNR is -25dB, the CS of the proposed method is about 16% higher than those of the STFT-AR and the CFAR-Burg methods, and it achieves a smaller statistical standard deviation in the Monte Carlo simulations. Similar results are observed for the EVM, the PSLR, and the ISLR, as shown in Fig. <ref>.
The interference detection results for the STFT-AR, the CFAR-Burg, and the proposed methods in the TF domain under SNR of -5dB are shown in Fig. <ref>. It can be seen that the proposed method has better interference location detection accuracy than that of the STFT-AR and the CFAR-Burg methods. In low SNR conditions, the power-weighted Hough transform is equivalent to the power accumulation along a straight line in the TF domain. As a result, the INR is improved after Hough transform and the interference is detected robustly.
Unlike the proposed method, the CFAR-Burg method does not accumulate the interference power to improve the INR. Thus it encounters false alarms caused by noise in low SNR conditions. Moreover, in the frequency slice where the target echo is located, a failure to correctly estimate the noise level leads to a missed detection of the interference for the CFAR-Burg method, as shown in Fig. <ref> (c). These corner cases degrade the interference mitigation and further affect the signal recovery performance.
As for the STFT-AR method, since there is no explicit interference detection and localization process in <cit.>, we manually labeled the interference locations for comparison, as shown in Fig. <ref> (b). This is an ideal situation; therefore, the performance in practice will be worse due to detection errors of the interference locations. Compared with the CFAR-Burg and the proposed methods, the STFT-AR method removes all the frequency bins in a certain time range of the TF domain. This operation causes a loss of useful signal information adjacent to the interference locations, and further leads to a decrease in interference mitigation performance.
§.§ Moving Target Simulation Results
A moving target simulation is used for evaluating the performance of target velocity measurement by chirp sequences before and after interference mitigation. In the simulation, the moving target is located at 100m with a velocity of 11m/s. A total of 256 chirps are set up as a range-Doppler (RD) processing unit. In the evaluation of the interference mitigation performance with multiple chirps, the STFT-AR method is not used because the interference locations vary in each chirp and cannot be marked manually.
In the experiment, the interference mitigation process was firstly performed by traversing each chirp to obtain interference-free chirp sequence data, and then the range FFT and the Doppler FFT processing mentioned in Section <ref> was realized on the interference-free chirp sequence data for obtaining RD responses. The RD responses corresponding to the tested methods are shown in Fig. <ref> (c) to Fig. <ref> (i). As a reference, the RD responses under the interference-free and the interference conditions are given in Fig. <ref> (a) and Fig. <ref> (b), respectively. The CFAR-Burg and the proposed method have better interference mitigation effects in the RD responses than those of the time domain methods, i.e. the zeroing, the CW, the T-AR, the wavelet, and the IMAT methods. The reason is that, on the one hand, the interference can be more accurately located in the TF domain, and on the other hand, the CFAR-Burg and the proposed methods utilize the uncontaminated signals to interpolate the contaminated gaps. Thus the two methods maintain the phase coherence of the chirp sequence, and obtain the high target SNR in the RD responses.
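The range FFT and Doppler FFT processing referred to above can be sketched as follows; `data` is assumed to hold one interference-mitigated chirp per row, and the Hamming windows on both axes follow the windowing reported for the real-data experiment in the next section.

import numpy as np

def range_doppler_map(data):
    # data: (n_chirps, n_samples) matrix of dechirped, interference-mitigated beat signals
    n_chirps, n_samples = data.shape
    windowed = data * np.hamming(n_samples)[None, :] * np.hamming(n_chirps)[:, None]
    rng_fft = np.fft.fft(windowed, axis=1)                       # range FFT along fast time
    rd = np.fft.fftshift(np.fft.fft(rng_fft, axis=0), axes=0)    # Doppler FFT along slow time
    return 20 * np.log10(np.abs(rd) + 1e-12)                     # RD magnitude in dB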
Moreover, compared with the false alarms of the CFAR-Burg method as shown in Fig. <ref> (c), the proposed method utilizes the linear structure of the interference in the TF domain and accumulates the interference power in the Hough parameter space for robustly interference detecting, which is able to avoid the false alarms. Therefore, the proposed method has better performance in reconstructing the signal and obtains a higher SNR in the RD response. Clearer results are given in Fig. <ref>. In the RD responses, the range slice of the moving target is extracted for obtaining velocity profiles, then the velocity profiles of the tested methods are shown in Fig. <ref>. Similar to the RD response analysis, the proposed method obtains the best interference mitigation performance in the velocity profiles.
Quantitative results are given in Table <ref>. Since the velocity profile is the output of the Doppler FFT processing, the frequency domain metrics, i.e., the PSLR and the ISLR are used to measure the interference mitigation performance for the tested methods. The PSLRs of the time domain methods are distributed between -30dB and -23dB. Among them, the zeroing, the CW and the T-AR methods have the PSLRs around -25dB. These three methods only perform interference detection in the time domain which has the least amount of interference information. The wavelet method decomposes the time domain signal into components and performs interference detection in the components which can retain some useful signal information. As a result, the wavelet method achieves the PSLR level of -29 dB, which is the best performance among the time domain methods. The IMAT method is a sparse reconstruction method for the time domain signal, and this sparse approach makes it to obtain the PSLR level of -27 dB in the velocity profile. Compared with the time domain methods, the TF domain methods significantly improve the PSLR level. The CFAR-Burg and the proposed methods obtain the PSLR levels of -47dB and -52dB, respectively. Since the false alarms occurred in the CFAR-Burg method are avoided in the proposed method, the proposed method obtains the closest PSLR level to the ground truth. Similar results are obtained in the ISLR metric as shown in Table <ref>.
§ EXPERIMENT RESULTS FOR REAL SCENARIO
In this experiment, real scenario data are collected for verifying the effectiveness of the proposed method. Three 77GHz millimeter-wave radars, from Muniu Technology Co., Ltd. (https://www.muniutech.cn/vehicle?category_id=9), are used for data collection. Among these radars, one is used as the victim radar, and the other two are used as interferer radars. Experimental data of the victim radar is recorded. The device positions in the scenario are shown in Fig. <ref> (a). The interferer radars were set on the left and the right sides relative to the LOS of the victim radar. The distances from the victim radar to interferer radar one and interferer radar two were 20m and 30m, respectively. Radar configurations are shown in Table <ref>. Considering the ease of implementation on actual signal processing chips, all the radars were set to have the same sweep time but different sweep bandwidths to generate LFM signals with different slopes. The victim radar was configured in the up-frequency modulation mode with a sweep bandwidth of 300MHz. Interferer radar one was configured in the down-frequency modulation mode with a sweep bandwidth of 300MHz, and interferer radar two was configured in the up-frequency modulation mode with a sweep bandwidth of 500MHz. The pulse repetition time (PRT) of the radars was set to be different to increase the probability of mutual interference. For a single chirp, the number of sampling points is 400. A window length of 32 is used for the STFT and the step between the sliding windows is set to 4. The signal in each window is applied to a 128-point FFT to obtain the TF spectrogram. For a chirp sequence, a total of 128 chirps are included as a coherent processing unit for an RD response.
§.§ Stationary Target Experiment
A corner reflector was placed at 20m in front of the victim radar to simulate a typical strong target. The time domain signal and the TF spectrogram of the received signal are shown in Fig. <ref>. It can be seen that two forms of interference related to the interferer radars are observed in the time domain as shown in Fig. <ref> (a). The interference with short duration is introduced by the interferer radar one, and the one with long duration is introduced by the interferer radar two. Fig. <ref> (b) shows the TF features of the received signal that includes LFM-like interference and the single frequency-like target echo.
The range profiles after interference mitigation for all the tested methods are shown in Fig. <ref>. Overall, the TF domain methods achieve better interference mitigation performance on the experimental data than the time domain methods. The TF domain methods result in a lower noise floor level of about -30dB near the target, whereas the time domain methods have a noise floor level of about -25dB.
Fig. <ref> shows the PSLR and the ISLR of the corner reflector in the range profile. The quantitative results show that the TF domain methods are superior to the time domain methods in both the PSLR and the ISLR, except for the PSLR of the CFAR-Burg method. Unlike the PSLR, which reflects the sidelobe level at a certain point, the ISLR reflects the average sidelobe level within a certain range, so it is more accurate for evaluating the interference suppression performance. Therefore, even though the PSLR of the CFAR-Burg method is higher than that of some time domain methods, it still achieves better performance overall. Among the TF domain methods, the proposed method achieves the best performance on the PSLR and the ISLR.
The effects before and after interference localization and mitigation for the three TF domain methods are shown in Fig. <ref>. For the STFT-AR method, the interference locations in this experiment are manually marked since no method for interference detection is given in the original literature <cit.>, so it achieves a better interference mitigation effect. However, the performance of the STFT-AR method in practice will be lower than the results reported here, since automatic interference detection is not as accurate as manual marking.
For the CFAR-Burg method, two factors affect the interference location detection, as shown in Fig. <ref> (c). One factor is that two adjacent interference signals in the same frequency slice raise the detection thresholds for each other during CFAR detection, causing missed interference detections at certain frequencies. The other factor is false alarms caused by the low SNR, which lead to the loss of useful information in the TF domain. Compared with the STFT-AR and the CFAR-Burg methods, the proposed method detects the interference locations more accurately in the measured data due to the utilization of the structural information in the TF domain; therefore, the best interference mitigation effect is achieved.
§.§ Pedestrian Experiment
A pedestrian walking back and forth at a range of 30m to 40m from the victim radar is used to evaluate the performance of the tested methods in the interference scenario as shown in Fig. <ref> (b). The RD responses are obtained for a coherent processing unit, i.e., the 128 chirps, with 400 time sampling points per chirp, by performing the range FFT and the Doppler FFT with Hamming window <cit.>. The processing flow of the RD responses is similar to the simulation implemented in Section <ref>. For the pedestrian, the RD responses obtained by the tested methods are shown in Fig. <ref>.
In the pedestrian experiment, the presence of a strong target such as the corner reflector makes the difference in power between the target echo and the interference no longer significant, which increases the difficulty of interference detection and localization. In this case, the interference mitigation by the time domain methods, despite the improvement, is not sufficient to achieve the required SNR for pedestrian detecting. Therefore, the time domain methods can not provide an effective measurement of the pedestrian’s range and velocity in the RD response as shown in Fig. <ref> (b) to Fig. <ref> (f). For the CFAR-Burg method, the pedestrian just barely appeared in the RD response after interference mitigation due to the existence of false alarms and interference missed detection problems as shown in Fig. <ref> (g). The interference missed detection is a failure point of the CA-CFAR approach in interference-dense scenarios. When the locations of multiple interference are close to each other in time as shown in Fig. <ref> (c), it causes the CFAR detector to overestimate noise levels, which raises the detection threshold and lead to the missed detection. The same phenomenon of the interference missed detection can be seen in the original literature of the CFAR-Burg method [34]. For the proposed method, the interference can be better mitigated in the pedestrian experiment, resulting in the correct detection of the pedestrian’s range and velocity as shown in Fig. <ref> (h).
§.§ Algorithmic Runtime Analysis
The runtime results of the tested methods are evaluated by using the data from the corner reflector experiment in Section <ref>. For a chirp signal with interference, the interference mitigation process from the tested methods is applied and the corresponding runtime is recorded. The MATLAB version used in the experiment is R2021a and the computer configurations are AMD Ryzen 7 5800H CPU and 16GB DDR4 3200MHz RAM. The runtime results of the tested methods are shown in Table <ref>.
Overall, the TF domain methods have longer runtimes than the time domain methods because they expand a one-dimensional time signal into a two-dimensional TF spectrogram and implement the interference mitigation process in the TF domain. For the STFT-AR method, since there is no interference detection and localization step, its runtime is mainly consumed by the STFT and the signal reconstruction process. For the CFAR-Burg and proposed methods, the interference detection and localization steps lead to a large increase in runtime compared with the STFT-AR method, which indicates that interference detection and localization in the TF domain is the most time-consuming part of the TF methods. In addition, the Hough transform used in the proposed method detects lines by searching a two-dimensional parameter space and therefore has the longest runtime. However, since the search grids of the Hough parameter space are independent of each other, parallel processing can be considered to reduce the runtime in practice.
§ CONCLUSIONS
In this paper, the mutual interference of automotive radars in the TF domain is analyzed. Based on the linear characteristic of the interference in the TF domain, a power-weighted Hough transform interference detection approach is proposed, and AR-model-based prediction is then used for interference mitigation. Compared with existing interference mitigation methods implemented in the time domain, the proposed method locates the interference more accurately in the TF domain and retains more of the useful signal during interference mitigation. Compared with the STFT-AR and CFAR-Burg methods implemented in the TF domain, the proposed method accumulates interference power based on structural information to improve detection and localization performance. As a result, the target echo can be recovered more accurately and robustly under low SNR conditions.
Yanbing Li (M'22) received the M.S. and Ph.D. degrees in signal and information processing from Xidian University, Xi'an, China, in 2009 and 2013, respectively.
He is now an associate professor with the School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, China. His research interests include radar system design, radar signal processing, radar target recognition, and the applications of radar sensing techniques in autonomous driving, intelligent transportation and internet of things.
Weichuan Zhang received the M.S. degree in signal and information processing from the Southwest Jiaotong University in China and the Ph.D. degree in signal and information processing in National Lab of Radar Signal Processing, Xidian University, China. He is a research fellow at Griffith University, QLD, Australia. His research interests include computer vision, image analysis, and pattern recognition. He is a member of the IEEE.
Lianying Ji received the B.S. degree from the Dalian Maritime University, in 2004, and the Ph.D. degree from the Beijing Institute of Technology in 2009.
He has been with the School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, since 2009. During 2010, he was a visiting researcher at the China-Singapore Institute of Digital Media, Singapore. He is now the CTO of Beijing Muniu Linghang Technology Company. His technical contributions have been in the areas of both biomedical information processing and mmWave radar signal processing.
|
http://arxiv.org/abs/2307.05262v2 | 20230711135128 | Constraints on the Intergalactic and Local Dispersion Measure of Fast Radio Bursts with the CHIME/FRB far side-lobe events | [
"Hsiu-Hsien Lin",
"Paul Scholz",
"Cherry Ng",
"Ue-Li Pen",
"D. Z. Li",
"Laura Newburgh",
"Alex Reda",
"Bridget Andersen",
"Kevin Bandura",
"Mohit Bhardwaj",
"Charanjot Brar",
"Tomas Cassanelli",
"Pragya Chawla",
"Amanda M. Cook",
"Alice P. Curtin",
"Matt Dobbs",
"Fengqiu Adam Dong",
"Emmanuel Fonseca",
"Bryan M. Gaensler",
"Utkarsh Giri",
"Alex S. Hill",
"Jane Kaczmarek",
"Joseph Kania",
"Victoria Kaspi",
"Kholoud Khairy",
"Calvin Leung",
"Kiyoshi W. Masui",
"Juan Mena-Parra",
"Bradley W. Meyers",
"Anna Ordog",
"Aaron B. Pearlman",
"Ziggy Pleunis",
"Masoud Rafiei-Ravandi",
"Mubdi Rahman",
"Scott Ransom",
"Ketan R. Sand",
"Pranav Sanghavi",
"Kaitlyn Shin",
"Kendrick Smith",
"Ingrid Stairs",
"Shriharsh P. Tendulkar",
"Keith Vanderlinde",
"Dallas Wulf"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Paul Scholz
[email protected]
0000-0001-7453-4273]Hsiu-Hsien Lin
Institute of Astronomy and Astrophysics, Academia Sinica, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan
Canadian Institute for Theoretical Astrophysics, 60 St. George Street, Toronto, ON M5S 3H8, Canada
0000-0002-7374-7119]Paul Scholz
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
Department of Physics and Astronomy, York University, 4700 Keele Street, Toronto, ON MJ3 1P3, Canada
0000-0002-3616-5160]Cherry Ng
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0003-2155-9578]Ue-Li Pen
Institute of Astronomy and Astrophysics, Academia Sinica, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan
Canadian Institute for Theoretical Astrophysics, 60 St. George Street, Toronto, ON M5S 3H8, Canada
Canadian Institute for Advanced Research, MaRS Centre, West Tower, 661 University Avenue, Suite 505
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
Perimeter Institute for Theoretical Physics, 31 Caroline Street N, Waterloo, ON N25 2YL, Canada
0000-0001-7931-0607]D. Z Li
TAPIR, Walter Burke Institute for Theoretical Physics, Mail Code 350-17, Caltech, Pasadena, CA 91125, USA
0000-0002-7333-5552]Laura Newburgh
Department of Physics, Yale University, New Haven, CT 06520, USA
0000-0001-6967-7253]Alex Reda
Department of Physics, Yale University, New Haven, CT 06520, USA
0000-0001-5908-3152]Bridget Andersen
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0003-3772-2798]Kevin Bandura
Lane Department of Computer Science and Electrical Engineering, 1220 Evansdale Drive, PO Box 6109 Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0002-3615-3514]Mohit Bhardwaj
Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, 15213, PA, USA
0000-0002-1800-8233]Charanjot Brar
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0003-2047-5276]Tomas Cassanelli
Department of Electrical Engineering, Universidad de Chile, Av. Tupper 2007, Santiago 8370451, Chile
0000-0002-3426-7606]Pragya Chawla
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
0000-0001-6422-8125]Amanda M. Cook
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0002-8376-1563]Alice P. Curtin
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
0000-0001-7166-6422]Matt Dobbs
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0003-4098-5222]Fengqiu Adam Dong
Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 Canada
0000-0001-8384-5049]Emmanuel Fonseca
Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0002-3382-9558]B. M. Gaensler
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-5553-9167]Utkarsh Giri
Department of Physics, University of Wisconsin-Madison, 1150 University Ave, Madison, WI 53706, USA
0000-0001-7301-5666]Alex S. Hill
Department of Computer Science, Math, Physics, & Statistics, University of British Columbia, Okanagan Campus, Kelowna, BC V1V 1V7, Canada
Dominion Radio Astrophysical Observatory, Herzberg Research Centre for Astronomy and Astrophysics, National Research Council Canada, PO Box 248, Penticton, BC V2A 6J9, Canada
0000-0003-4810-7803]Jane Kaczmarek
Department of Computer Science, Math, Physics, & Statistics, University of British Columbia, Okanagan Campus, Kelowna, BC V1V 1V7, Canada
CSIRO Space & Astronomy, Parkes Observatory, P.O. Box 276, Parkes NSW 2870, Australia
0000-0002-3354-3859]Joseph Kania
Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0001-9345-0307]Victoria Kaspi
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0009-0005-7115-3447]Kholoud Khairy
Lane Department of Computer Science and Electrical Engineering, 1220 Evansdale Drive, PO Box 6109 Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0002-4209-7408]Calvin Leung
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
NHFP Einstein Fellow
0000-0002-4279-6946]Kiyoshi W. Masui
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
0000-0002-0772-9326]Juan Mena-Parra
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-8845-1225]Bradley W. Meyers
International Centre for Radio Astronomy Research (ICRAR), Curtin University, Bentley WA 6102 Australia
0000-0002-2465-8937]Anna Ordog
Dominion Radio Astrophysical Observatory, Herzberg Research Centre for Astronomy and Astrophysics, National Research Council Canada, PO Box 248, Penticton, BC V2A 6J9, Canada
Department of Computer Science, Math, Physics, & Statistics, University of British Columbia, Okanagan Campus, Kelowna, BC V1V 1V7, Canada
0000-0002-8912-0732]Aaron B. Pearlman
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Banting Fellow
McGill Space Institute Fellow
FRQNT Postdoctoral Fellow
0000-0002-4795-697X]Ziggy Pleunis
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-7694-6650]Masoud Rafiei-Ravandi
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0003-1842-6096]Mubdi Rahman
Sidrat Research, 124 Merton Street, Suite 507, Toronto, ON M4S 2Z2, Canada
0000-0001-5799-9714]Scott Ransom
National Radio Astronomy Observatory, 520 Edgemont Rd, Charlottesville, VA 22903, USA
0000-0003-3154-3676]Ketan R Sand
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0001-5504-229X]Pranav Sanghavi
Department of Physics, Yale University, New Haven, CT 06520, USA
Lane Department of Computer Science and Electrical Engineering, 1220 Evansdale Drive, PO Box 6109 Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0002-6823-2073]Kaitlyn Shin
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
0000-0002-2088-3125]Kendrick Smith
Perimeter Institute for Theoretical Physics, 31 Caroline Street N, Waterloo, ON N25 2YL, Canada
0000-0001-9784-8670]Ingrid Stairs
Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 Canada
0000-0003-2548-2926]Shriharsh P. Tendulkar
Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Mumbai, 400005, India
National Centre for Radio Astrophysics, Post Bag 3, Ganeshkhind, Pune, 411007, India
0000-0003-4535-9378]Keith Vanderlinde
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-7314-9496]Dallas Wulf
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
We study the 10 fast radio bursts (FRBs) detected in the far side-lobe region of the CHIME telescope from 2018 August 28 to 2021 August 31. We find that the far side-lobe events have on average ∼500 times greater fluxes than events detected in CHIME's main lobe. We show that the side-lobe sample is therefore statistically ∼20 times closer than the main-lobe sample. The median dispersion measure (DM) excess, after removing the Galactic disk component using the NE2001 model for the free-electron density distribution of the Milky Way, of the 10 far side-lobe and 471 non-repeating main-lobe FRBs in the first CHIME/FRB catalog is 183.0 and 433.9 pc cm^-3, respectively. By comparing the DM excesses of the two populations under reasonable assumptions, we statistically constrain the local degenerate contribution (from the Milky Way halo and the host galaxy) and the intergalactic contribution to the excess DM of the 471 non-repeating main-lobe FRBs to be 131.2-158.3 and 302.7-275.6 pc cm^-3, respectively, for the NE2001 model, which corresponds to a median redshift for the main-lobe FRB sample of ∼0.3. These constraints are useful for population studies of FRBs, and in particular for constraining the location of the missing baryons.
§ INTRODUCTION
Fast radio bursts (FRBs) are energetic radio signals with millisecond durations <cit.>. Over 600 FRBs have been reported since the first discovery in 2007 <cit.>, and about fifty FRBs have been observed with repetition <cit.>. In addition, about two dozen FRBs have been interferometrically localized to their host galaxies, confirming their extragalactic origin <cit.>.
Nevertheless, the physical mechanism for FRBs is still mysterious <cit.>.
Since FRBs originate from extragalactic distances <cit.>, the fluence distribution gives a hint to understanding the distance distribution and hence the history of the progenitors <cit.>. The fluence distribution of the FRB population follows
N(>S)∝S^α,
where N(>S) is the number of FRBs above fluence S, and α is the power-law index. Thus, bright FRBs are important for constraining the power law at the bright end and for testing whether FRBs inhabit a Euclidean Universe, which requires α = -1.5 <cit.>. Recent results are consistent with this expectation <cit.>.
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) not only searches for FRBs <cit.>, but also directly measures its beam patterns with holography <cit.>. The First CHIME/FRB Fast Radio Burst Catalog <cit.>, hereafter Catalog 1, reported 474 non-repeating sources and 62 bursts from 18 repeating FRBs detected by CHIME/FRB. The majority of events were detected in the high-sensitivity, short-exposure main lobe, while 3 of the 474 non-repeating FRBs were detected in the low-sensitivity, long-exposure side-lobes. Since the field-of-view (FoV) of the side-lobes is much broader than that of the main lobe, the side-lobe sample is statistically apparently brighter, and hence likely closer, than the main-lobe sample <cit.>.
In this paper, we study the 10 far side-lobe FRBs reported in <cit.>, along with their flux calibration using holographic techniques. In Section <ref>, we use holography data from different sources to flux calibrate the far side-lobe events, finding that they are on average ∼500 times brighter than the main-lobe FRBs. The dispersion measure (DM) provides an upper bound on the distance of an FRB <cit.>. The median DM of the main-lobe non-repeating FRBs from CHIME/FRB, which will be discussed in Section <ref>, is ∼430 pc cm^-3, which yields an upper-bound redshift of ∼0.4 in a Euclidean Universe. In Euclidean geometry[If the flux ratio of an isotropically emitting source at two different distances is R, the distance ratio would be 1/√(R).], events that are ∼500 times brighter are statistically ∼20 times closer than farther ones <cit.>, and therefore the far side-lobe events are statistically ∼20 times closer than the main-lobe FRBs from CHIME/FRB. In Section <ref>, we compare the DM excesses of the side-lobe sample and the main-lobe sample from Catalog 1 to constrain the local (that is, the degenerate Milky Way halo and host galaxy) DM contributions of both samples and the intergalactic medium (IGM) contribution to the DMs of the main-lobe sample. We then discuss constraints on the intergalactic and local DM contributions for both the far side-lobe and main-lobe FRBs seen by CHIME/FRB. Finally, we summarize and suggest directions for future work in Section <ref>.
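As a small worked example of the Euclidean scaling quoted in the footnote above (the numbers are those used in this paper; the snippet is only an illustration):

import numpy as np

flux_ratio = 500.0                           # side-lobe events are ~500x brighter (this work)
print(f"~{np.sqrt(flux_ratio):.0f}x closer")     # Euclidean: distance ratio = sqrt(flux ratio) ~ 22
print(f"z_upper ~ {430.0 / 1000.0:.2f}")         # DM_IGM ~ z x 1000 pc cm^-3 with median excess ~430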
§ HOLOGRAPHIC CALIBRATION
In this Section, we use holography data from three continuum sources to flux calibrate the signal-to-noise ratio (S/N) of the far side-lobe FRBs.
§.§ The holography technique
The CHIME telescope is composed of four cylinders
with 256 dual-polarization feeds sampled in 1024 frequency channels spanning 400-800 MHz <cit.>. CHIME/FRB forms 1024 beams along the meridian, with four E-W columns and 256 N-S rows <cit.>.
Holographic techniques have been used to measure the beam shape of the CHIME telescope at the Dominion Radio Astrophysical Observatory (DRAO) <cit.>. Specifically, we track a bright source with the DRAO John A. Galt Telescope for at least four hours around its transit of CHIME <cit.>. A holographic visibility is obtained as the cross-correlation of the measured voltage of a CHIME feed with that of the Galt Telescope; as the equatorially mounted Galt Telescope's response is constant, the CHIME-26m visibility provides a measurement of the CHIME beam shape, for each feed <cit.>. Before they can be used for beam analysis, the holography data undergo several processing steps.
First, the raw visibilities are fringestopped to the location of the calibrator source, removing the interferometric phase associated with the geometric delay between the point source signals arriving at CHIME and the Galt Telescope. This step isolates the complex beam phase and allows for the data to be downsampled in hour angle without decohering due to the rapid fringing in the raw visibilities.
The Galt Telescope is connected to the CHIME correlator by 305 m of LMR-400 coaxial cable. This length of cable causes the signal from the Galt Telescope to arrive at the CHIME correlator with a large delay (in addition to the geometric delay) relative to the signals from the CHIME feeds. As this delay is significant compared to the integration time of a single correlation frame in the CHIME correlator (2.56 μs), the amplitude of the resulting visibilities is suppressed. Moreover, because the geometric delay between CHIME and the 26-m increases as the source transits overhead from east to west, the amount of signal loss varies (monotonically) with time, causing an asymmetry in the data. Using the specifications of the CHIME correlator and knowledge of the geometric delay, this asymmetric decorrelation effect (approximated as a linear function in hour angle) is calculated and corrected for.
Finally, the holography data are regridded, using an inverse Lanczos interpolation scheme, onto a pre-specified grid in hour angle spanning, in the case of this analysis, from -60 to +60 deg, at a resolution of about 0.1 deg.
There is a potential polarization leakage issue at lower declinations and at frequencies above 750 MHz that is still being investigated. Additionally, the band at 725-750 MHz is dominated by radio frequency interference (RFI). As such, we only use the holography data from three bright, high-declination calibrators, Cassiopeia A (CasA), 3C295, and Cygnus A (CygA), at 400-725 MHz for the following analysis. Their flux densities at 400-725 MHz range from tens (3C295) to thousands (CasA, CygA) of Jy <cit.>. We list their properties in Appendix Table <ref>. We normalize the raw visibility of each CHIME-26m baseline and each frequency at the meridian. To remove RFI, we cross-correlate the raw visibilities of the same calibrator in the same baseline and the same frequency channel from two different dates. We further mask frequency channels with persistent RFI, resulting in the raw squared visibility, i.e., the beam response of the CHIME feed in that holographic baseline. For each of the 32 12.5-MHz-wide frequency subbands, we take the median value of the beam response in the subband. We stack the data for the four cylinders together by taking the median value in each subband.
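A minimal sketch of this reduction step is given below (not the CHIME pipeline; the function name, array shapes, synthetic beam, and RFI mask are illustrative assumptions):

import numpy as np

def beam_response_subbands(vis_day1, vis_day2, rfi_mask, n_subbands=32):
    # Cross-multiplying fringestopped visibilities from two transits of the same
    # calibrator suppresses RFI that does not repeat between the two dates; the
    # result is then collapsed into subband medians.
    sq = np.real(vis_day1 * np.conj(vis_day2))
    sq[rfi_mask, :] = np.nan                   # mask channels with persistent RFI
    edges = np.linspace(0, sq.shape[0], n_subbands + 1).astype(int)
    return np.array([np.nanmedian(sq[lo:hi, :], axis=0)
                     for lo, hi in zip(edges[:-1], edges[1:])])

# Toy example with CHIME-like channelization (1024 channels) and hour-angle sampling.
rng = np.random.default_rng(2)
n_freq, n_ha = 1024, 1200
beam = np.exp(-0.5 * (np.linspace(-60, 60, n_ha) / 2.0) ** 2)   # fake main lobe vs. hour angle
vis1 = beam + 0.05 * rng.normal(size=(n_freq, n_ha))
vis2 = beam + 0.05 * rng.normal(size=(n_freq, n_ha))
mask = np.zeros(n_freq, dtype=bool)
mask[700:720] = True                                            # a strip of persistent RFI
print(beam_response_subbands(vis1, vis2, mask).shape)           # (32, 1200)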
§.§ The Beam Model
We use cylindrical coordinates in the following analysis:
x = sin(A)cos(a),
z = sin(a),
ρ = arctan(x/z),
ρ(f) = ρ × (frequency/800 MHz),
where A and a represent the time-dependent azimuth and altitude of the source, respectively. To remove the frequency dependence of the beam size, we convert ρ to ρ(f) by scaling to the top of the CHIME band, i.e., 800 MHz. We use the resulting beam response as a function of frequency and ρ(f) in the following analysis. Figure <ref> shows the beam response of the three calibrators. Figure <ref> shows the beam response averaged over 400-725 MHz, with the positions ρ(f) of the 10 far side-lobe FRBs reported in <cit.>.
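A small helper implementing the coordinate transformation above (a sketch; the example azimuth, altitude, and frequency are arbitrary):

import numpy as np

def rho_scaled(az_deg, alt_deg, freq_mhz):
    # E-W beam coordinate rho from azimuth/altitude, rescaled to 800 MHz,
    # following the relations above.
    az, alt = np.radians(az_deg), np.radians(alt_deg)
    x = np.sin(az) * np.cos(alt)
    z = np.sin(alt)
    rho = np.degrees(np.arctan2(x, z))        # arctan(x/z), quadrant-safe
    return rho * (freq_mhz / 800.0)

print(rho_scaled(120.0, 60.0, 600.0))         # ~19.9 deg for this arbitrary example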
We apply the singular value decomposition (SVD) techniques <cit.> to construct the beam model. We use the SVD to decompose the frequency-scaled beam-response into one eigenvalue and two eigenfunctions as
B_f,ρ(f) = ∑_n U_fn S_n V_nρ(f)^⊤,
where B_f,ρ(f) is the beam response in cylindrical coordinates, with ρ scaled by a factor of frequency/(800 MHz), and, for each mode n, U_fn is the eigenfunction in frequency f, S_n is the eigenvalue, and V_nρ(f)^⊤ is the transposed eigenfunction in ρ(f). Appendix Figure <ref> shows the SVD decomposition of the beam response of CygA.
We use the first two modes of the B_f,ρ(f) from CygA to reconstruct the beam model, which is shown in Figure <ref>. We compare the beam model with the B_f,ρ from CasA, 3C295, and CygA. Appendix Figure <ref> shows that the residuals (the absolute value of data/model) are generally around order of unity. In other words, the SVD-reconstructed beam of CygA is consistent with the holography data from the other two calibrators at high declination.
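A minimal sketch of the rank-2 SVD beam model on a synthetic frequency × ρ(f) beam-response matrix (the toy beam shape and noise level are assumptions; only the SVD call and two-mode reconstruction follow the decomposition above):

import numpy as np

def svd_beam_model(B, n_modes=2):
    # Keep the first n_modes of the SVD of the frequency x rho(f) beam-response matrix.
    U, S, Vt = np.linalg.svd(B, full_matrices=False)
    return (U[:, :n_modes] * S[:n_modes]) @ Vt[:n_modes, :]

# Synthetic beam response: a floor plus a separable frequency x rho(f) shape.
rng = np.random.default_rng(3)
freq = np.linspace(400.0, 725.0, 26)
rho = np.linspace(-60.0, 60.0, 241)
B = 0.05 + np.outer(1.0 + 0.1 * np.sin(freq / 40.0),
                    np.exp(-0.5 * (rho / 10.0) ** 2))
B += 1e-3 * rng.normal(size=B.shape)
model = svd_beam_model(B, n_modes=2)
print(f"median |data/model| = {np.median(np.abs(B / model)):.2f}")   # close to 1 for a good low-rank fit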
The measurements of beam response.
FRB Namea ρb 3cCalibratorsc Bd
(deg) CasA 3C295 CygA
20190125B -9.56 0.0108 0.0107 0.0137 0.0117
20190202B -7.96 0.0114 0.0115 0.0147 0.0125
20190210D 15.60 0.0066 0.0061 0.0063 0.0063
20191104B -17.05 0.0048 0.0047 0.0065 0.0053
20191201B -13.77 0.0083 0.0075 0.0111 0.0090
20191219E 5.67 0.0080 0.0087 0.0091 0.0086
20201105A 17.75 0.0044 0.0039 0.0043 0.0042
20201129A 10.81 0.0101 0.0097 0.0102 0.0100
20210310B 13.37 0.0090 0.0083 0.0090 0.0088
20210810A 16.32 0.0057 0.0052 0.0056 0.0055
aThe TNS name of the 10 far side-lobe events.
bThe best-fit position converted to cylindrical coordinates, which we define in Section <ref>
cThe beam response of the three calibrators at the modeled position, for which we take average over ρ(f) of the far side-lobe events.
dThe averaged beam response of the three calibrators at the modeled position of the far side-lobe events.
§.§ The on-axis S/N of far side-lobe events
We infer that the equivalent S/N of the far side-lobe events if viewed in the main-lobe is
S/N_on-axis = G/B(S/N_obs) = r(S/N_obs),
where S/N_on-axis is the S/N value converted to the main-lobe, G is a geometric factor in the range from 4 to 5 that will be discussed in the next paragraph, B is the averaged beam response given in Table <ref>, S/N_obs is the S/N reported by <cit.>, r is the ratio between the S/N_on-axis and the S/N_obs.
The dynamic spectrum of a far side-lobe event shows spikes <cit.>, which result from the product of the synthesised beam response <cit.> and the intrinsic spectral profile across frequencies <cit.>. The four E-W beams do not fully capture the flux from a far side-lobe event, as summing the dynamic spectra of the four E-W beams still shows the spikes. To understand how many E-W beams are required to fully capture the flux of a far side-lobe event, we treat the separation between the beams in the E-W direction as a geometric effect, in which the receivers only partially detect the flux from far side-lobe events. Since there are four beams in the E-W direction, we take the lower bound of the geometric factor G to be 4. To estimate the upper bound of G, we apply a Fourier transform to the frequency-scaled beam response, fit it with a Gaussian profile, and measure the full width at half maximum (FWHM) as 16.94 meters, as shown in Figure <ref>. Each cylinder has a width of 20 m and there is a 2 m gap between adjacent cylinders, so the total width of the four cylinders is 86 m; we therefore expect that 5 beams in the E-W direction are required to fully capture the flux of a side-lobe event, which sets the upper bound of G to 5. Hence, we constrain the geometrical factor G to be between 4 and 5.
We combine the S/N_bonsai values from Table <ref>, the beam responses B from Table <ref>, and the geometric factor G of 4-5, and show r in decibels[<https://en.wikipedia.org/wiki/Decibel>] (dB) in Table <ref>, where the error reflects the range of the geometric factor G.
The median S/N_on of the far side-lobe events is from 6853 to 8567, compared to a median S/N_bonsai for main-lobe events of 15.0 <cit.>. Thus, the far side-lobe events seen by CHIME/FRB are 456-571 times brighter than the main-lobe events.
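As a worked example of the conversion above (the event values are taken from the tables above, and the range simply reflects G between 4 and 5; this is an illustration, not an independent measurement):

import numpy as np

def on_axis_snr(snr_obs, beam_response, G=(4.0, 5.0)):
    # S/N_on-axis = (G / B) * S/N_obs for a range of geometric factors G.
    return np.asarray(G) / beam_response * snr_obs

# FRB 20190125B: S/N_obs = 13.2 and averaged beam response B = 0.0117 (tables above).
snr_range = on_axis_snr(13.2, 0.0117)
print(snr_range.round(0))                             # ~[4513, 5641]; tabulated central value is 5014
print((10.0 * np.log10(snr_range / 13.2)).round(1))   # ~[25.3, 26.3] dB, cf. 25.8+/-0.5 in the table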
§ CONSTRAINTS ON THE INTERGALACTIC AND DEGENERATE DM
In this section, we derive constraints on the DM components along the line-of-sight (LOS) by using the main-lobe and side-lobe samples from CHIME/FRB.
If the observation time is sufficiently long, an FRB survey will detect side-lobe events <cit.>. The median DM excess (i.e., the observed DM with the expected Milky Way disk contribution subtracted) of the non-repeating FRBs from the CHIME/FRB Catalog 1 is 433.9±23.6 and 426.7±25.7 pc cm^-3 <cit.> for the NE2001 and YMW16 models (see Section <ref>), respectively. Based on the relation DM_IGM ≃ z × 1000 pc cm^-3 <cit.>, which assumes a mean IGM density, the excess DM provides an upper-bound redshift of 0.43±0.02 and 0.43±0.03 for the NE2001 and YMW16 models, respectively, in a Euclidean Universe. As we will discuss in Section <ref>, brighter FRBs are statistically closer in a Euclidean Universe. We will then show that the main-lobe and side-lobe samples from the same telescope can be used to constrain the DM contributions along the LOS.
§.§ The distance of the far side-lobe sample
We argue that the apparently brighter sources are closer. As shown in the Appendix of <cit.>, the average distance of all the sources detected by instrument A with sensitivity S_m will be:
⟨ r⟩ =∫ r N(r) dr/∫ N(r) dr
=∫_0^√(E_max/4π S_m) r^3 ∫_4π r^2 S_m^E_max f(E) dE dr/∫_0^√(E_max/4π S_m) r^2 ∫_4π r^2 S_m^E_max f(E) dE dr
where N(r) is the number of bursts visible to A at a distance r; f(E) is the energy distribution of bursts detected from a single source; includes all the redshift-related evolution and selection effects, which we will discuss later; and E_max is the maximum energy of the bursts.
Assume detector B is K times less sensitive than detector A, and therefore can detect a minimum flux of S_m^'=S_m K.
Then we can write the average distance of the sources detected by B ⟨ r^'⟩ following Equation <ref> but substituting variable r with r^'=r√(K):
⟨ r^'⟩ =1/√(K)∫_0^√(E_max/4π S_m) r^'3∫_4π r^'2 S_m^E_max f(E) dE dr^'/∫_0^√(E_max/4π S_m) r^'2∫_4π r^'2 S_m^E_max f(E) dE dr^'
For a nearby Euclidean universe, with little change of distance-related selection effects for the two instruments A & B, =1, or in the case of ∝ r^n, this term will cancel in the numerator and denominator and hence
⟨ r^'⟩=⟨ r⟩/√(K).
The ratio of the average distance of the sources detected by detector A and B is proportional to the square root of the relative sensitivities of the two instruments and is independent of the detailed form of the luminosity function. Therefore
⟨ r^'⟩/⟨ r⟩ = √(S_m/S_m^')
Since CHIME's side-lobes are ∼500 times less sensitive than the main lobe, they will detect bursts that are on average ∼20 times closer than those detected in the main lobe. Evolution effects over redshift can complicate this simple relationship, but in Section <ref> we show that the currently known factors have little influence on our distance and DM estimates in the rest of the paper.
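A quick Monte Carlo check of this √K scaling, assuming for simplicity identical burst energies and a uniform Euclidean volume (the argument above shows the result is independent of the luminosity function; all numbers here are illustrative):

import numpy as np

rng = np.random.default_rng(4)
n, E, K = 4_000_000, 1.0, 500.0
r = rng.uniform(0.0, 1.0, n) ** (1.0 / 3.0)        # uniform in Euclidean volume (dN ~ r^2 dr)
flux = E / (4.0 * np.pi * r ** 2)                  # identical ("standard candle") energies
S_main = E / (4.0 * np.pi)                         # main-lobe threshold, reaching r = 1
mean_main = r[flux > S_main].mean()
mean_side = r[flux > K * S_main].mean()            # side lobe: K times less sensitive
print(f"<r>_main / <r>_side ~ {mean_main / mean_side:.1f} (sqrt(K) = {np.sqrt(K):.1f})")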
§.§ The decomposition of DM
The DM of an FRB accounts for the electron column density along the line-of-sight,
DM = DM_MW,disk + DM_exc,
where DM_MW,disk is the Milky Way contribution from the disk along the line-of-sight and DM_exc is the DM excess of the total DM over the Galactic DM given by the NE2001 or YMW16 model <cit.>. The DM excess is the sum of several components corresponding to different intervening media,
DM_exc = DM_MW,halo + DM_IGM + DM_host/(1+z),
= DM_degen + DM_IGM,
including contributions from the local Milky Way halo <cit.>, the IGM (DM_IGM), and the host at a redshift z <cit.>. As FRBs are of extragalactic origin, the distribution of DM_exc and the redshift z are critical to understanding the missing baryon problem <cit.>. Since DM_MW,halo and DM_host(z) are local and degenerate, we combine them into the degenerate DM (DM_degen).
§.§ The median DM of side-lobe and main-lobe FRBs.
The properties of far side-lobe FRBs. S/N_on and r are mentioned in Section <ref>; other values are adapted from Table 1 in <cit.>.
FRB Namea 2cS/Nb 2crc RA σ_RAd Dec σ_Dece 2cHAf DMg 2cDM_exch
obs on-axis ratio dB (HH:MM:SS) (^') (DD:MM:SS) (^') (deg) (hrs) (pc cm^-3) 2c(pc cm^-3)
J2000 J2000 total NE2001 YMW16
20190125B 13.2 5014 381^+45_-40 25.8±0.5 14:29:48 7 +49:37:54 6 12.3 0.82 177.9(1) 147.0 154.9
20190202B 20.2 7204 357^+42_-38 25.5±0.5 07:02:19 5 +31:57:30 7 8.6 0.57 464.8(1) 370.6 338.8
20190210D 20.6 14539 706^+84_-74 28.5±0.5 22:17:58 7 +52:53:20 6 -26.0 -1.73 359.3(1) 106.3 84.6
20191104B 14.9 12488 838^+99_-88 29.2±0.5 01:20:37 5 +26:42:05 7 18.1 1.20 192.2(1) 147.1 156.0
20191202A 14.1 7039 499^+59_-52 27.0±0.5 19:51:56 19 +70:49:07 8 42.1 2.81 117.9(1) 50.7 49.7
20191219E 11.1 5749 520^+62_-55 27.2±0.5 21:18:30 8 +55:50:48 7 -10.3 -0.69 736.7(1) 503.0 336.2
20201105A 10.7 11430 1064^+126_-112 30.3±0.5 02:42:18 5 +14:23:13 7 -15.6 -1.04 262.4(1) 218.9 226.0
20201129A 16.4 7317 447^+53_-47 26.5±0.5 07:52:17 7 +53:16:55 6 -17.8 -1.19 274.6(1) 219.6 221.8
20210310B 15.7 8000 510^+60_-54 27.1±0.5 13:42:21 5 +35:33:46 6 -16.7 -1.12 135.5(2) 110.6 115.1
20210810A 45.2 36735 813^+96_-85 29.1±0.5 15:17:55 5 +32:09:24 6 -18.9 -1.26 246.9(1) 223.4 223.3
aThe TNS name for FRBs.
bS/N_obs is the observed S/N reported by , the real-time detection pipeline. S/N_on-axis is the S/N value converted to the main lobe.
cThe ratio between S/N_on-axis and S/N_obs, which we describe in Section <ref>. It is presented as both a dimensionless ratio and in dB, where e.g. 30 and 40 dB correspond to 1000 and 10000 in terms of the linear ratio, respectively. The central value corresponds to the logarithmic mean of the geometrical factor G (i.e., 4.47), and the upper and lower values correspond to the upper (5) and lower (4) bounds of G, respectively.
d,e The 90%-confidence uncertainty in units of minutes of arc on the sky reported from <cit.>.
fThe hour angle of the side-lobe event, which is relative to the meridian.
gThe total DM of the side-lobe events, which is reported by offline algorithms via maximization of the S/N of the burst <cit.>.
hThe last two columns show the DM excess after subtracting the Galactic contribution from the NE2001 and YMW16 models <cit.>, respectively.
In the first CHIME/FRB catalog, there are 474 non-repeating events, including three far side-lobe events <cit.>. We use the cataloged DM_exc values, derived by fitburst, of the 471 non-repeating and main-lobe events (hereafter: main-lobe FRBs) for the following analysis.
The 10 far side-lobe FRBs generally have lower DMs than the 471 main-lobe FRBs, the median main-lobe DM_exc being 433.9±23.6 and 426.7±25.7 pc cm^-3 <cit.> for the NE2001 and YMW16 models, respectively. We use the bootstrap method with replacement, with 10^5 repetitions, to measure the 68% confidence interval on the median of each sample.
We list DM_exc values and their uncertainties given each of the two Galactic electron density models in Table <ref>.
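A minimal sketch of this bootstrap estimate of the 68% confidence interval on the median side-lobe DM_exc, using the NE2001 values from the table above (the random seed and the 16th/84th-percentile convention are assumptions of this sketch):

import numpy as np

# NE2001 DM_exc of the 10 far side-lobe FRBs (pc cm^-3), from the table above.
dm_side = np.array([147.0, 370.6, 106.3, 147.1, 50.7,
                    503.0, 218.9, 219.6, 110.6, 223.4])
rng = np.random.default_rng(5)
medians = np.median(rng.choice(dm_side, size=(100_000, dm_side.size), replace=True), axis=1)
lo, hi = np.percentile(medians, [16, 84])
print(f"median = {np.median(dm_side):.1f}, 68% interval ~ [{lo:.1f}, {hi:.1f}] pc cm^-3")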
The median value of DM_exc.
Modela DM_exc, side-lobeb DM_exc, main-lobec
(pc cm^-3) (pc cm^-3)
NE2001 183.0±43.6 433.9±23.6
YMW16 188.9±40.2 426.7±25.7
aWe subtract the DM given by the NE2001 and YMW16 models from the total DM.
bThe median DM_exc value and the the 68% confidence interval of the 10 far side-lobe FRBs.
cThe median DM_exc value and the the 68% confidence interval of the 471 main-lobe FRBs.
§.§ The intergalactic medium DM
As shown in Equation <ref>, we decompose DM_exc into two components. First, there are contributions from the Milky Way halo and the host, which are degenerate and therefore we call the degenerate DM (DM_degen). Second, there is the intergalactic medium DM (DM_IGM). We include a distance scaling factor of R to distinguish the DM_IGM contribution between the far side-lobe and main-lobe FRBs. We assume the same DM_degen for both populations. Furthermore, we assume that DM_host is the same in the nearby and distant Universes. Note that if there were two populations of FRBs, with different luminosity functions, the young and old population could have on average different DM_host.
DM_exc, main-lobe = DM_degen + DM_IGM
= DM_MW,halo + DM_host/(1+z) + DM_IGM,
DM_exc, side-lobe = DM_degen + DM_IGM/R
= DM_MW,halo + DM_host/(1+z)+ DM_IGM/R
≈ DM_MW,halo + DM_host/(1+z).
In Equation <ref>, since R is much larger than 1, the implication is that the DM_IGM of DM_exc, side-lobe is negligible and hence DM_exc, side-lobe is approximately equal to DM_degen.
As we discussed in Section <ref>, the far side-lobe events are about 21-23 times closer than the main-lobe events. Hence, R is equal to 21-23. Since we assume that DM_IGM is linearly proportional to the distance, the DM_IGM values of the far side-lobe events are therefore 21-23 times lower than those of the main-lobe events. Note that the ∼10% uncertainty in the distance difference only makes a ∼0.5% systematic error in the DM_IGM estimation, which we will discuss in Section <ref>.
To probe the effect of the degeneracy between redshift-dependent DM_host and redshift-independent DM_MW,halo, we start by considering two extreme scenarios where we assume that either DM_host or DM_MW,halo is zero. Under these assumptions, equations <ref> and <ref> are further simplified as follows:
* Scenario 1: assume DM_host=0
DM_exc, main-lobe = DM_MW,halo + DM_IGM
≈ DM_exc, side-lobe+ DM_IGM,
DM_exc, side-lobe ≈ DM_MW,halo.
* Scenario 2: assume DM_MW,halo=0 and z=0.3 for the host
DM_exc, main-lobe = DM_host/(1+z) + DM_IGM
≈ kDM_exc, side-lobe+ DM_IGM.
DM_exc, side-lobe ≈ DM_host,
where we have introduced the redshift factor k=1/(1+z). Hence, the two scenarios can both be expressed by Equation <ref>, with k=1 for Scenario 1 and a redshift-dependent k of 1/1.3 for Scenario 2, as shown in Table <ref>.
In the following paragraphs, we discuss the measurement of DM_IGM and DM_degen. Since the two scenarios are extreme cases, eventually we present Scenario 3, which is a linear combination of Scenarios 1 and 2.
To measure DM_IGM for the above scenarios under the assumption that they are related by Equation <ref>, we perform the Kolmogorov–Smirnov (K-S) test[The Anderson–Darling (A-D) test and the Kolmogorov–Smirnov (K-S) test are broadly used to probe whether two samples are underlying the same statistical distribution. The former one involves summing the difference of the two cumulative distribution functions (CDFs) among all samples, while the latter one only considers the maximal different CDFs between two samples. From the perspective of robustness, the K-S test is more appropriate for an analysis with median values, and the A-D test is more appropriate for an analysis with mean values. The mean value is sensitive to outliers, while the median is not sensitive for those outliers. Since we only have 10 far side-lobe samples, we are using the median in the analysis to avoid the outlier scenarios.] to compare two samples: the measured DM_exc of the 471 main-lobe FRBs, and the DM_exc of the 10 side-lobe FRBs, scaled by the redshift factor k, plus a set of trial DM_IGM values. These values are drawn from
DM_IGM = V^1/3 DM_max; V∈ [0,1],
where the DM_max values are evenly spaced in steps of 0.1 pc cm^-3 from 0 to 1000 pc cm^-3. This range of DM_max should properly cover the range of DM_IGM, as the median DM_exc, main-lobe is 433.9 and 426.7 pc cm^-3 for the NE2001 and YMW16 models, respectively. V^1/3 represents the volume term in the Euclidean distribution <cit.>, and V is a set of 10 random values between 0 and 1, one drawn for each of the side-lobe events. The same set of 10 values of V is used at each step of DM_max. These values are used to introduce additional variance in the distance distribution of the statistically small side-lobe sample.
As we discussed in Section <ref>, the difference between the median DM_exc of the 10 far side-lobe FRBs and that of the main-lobe FRBs is a few hundred pc cm^-3. Hence, we use DM_max as a free parameter to probe the difference between the DM_exc of the main-lobe sample and k times the DM_exc of the side-lobe sample.
Here we show the steps of the analysis for Scenario 1 (DM_host=0, k=1). Figure <ref> shows the CDF comparison between the DM of the main-lobe FRBs and the DM of the 10 far side-lobe FRBs with the additional DM_IGM values (i.e., using Equation <ref> with the same 10 random V values for various values of DM_max). For each step of DM_max, we measure dCDF as the difference between the CDF of the DM_max-shifted DM_exc, side-lobe distribution and that of the DM_exc, main-lobe distribution. We perform the K-S test[Since the CDF is monotonic, one needs to take the maximal CDF differences across all bins instead of just at the bin edge. Thus, we compare two CDF values and find the maximal dCDF value for each side-lobe FRB, one for |CDF(DM)|, another for |CDF(DM)+1/10| as there are 10 side-lobe FRBs, where CDF(DM) is the CDF value at the given DM of the side-lobe FRB.] to determine the best-fit DM_IGM. Figure <ref> shows that we measure the maximum value of |dCDF| and |dCDF + 1/10| at each step of DM_max, which results in the K-S curve; we determine the best-fit DM_IGM where the K-S curve reaches its minimum, which yields p-values of 0.347 and 0.297 for the NE2001 and YMW16 models, respectively. In Table <ref>, we list the resulting expectation values of DM_IGM for the NE2001 and the YMW16 models: ⟨DM⟩_IGM, NE2001 = 260.3 and ⟨DM⟩_IGM, YMW16 = 266.0 pc cm^-3 for Scenario 1.
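The scan over DM_max can be sketched as follows. This is an illustration only: scipy's two-sample K-S statistic stands in for the statistic described in the footnote, the grid step is coarsened to 1 pc cm^-3 for speed (the paper uses 0.1 pc cm^-3), and a log-normal sample replaces the 471 catalog main-lobe DM_exc values, so the printed numbers will not reproduce the values quoted above:

import numpy as np
from scipy.stats import ks_2samp

def best_fit_dm_igm(dm_exc_main, dm_exc_side, k=1.0, seed=6):
    # Scan DM_max, shift the k-scaled side-lobe DM_exc by V^(1/3) * DM_max with a
    # fixed set of V draws, and keep the shift that minimizes the K-S statistic.
    rng = np.random.default_rng(seed)
    V = rng.uniform(0.0, 1.0, dm_exc_side.size)          # same draws at every step
    grid = np.arange(0.0, 1001.0, 1.0)
    stats = [ks_2samp(k * dm_exc_side + V ** (1.0 / 3.0) * d, dm_exc_main).statistic
             for d in grid]
    best = grid[int(np.argmin(stats))]
    return best, np.mean(V ** (1.0 / 3.0) * best)        # best DM_max and <DM_IGM>

dm_side = np.array([147.0, 370.6, 106.3, 147.1, 50.7,
                    503.0, 218.9, 219.6, 110.6, 223.4])  # NE2001, table above
rng = np.random.default_rng(7)
dm_main = rng.lognormal(np.log(430.0), 0.6, 471)         # stand-in for the 471 catalog values
best, mean_igm = best_fit_dm_igm(dm_main, dm_side, k=1.0)
print(f"best DM_max ~ {best:.0f}, <DM_IGM> ~ {mean_igm:.0f} pc cm^-3")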
We follow the same procedure for Scenario 2 (i.e., k=1/(1+z)) for the redshift values z=0.2, 0.3, 0.4, and 0.5. All four cases result in DM_IGM∼300 pc cm^-3 (DM_IGM only changes by ∼10 pc cm^-3 between z=0.3 and the three other cases). Only the z=0.3 case satisfies the relation DM_IGM ≃ z × 1000 pc cm^-3 <cit.>, and is thus self-consistent. We therefore present only the z=0.3 (k=1/1.3) case in Table <ref>.
Scenario 1 results in a high DM_MW,halo, ∼160-170 pc cm^-3, that is not borne out by current limits <cit.>, while evidence from other tracers of halo gas disfavours the DM_MW,halo=0 assumption of Scenario 2 <cit.>.
Here we propose Scenario 3, which is a linear combination of Scenarios 1 and 2. <cit.> suggests that the upper limit of DM_MW,halo is in the range 52-111 pc cm^-3. We therefore consider 0-111 pc cm^-3 for the range of DM_MW,halo, where the lower bound corresponds to Scenario 2 (i.e., DM_MW,halo contributes nothing to DM_degen). This range corresponds to 0-64% of the Scenario 1 DM_degen (i.e., 173.6 pc cm^-3 for the NE2001 model, in which DM_degen is contributed entirely by DM_MW,halo), with the remaining 100-36% contributed by DM_host (i.e., Scenario 2, in which DM_degen is contributed entirely by DM_host), as given by Equation <ref>. Using a linear combination over this range of ratios, we find DM_degen = 131.2-158.3 and DM_IGM = 302.7-275.6 pc cm^-3 for the NE2001 model in Scenario 3a. We follow the same procedure for the YMW16 model (i.e., Scenario 3b) and find DM_degen = 104.9-143.4 and DM_IGM = 321.8-283.3 pc cm^-3. Note that the DM_IGM difference between Scenario 2 and Scenarios 3a and 3b is less than 1σ, based on the uncertainty that we discuss in Section <ref>.
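The quoted Scenario 3a ranges follow from a simple linear combination of the Scenario 1 and 2 results; a short arithmetic check (NE2001 numbers from the table above):

import numpy as np

dm_degen_s1, dm_degen_s2 = 173.6, 131.2        # Scenario 1 and 2, NE2001 (table above)
dm_exc_main = 433.9                            # median main-lobe DM_exc, NE2001
f = np.array([0.0, 111.0 / dm_degen_s1])       # DM_MW,halo from 0 to 111 pc cm^-3 -> 0-64%
dm_degen = f * dm_degen_s1 + (1.0 - f) * dm_degen_s2
print(dm_degen.round(1))                       # [131.2 158.3]
print((dm_exc_main - dm_degen).round(1))       # [302.7 275.6]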
For the NE2001 model in Scenario 3a, the expectation value of DM_IGM is 302.7-275.6 pc cm^-3. Based on the relation DM_IGM≃z 1000 pc cm^-3<cit.>, the median z of the main-lobe samples of CHIME/FRB Catalog 1 is ≃ 0.30-0.28. Though z=0.3 was an input into our analysis through the k=1/(1+z) scaling, the resulting DM_IGM∼300 pc cm^-3 was found for z=0.2, 0.4, 0.5 as well, so it is somewhat independent of this assumption. Since the median DM_exc of the main-lobe sample is 433.9 pc cm^-3 (see Table <ref>), the IGM must statistically contribute ∼(302.7-275.6)/433.9=70-64% of DM_exc for the main-lobe sample.
The best-fit DM_IGM and the derived DM_degen values for different cases.
scenarios 2cassumptions of DM_degena k factorb 2c⟨DM⟩_IGMc 2c⟨DM⟩_degend
DM_MW,halo DM_host NE2001 YMW16 NE2001 YMW16
(pc cm^-3) (pc cm^-3) (pc cm^-3) (pc cm^-3)
1 100% 0 % 1 260.3±40.1^+0.6_-0.5 266.0±30.8^+0.6_-0.5 173.6±49.4 160.7±45.5
2 0% 100% at z=0.3 1/1.3 302.7±31.1^+0.6_-0.5 321.8±23.4^+0.6_-0.5 131.2±49.4 104.9±45.5
3ae 0-64% 100-36% at z=0.3 - 302.7-275.6 - 131.2-158.3 -
3bf 0-69% 100-31% at z=0.3 - - 321.8-283.3 - 104.9-143.4
aThe three different scenarios that we discussed as assumptions in Section <ref>.
bThe k factor is defined as 1/(1+z), where z is the redshift of the host.
cFor Scenario 1 and 2, the best-fit DM_IGM from the K-S test that we discuss in Section <ref>. The statistical and systematic uncertainties are discussed in Section <ref>. Note that the DM_IGM values of the two Galactic electron density models are comparable within their errors. In Table <ref> and applying Equation (<ref>), we find that DM_IGM = DM_exc, main-lobe - DM_exc, side-lobe = ∼250±25 and ∼240±24 pc cm^-3 for the NE2001 and YMW16 models, respectively. The corresponding values of DM_IGM for Scenario 1 in Table <ref> are derived from the K-S test rather than directly from Equation <ref>, and the resulting NE2001 and YMW16 values are consistent within their uncertainties. For Scenario 3a and 3b, we give the range corresponding to the bounds of the assumptions of DM_degen.
dThe derived DM_degen that we discuss in Section <ref>. The uncertainty for Scenario 1 and 2 is discussed in Section <ref>.
eFor the NE2001 model, we consider Scenario 3a as a combination of 0-64% Scenario 1 and 100-36% Scenario 2.
eFor the YMW16 model, we consider Scenario 3b as a combination of 0-69% Scenario 1 and 100-31% Scenario 2.
§.§ Estimating the uncertainty in ⟨DM⟩_degen and ⟨DM⟩_IGM
Since we do not know the probability density distributions of the DM_degen and DM_IGM, we apply the bootstrap method, a non-parametric approach of probing the statistical distribution, to determine their statistical uncertainties <cit.>.
We apply the bootstrap method with 10^5 repetitions to estimate the statistical uncertainties of ⟨DM⟩_degen and ⟨DM⟩_IGM. For each repetition, the steps are
* define dm0 array: We randomly choose, with replacement, 10 values from the 10 DM_exc, side-lobe values,
* define dm1 array: To simulate 471 values of DM_exc, side-lobe, we randomly choose 471 values from dm0, because there are 471 samples in the DM_exc, main-lobe,
* define dm2 array: To simulate 471 values of DM_IGM, we subtract k×dm1 (where k is the redshift factor as discussed in Section <ref>) from the 471 values of DM_exc, main-lobe.
Finally, we measure the median value of the dm1 and dm2 arrays at each iteration. We measure the standard deviation of the median distribution of the dm1 array for DM_degen and the dm2 array for DM_IGM. For Scenario 3a with the NE2001 model, the 68% confidence interval of DM_degen and DM_IGM is 35.0 and 25.0 pc cm^-3, respectively. For the other Scenarios that we discussed in Section <ref>, we summarize the expectation values and 68% confidence intervals of ⟨DM⟩_degen and ⟨DM⟩_IGM in Table <ref>.
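A minimal sketch of this three-step bootstrap for Scenario 2 (k=1/1.3) follows, again using a synthetic stand-in for the 471 main-lobe DM_exc values, so the printed spreads only illustrate the procedure; the number of repetitions is also reduced from 10^5 for speed:

import numpy as np

rng = np.random.default_rng(8)
k = 1.0 / 1.3                                            # Scenario 2 redshift factor
dm_side = np.array([147.0, 370.6, 106.3, 147.1, 50.7,
                    503.0, 218.9, 219.6, 110.6, 223.4])  # NE2001, table above
dm_main = rng.lognormal(np.log(430.0), 0.6, 471)         # stand-in for the catalog values

n_rep = 20_000                                           # the paper uses 10^5 repetitions
med_degen, med_igm = np.empty(n_rep), np.empty(n_rep)
for i in range(n_rep):
    dm0 = rng.choice(dm_side, 10, replace=True)          # step 1: resample the 10 side-lobe DMs
    dm1 = rng.choice(dm0, 471, replace=True)             # step 2: simulate 471 DM_exc,side-lobe
    dm2 = dm_main - k * dm1                              # step 3: simulate 471 DM_IGM
    med_degen[i], med_igm[i] = np.median(dm1), np.median(dm2)
print(f"sigma(median dm1) ~ {med_degen.std():.1f}, sigma(median dm2) ~ {med_igm.std():.1f} pc cm^-3")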
The statistical error of ⟨DM⟩_degen is larger than that of ⟨DM⟩_IGM due to the limited sample of 10 side-lobe FRBs compared to the 471 main-lobe FRBs. In addition, the 10 far side-lobe FRBs have a bimodal distribution of DM_exc, in which two events (20190202B and 20191219E) have much larger DM_exc than the others.
We analyze the systematic error of ⟨DM⟩_IGM, which is caused by the uncertainty of the distance scaling factor R (i.e., 21-23) in Equation <ref>. According to Equations <ref> and <ref>,
DM_exc, main-lobe - DM_exc, side-lobe = (1 - 1/R) DM_IGM,
DM_IGM(R) = R/(R-1) × (DM_exc, main-lobe - DM_exc, side-lobe).
We apply values of R of 21, 22, and 23 with the median values of DM_exc, main-lobe and DM_exc, side-lobe as shown in Table <ref>, and measure the corresponding DM_IGM(R) to estimate the systematic errors as shown in Table <ref>. We find that the systematic error of ⟨DM⟩_IGM is only ∼0.5 pc cm^-3, i.e., less than 1% of the expectation value. The systematic error of ⟨DM⟩_degen is of the same order as the systematic error of ⟨DM⟩_IGM shown in Table <ref>, which is much smaller than the statistical error of ⟨DM⟩_degen and hence is negligible.
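A short arithmetic check of this systematic error, using the NE2001 medians from the table above (an illustration of the equation above, not an independent result):

import numpy as np

dm_main, dm_side = 433.9, 183.0                # median DM_exc (NE2001), table above
R = np.array([21.0, 22.0, 23.0])
dm_igm = R / (R - 1.0) * (dm_main - dm_side)   # equation above
print(dm_igm.round(1))                         # [263.4 262.9 262.3]
print(f"systematic ~ +{dm_igm.max() - dm_igm[1]:.1f}/-{dm_igm[1] - dm_igm.min():.1f} pc cm^-3")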
§.§ The degenerate DM
We now discuss the constraints on DM_degen. We measure DM_degen from the difference between DM_exc and DM_IGM (Equation <ref>), and its error from the dm1 array as outlined in Section <ref>.
For Scenario 3a and 3b, the corresponding expectation values of DM_degen for the NE2001 and YMW16 models are ⟨DM⟩_degen, NE2001 = 131.2-158.3 and ⟨DM⟩_degen, YMW16 = 104.9-143.4 pc cm^-3, respectively.
<cit.> proposed a log-normal distribution for DM_host, and <cit.> used the joint rate distribution of fluence and DM to predict the DM-S/N distribution in the CHIME/FRB Catalog 1. Under the assumption that the Milky Way DM (DM_MW, disk + DM_MW, halo) is 80 pc cm^-3, the model depends on the following parameters: the volumetric rate of FRBs in the comoving frame (ϕ_0), the characteristic exponential cutoff energy (E_char), the differential power-law index (γ), the spectral index (α), whether the FRB population's redshift evolution follows the star-formation history (n), and the expected value and standard deviation of log(DM_host×(1+z)), μ_host and σ_host. Through a Markov Chain Monte Carlo (MCMC) analysis, the best-fit median and standard deviation of DM_host found by <cit.> were 84^+69_-49 pc cm^-3 and 174^+319_-128 pc cm^-3, respectively. This is consistent with our measurement that ⟨DM⟩_degen = ⟨DM⟩_MW, halo + ⟨DM⟩_host = 131.2-158.3 and 104.9-143.4 pc cm^-3 (for Scenarios 3a and 3b), as shown in Table <ref>.
The DM_degen of at least some repeating FRBs is substantially dominated by the host. For instance, in addition to the halo contribution, the inferred DM_host of FRB 20190520B is 903^+72_-111 out of a DM_exc of ∼1200 pc cm^-3, and the inferred DM_host of FRB 20121102A is between 55 and 255 out of a DM_exc of ∼560 pc cm^-3 <cit.>. For the NE2001 model, our results show that DM_degen (131.2-158.3 pc cm^-3, considering Scenario 3a) accounts for roughly 72-87% of the DM_exc of the side-lobe non-repeating FRBs (183.0 pc cm^-3) and 30-36% of that of the main-lobe non-repeating FRBs (433.9 pc cm^-3). Thus, DM_degen makes a significant contribution to the excess DM for both repeating and non-repeating FRBs.
§.§ The influence of redshift evolution
Redshift and redshift-related evolution effects can influence our estimates of the distance of FRBs detected with different sensitivity thresholds (Equation <ref>). Here we show that the currently known factors have little influence on our distance and DM estimates.
In Equation <ref>, we use to describe the effects of redshift and redshift-related evolution. One common way to parameterize is =(1+z(r))^n, where z is the redshift. For the nearby universe, we can approximate z=rH/c, where H is the Hubble constant and c is the speed of light. For z≪1, ≈1 and for z≫1, ≈ z^n.
For the majority of CHIME-detected FRBs, z should be between 0 and 1 <cit.>, and the side-lobe detections, z≪1. Since is less than for n>0, </√(K). Therefore, the effect of redshift evolution is to make the side-lobe events closer than our estimation. When n<0, on the other hand, the average distance of the side-lobe events will be farther away than our estimates from the sensitivity ratio.
There are several factors that can influence the value of n. The redshifts of the bursts will lead to a term Φ_z (r)=(1+z)^-2-α, where α is the spectral index in F∝ν^α. In <cit.>, the best-fit α from the CHIME main lobe detection is -1.4^+0.9_-1.2, and in this case this term will have small redshift dependence and only influence our estimate a little.
Additionally, if the burst rate follows the star-formation rate, there will be an additional term Φ_SF≈ (1+z)^2.7 for z<1. In this case, as discussed above, the side-lobe events would be closer than our estimates. In Section <ref>, we use a statistical approach to estimate the contribution to the DM from the FRB host galaxy. Making the side-lobe events closer will not affect these estimates of host DMs, because side-lobe events have negligible intergalactic medium (IGM) DM (see Section <ref>). There are other redshift-related uncertainties that can be included in . For example, when there are multiple populations of FRBs with different redshift dependencies, E_max can also vary with redshift. We defer this discussion until we have more clues about this scenario.
Selection effects can also influence our statistical DM estimates. For example, as shown in <cit.>, CHIME/FRB will miss twice as many events having DM ≃ 100 pc cm^-3 compared with DM ≃ 500 pc cm^-3. This can lead to the observed far side-lobe events biased towards larger DM, and hence larger distance. However, the change of DM selection function is smooth near the observed DM range of side-lobe events, therefore its influence on is not big.
On the other hand, the selection effects on the burst width are large, with bursts longer than 10 ms rarely detected. For specific scenarios, such as FRBs scattered by foreground galactic halos, the scattering time of bursts can be strongly related to their distances, resulting in a large fraction of bursts with z>1 going undetected. The estimate of from the observed events will then be much lower than the actual value, and this can result in a substantial underestimate of the DM of the side-lobe events. However, in this halo-scattering scenario, there would be a strong correlation between the observed DM and the scattering time, which is not seen in current FRB samples <cit.>. Therefore, we do not expect this to significantly affect our results.
As discussed in Section <ref>, the far side-lobe events are 456-571 times brighter than the main-lobe events, and hence correspondingly 21-23 times closer. As we discussed in Section <ref>, the ∼10% uncertainty on this distance ratio only makes a ∼0.2% systematic error in the DM contribution from the intergalactic medium.
§ CONCLUSIONS
We study 10 far side-lobe Fast Radio Bursts (FRBs) detected by CHIME/FRB, as reported in <cit.>. We flux calibrate the far side-lobe events with holography data, which shows that the far side-lobe events are ∼500 times brighter, and thus ∼20 times closer, than the main-lobe events presented in CHIME/FRB's first catalog. We perform a K-S test on the DM excess (DM_exc) of the far side-lobe and main-lobe FRBs, considering different scenarios for DM_MW,halo and DM_host. In the scenario in which DM_degen, the combined DM contribution of the Milky Way halo and the host galaxy, is a linear combination of 0-64% DM_MW,halo and 100-36% DM_host at z=0.3 (i.e., Scenario 3a for the NE2001 model in Section <ref> and Table <ref>), we find that the DM contributions from the degenerate Milky Way halo plus FRB host components (DM_degen) and from the IGM (DM_IGM) for the main-lobe events with the NE2001 model are ⟨DM⟩_degen, NE2001 = 131.2-158.3 and ⟨DM⟩_IGM, NE2001 = 302.7-275.6 pc cm^-3, respectively. A larger sample of nearby FRBs may be useful for disentangling DM_MW, halo and DM_host. The constraints on the DM distribution are key to understanding the FRB population, and especially to constraining the distribution of the missing baryons <cit.>.
Since the side-lobe sample is statistically from the nearby Universe, these events are extremely interesting candidates for follow-up. The localization of the progenitors of FRBs is key to understanding their central engine <cit.>. The upcoming CHIME/FRB outrigger stations, with milliarcsecond localization precision, will be able to pinpoint future far side-lobe events to hosts in the nearby Universe <cit.>. In addition, the next-generation FRB survey BURSTT will combine a large FoV (∼10^4 deg^2) with VLBI capability, which is conceptually similar to CHIME/FRB's side-lobe capability <cit.>. Hence, CHIME/FRB with outriggers and BURSTT may detect and localize a large sample of apparently bright FRBs in the nearby Universe and shed light on their origin.
Multiwavelength observations of such bright and nearby FRBs will be very interesting in the future, since different models predict multiwavelength counterparts <cit.>. For instance, X-ray and gamma-ray counterparts were detected simultaneously with the radio burst from SGR 1935+2154 in 2020 <cit.>. Since the majority of far side-lobe events statistically come from the nearby Universe, multiwavelength observations will provide more opportunities to detect counterparts <cit.>, which will open a new window for understanding the nature of FRBs.
We acknowledge that CHIME is located on the traditional, ancestral, and unceded territory of the syilx/Okanagan people. We thank the Dominion Radio Astrophysical Observatory, operated by the National Research Council Canada, for gracious hospitality and expertise. CHIME is funded by a grant from the Canada Foundation for Innovation (CFI) 2012 Leading Edge Fund (Project 31170) and by contributions from the provinces of British Columbia, Quebec and Ontario. The CHIME/FRB Project is funded by a grant from the CFI 2015 Innovation Fund (Project 33213) and by contributions from the provinces of British Columbia and Quebec, and by the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto. Additional support was provided by the Canadian Institute for Advanced Research (CIFAR), McGill University and the McGill Space Institute via the Trottier Family Foundation, and the University of British Columbia. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. FRB research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. FRB research at WVU is supported by an NSF grant (2006548, 2018490). Computations were performed on the Niagara and Cedar supercomputers at the SciNet HPC Consortium <cit.>. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; the Ontario Research Fund - Research Excellence; and the University of Toronto.
P.S. is a Dunlap Fellow.
Ue-Li Pen receives support from the Ontario Research Fund—Research Excellence Program (ORF-RE), the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference numbers RGPIN-2019-067, CRD 523638-18, 555585-20], the Canadian Institute for Advanced Research (CIFAR), the Canadian Foundation for Innovation (CFI), the National Science Foundation of China (Grant No. 11929301), Thoth Technology Inc, the Alexander von Humboldt Foundation, and the Ministry of Science and Technology (MOST) of Taiwan (110-2112-M-001-071-MY3). Computations were performed on the SOSCIP Consortium's [Blue Gene/Q, Cloud Data Analytics, Agile and/or Large Memory System] computing platform(s). SOSCIP is funded by the Federal Economic Development Agency of Southern Ontario, the Province of Ontario, IBM Canada Ltd., Ontario Centres of Excellence, Mitacs and 15 Ontario academic member institutions.
L.B.N. is supported by NSF Grant No. 2006911.
M.B. is a McWilliams Fellow and an International Astronomical Association Gruber fellow.
B. C. A. is supported by an FRQNT Doctoral Research Award.
A.M.C. was supported by an Ontario Graduate Scholarship.
A.P.C. is a Vanier Canada Graduate Scholar.
M.D. is supported by a CRC Chair, NSERC Discovery Grant, CIFAR, and by the FRQNT Centre de Recherche en Astrophysique du Québec (CRAQ).
F.A.D is funded by the U.B.C Four Year Fellowship.
B.M.G. is supported by an NSERC Discovery Grant (RGPIN-2022-03163), and by the Canada Research Chairs (CRC) program.
A.S.H. is supported by an NSERC Discovery Grant.
V.M.K. holds the Lorne Trottier Chair in Astrophysics & Cosmology, a Distinguished James McGill Professorship, and receives support from an NSERC Discovery grant (RGPIN 228738-13), and from the FRQNT CRAQ.
C.L. was supported by the U.S. Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
K.W.M. holds the Adam J. Burgasser Chair in Astrophysics and is supported by an NSF Grant (2008031).
A.O. is supported by the Dunlap Institute.
A.B.P. is a Banting Fellow, a McGill Space Institute (MSI) Fellow, and a Fonds de Recherche du Quebec – Nature et Technologies (FRQNT) postdoctoral fellow.
Z.P. is a Dunlap Fellow.
The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. S.M.R. is a CIFAR Fellow and is supported by the NSF Physics Frontiers Center award 2020265.
K.R.S acknowledges support from FRQNT Doctoral Research Award.
K.S. is supported by the NSF Graduate Research Fellowship Program.
FRB research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. The CHIME/FRB baseband system is funded in part by a CFI JELF award to I.H.S.
S.P.T. is a CIFAR Azrieli Global Scholar in the Gravity and Extreme Universe Program.
CHIME, DRAO-26 m
astropy <cit.>,
numpy <cit.>,
matplotlib <cit.>
§ THE CALIBRATORS FOR THE HOLOGRAPHY ANALYSIS
Appendix Table <ref> shows the calibrators for the holography analysis in Section <ref>.
The calibrators for the holography analysis.
Source RA^a DEC^b S(ν=725 MHz)^c S(ν=600 MHz)^d S(ν=400 MHz)^e Date 1 Date 2
CasA 23h23m27.94s +58d48m42.4s 2903.4±6.0 3343.8±6.8 4534.2±9.6 2018 Sep 29 2019 Dec 13
3C295 14h11m20.519s +52d12m09.97s 37.3±0.1 42.3±0.1 54.2±0.1 2019 Mar 02 2019 Mar 18
CygA 19h59m28.3566s +40d44m02.096s 3053.2±6.5 3626.1±8.1 5103.5±16.3 2018 Oct 17 2018 Oct 23
^a-b The right ascension and the declination of the calibrator (J2000), taken from the NASA/IPAC Extragalactic Database <cit.>.
^c-e The flux density (Jy) at 725, 600, and 400 MHz, inferred with the polynomial function and parameters in <cit.>.
§ THE SVD ANALYSIS OF BEAM MODEL
Appendix Figure <ref> shows the SVD decomposition of the holography data B_f,ρ(f) of CygA, from which we use the first two modes to reconstruct the beam model shown in Figure <ref>. In addition, Appendix Figure <ref> shows the residuals (the absolute value of data/model) for the different calibrators, which are generally of order unity, i.e., the SVD-reconstructed beam model of CygA is in agreement with the holography data from the other two calibrators at different declinations. Hence, our S/N calibration is independent of the choice of calibrator.
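For reference, the rank-2 reconstruction described here is a one-line operation on the holography data; the sketch below (Python/numpy, with the array layout B[frequency, hour angle] taken as an assumption of this sketch rather than a description of the actual pipeline) illustrates the step and the residual check.

import numpy as np

def rank2_beam_model(B):
    """Keep only the first two SVD modes of the holography data B[f, rho]
    to obtain a smooth beam model, then form the residual |data/model|."""
    U, s, Vh = np.linalg.svd(B, full_matrices=False)
    model = U[:, :2] @ np.diag(s[:2]) @ Vh[:2, :]
    residual = np.abs(B / model)   # expected to be of order unity
    return model, residual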
|
http://arxiv.org/abs/2307.07200v1 | 20230714073059 | Reproducing the Velocity Vectors in the Listening Region | [
"Jiarui Wang",
"Thushara Abhayapala",
"Jihui Aimee Zhang",
"Prasanga Samarasinghe"
] | eess.AS | [
"eess.AS"
] |
This paper proposes a sound field reproduction algorithm based on matching the velocity vectors in a spherical listening region. Using the concept of sound field translation, the spherical harmonic coefficients of the velocity vectors in a spherical region are derived from the desired pressure distribution. The desired pressure distribution can either correspond to sources such as plane waves and point sources, or be obtained from measurements using a spherical microphone array. Unlike previous work in which the velocity vectors are only controlled on the boundary of the listening region or at discrete sweet spots, this work directly manipulates the velocity vectors in the whole listening region, which is expected to improve the perception of the desired sound field at low frequencies.
§ INTRODUCTION
Spatial sound field reproduction aims to synthesize the desired sound field in the listening region. In most cases, the sound field is characterized by the pressure distribution. Pressure based methods include matching the pressure at a number of sweet spots, wave field synthesis <cit.> and higher order Ambisonics <cit.>. However, a higher accuracy in reproduced pressure does not guarantee satisfactory perception.
Velocity vectors are believed to be related to human perception of sound at low frequencies and have been applied to reproduction at sweet spots. In <cit.>, velocity vectors were used in the basic Ambisonic decoding to reproduce the impression of the original sound at frequencies below 500 Hz. Gerzon also claimed that velocity vectors are essential to localization at frequencies below 700 Hz <cit.> and proposed the r_V vector used in Ambisonics <cit.>. A time-domain method that jointly controls the velocity and the pressure at multiple sweet spots was derived in <cit.>. To ensure that listeners can move beyond the sweet spots, the velocity vectors in the whole listening region need to be characterized.
Building on the concept of boundary control, a reproduction method that matches the velocity vectors at discrete control points on the boundary of the listening region was proposed in <cit.>. Similar ideas were proposed in <cit.> and <cit.>, where the sound pressure and the velocity vectors were controlled on the boundaries of multiple listening zones. The methods in <cit.> and <cit.> require a large number of loudspeakers, which is not suitable for home theatres and small exhibition spaces. Moreover, measuring the velocity vectors at multiple control points requires a complicated setup.
In <cit.>, the spherical harmonic (SH) coefficients of the velocity vectors were derived using the SH coefficients of the desired pressure. The SH coefficients of the desired pressure can be obtained by using a spherical microphone array <cit.>, which eliminates the need to measure the velocity vectors at multiple control points. The SH coefficients of the velocity vectors in <cit.> have an inseparable radial component. Therefore, for a spherical region, multiple sets of SH coefficients corresponding to different radii must be calculated. While SH coefficients of the velocity vectors that do not have a radial component were proposed in <cit.>, the derivations were based on the eigenbeam-ESPRIT, which is mostly used in source localization. In this paper, the derivations are based on the definitions of the velocity vectors. Like <cit.>, the resulting SH coefficients of the velocity vectors in the listening region do not have a radial component. These SH coefficients of the velocity vectors can be used in sound field reproduction with a limited number of loudspeakers.
Sound field reproduction methods based on controlling the intensity vectors also exist <cit.>. Intensity vectors are related to human perception of sound at mid to high frequencies. The velocity reproduction method in the paper can be used in tandem with intensity reproduction algorithms to achieve reproduction for the whole audible range.
§ PROBLEM FORMULATION
§.§ Setup of the geometric model
Figure 1 shows the setup of the geometric model. This paper aims to find the representation of the velocity vectors at 𝐫_b, which can be any point within the listening region. Later, this representation of the velocity vectors will be used in the sound field reproduction algorithm. The listening region bounded by the dotted line is assumed to be free from sources and scatterers. To facilitate the derivation, a local x^(b)y^(b)z^(b) coordinate system is centered at 𝐫_b≡ O^(b). The x^(b)y^(b)z^(b) coordinate system is the translation of the xyz coordinate system with 𝐫_b as the translation vector. Note that 𝐫 = 𝐫_b + 𝐫^(b). In this paper, the superscript indicates the coordinate system used to express the location. If there are no superscripts, then the location is expressed with respect to the xyz coordinate system.
§.§ Mathematical Background
This subsection formulates the velocity vectors at a single point. Using the concept of pseudo-intensity vectors <cit.>, the velocity vectors along the three Cartesian unit vector directions are the linear combinations of the first order SH coefficients of the pressure distribution. Consider the local region in yellow in Figure <ref>, the pressure at 𝐫^(b)≡ (r^(b), θ^(b), ϕ^(b)) is
p(k, 𝐫^(b)) = ∑_n=0^N∑_m=-n^nβ_n^m(k, 𝐫_b) j_n(kr^(b)) Y_n^m (θ^(b), ϕ^(b)).
in which k is the wavenumber, j_n(·) denotes the spherical Bessel function of the first kind, and Y_n^m(θ^(b), ϕ^(b)) is the SH function of degree n and order m. The SH coefficients β_n^m(k, 𝐫_b) depend on the location of 𝐫_b. From <cit.>, the radial derivative
∂ j_n(kr^(b))/∂ r^(b)|_r^(b)=0 = 1/3 k δ_n, 1
in which δ_n, 1 is the Kronecker delta function.
Let ρ_0 denote the density of the medium and c the speed of sound; the velocity components at 𝐫_b≡ O^(b) along the x̂, ŷ and ẑ directions are
V_x̂(𝐫_b, k)
= i/(kρ_0c) ∂ p(k, 𝐫^(b))/∂ x|_r^(b)=0
= i/(kρ_0c) ∑_n=0^N∑_m=-n^nβ_n^m(k, 𝐫_b) ∂ j_n(kr^(b))/∂ r^(b)|_r^(b)=0 Y_n^m(π/2, 0)
= 1/3i/ρ_0c[√(3/8π)β_1^-1(k, 𝐫_b) - √(3/8π)β_1^1(k, 𝐫_b)],
V_ŷ(𝐫_b, k)
= i/(kρ_0c) ∂ p(k, 𝐫^(b))/∂ y|_r^(b)=0
= i/(kρ_0c) ∑_n=0^N∑_m=-n^nβ_n^m(k, 𝐫_b) ∂ j_n(kr^(b))/∂ r^(b)|_r^(b)=0 Y_n^m(π/2, π/2)
= 1/3i/ρ_0c[-√(3/8π)i β_1^-1(k, 𝐫_b) - √(3/8π)i β_1^1(k, 𝐫_b)],
V_ẑ(𝐫_b, k)
= i/(kρ_0c) ∂ p(k, 𝐫^(b))/∂ z|_r^(b)=0
= i/(kρ_0c) ∑_n=0^N∑_m=-n^nβ_n^m(k, 𝐫_b) ∂ j_n(kr^(b))/∂ r^(b)|_r^(b)=0 Y_n^m (0, 0)
= 1/3i/ρ_0c√(3/4π)β_1^0(k, 𝐫_b).
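A literal transcription of these three closed-form expressions is given below as a Python/numpy sketch; the default values for the medium density and the speed of sound are assumptions of the sketch, not values used in the paper.

import numpy as np

def velocity_from_first_order(beta_m1, beta_0, beta_p1, rho0=1.225, c=343.0):
    """Velocity components at r_b from the first-order coefficients
    beta_1^{-1}, beta_1^{0}, beta_1^{1} of the local pressure expansion."""
    pref = (1.0 / 3.0) * 1j / (rho0 * c)
    c1 = np.sqrt(3.0 / (8.0 * np.pi))
    c0 = np.sqrt(3.0 / (4.0 * np.pi))
    Vx = pref * (c1 * beta_m1 - c1 * beta_p1)
    Vy = pref * (-1j * c1 * beta_m1 - 1j * c1 * beta_p1)
    Vz = pref * c0 * beta_0
    return Vx, Vy, Vz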
This paper aims to find the SH decomposition of the velocity vectors such that
V_ê (𝐫_b, k) =
∑_a = 0^A∑_d = -a^a (ζ_ê)_a^d(k) R_a(kr_b) Y_a^d (θ_b, ϕ_b)
in which ê∈{x̂, ŷ, ẑ} and R_a(kr_b) denotes the radial function. In previous work <cit.>, the SH coefficients were of the form X_a^d(k, r_b), which has an inseparable radial component. To characterize the velocity vectors in the whole listening region, SH coefficients corresponding to different radii r_b must be calculated. In this paper, the aim is to find the SH coefficients (ζ_ê)_a^d(k), which can be used to calculate the velocity vectors at all points within the listening region.
From (<ref>), (<ref>) and (<ref>), suppose
β_1^m(k, 𝐫_b) = ∑_a = 0^A∑_d = -a^a (γ_1^m)_a^d(k) R_a(kr_b) Y_a^d (θ_b, ϕ_b),
with m ∈{-1, 0, 1}, then the SH coefficients of the velocity vectors
(ζ_x̂)_a^d(k) = 1/3i/ρ_0c[√(3/8π)(γ_1^-1)_a^d(k) - √(3/8π) (γ_1^1)_a^d(k)],
(ζ_ŷ)_a^d(k) = 1/3i/ρ_0c[-√(3/8π)i (γ_1^-1)_a^d(k) - √(3/8π)i (γ_1^1)_a^d(k)],
(ζ_ẑ)_a^d(k) = 1/3i/ρ_0c√(3/4π) (γ_1^0)_a^d(k).
§ VELOCITY VECTORS IN THE WHOLE LISTENING REGION
This section derives the SH coefficients of the velocity vectors in the whole listening region. The derivation relies on the sound field translation. In Figure <ref>, the pressure at 𝐫≡ (r, θ, ϕ) is
p(k, 𝐫) = ∑_ℓ=0^L∑_q = -ℓ^ℓα_ℓ^q(k) j_ℓ(kr) Y_ℓ^q (θ, ϕ).
in which α_ℓ^q(k) are the SH coefficients of the global pressure distribution. Using the sound field translation formula <cit.>,
p(k, 𝐫) = ∑_ℓ=0^L∑_q = -ℓ^ℓα_ℓ^q(k) ∑_n=0^N∑_m=-n^n j_n(kr^(b)) Y_n^m (θ^(b), ϕ^(b))
∑_a = 0^A j_a(kr_b) Y_a^q-m (θ_b, ϕ_b) G_nm^ℓ q a .
The term
G_nm^ℓ q a
= 4π i^n+a-ℓ (-1)^q√((2ℓ+1)(2n+1)(2a+1)/4π) W_1 W_2
in which W_1 and W_2 are the Wigner-3j symbols
W_1=[ ℓ n a; 0 0 0 ]
W_2= [ ℓ n a; -q m q-m ] .
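As an illustration, a single weight G_nm^ℓ q a can be evaluated exactly with sympy's Wigner-3j routine; the sketch below is a direct transcription of the expression above (the use of sympy, and the function name, are assumptions of this sketch).

import numpy as np
from sympy.physics.wigner import wigner_3j

def G_weight(ell, q, a, n, m):
    """Translation weight G_{nm}^{ell q a}; it vanishes whenever either Wigner-3j symbol is zero."""
    W1 = float(wigner_3j(ell, n, a, 0, 0, 0))
    W2 = float(wigner_3j(ell, n, a, -q, m, q - m))
    return (4.0 * np.pi * 1j ** (n + a - ell) * (-1) ** q
            * np.sqrt((2 * ell + 1) * (2 * n + 1) * (2 * a + 1) / (4.0 * np.pi))
            * W1 * W2)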
Equation (<ref>) can be rearranged to the form in (<ref>) with
β_n^m(k, 𝐫_b) =
∑_a = 0^A j_a(kr_b)∑_ℓ=0^L∑_q = -ℓ^ℓα_ℓ^q(k) G_nm^ℓ q aY_a^q-m (θ_b, ϕ_b) .
Further, (<ref>) can be rearranged to the form
β_n^m(k, 𝐫_b) = ∑_a = 0^A∑_d = -a^a (γ_n^m)_a^d(k) j_a(kr_b) Y_a^d (θ_b, ϕ_b)
in which (γ_n^m)_a^d(k) are the SH coefficients of β_n^m(k, 𝐫_b). The values of (γ_n^m)_a^d(k) are the linear combinations of α_ℓ^q(k) with G_nm^ℓ q a acting as weights. Note that the order d = q-m. To calculate the velocity vectors at 𝐫_b, (<ref>) should be calculated for β_1^-1(k, 𝐫_b), β_1^0(k, 𝐫_b) and β_1^1(k, 𝐫_b), which is equivalent to (<ref>). Comparing (<ref>) and (<ref>), the radial function in (<ref>) and (<ref>) satisfies R_a(kr_b) = j_a(kr_b).
An operator matrix can be constructed to link the SH coefficients of the global pressure distribution α_ℓ^q(k) to (γ_1^m)_a^d(k) with m ∈{-1, 0, 1} such that
(γ_1^m)(k) = 𝔅_1^mα(k)
in which (γ_1^m)(k) and α(k) are the column vectors formed by concatenating (γ_1^m)_a^d(k) and α_ℓ^q(k), respectively. Note that the operator matrix does not depend on the wavenumber k (also the frequency) because the weights G_nm^ℓ q a are frequency independent.
The calculation of 𝔅_1^m does not require significant resources because only the three operator matrices corresponding to m ∈{-1, 0, 1} are required for calculating the velocity vectors. Moreover, because n = 1, the term G_nm^ℓ q a is non-zero only when |ℓ-1| ≤ a ≤ℓ + 1. Furthermore, since W_1 = 0 when a = ℓ, only the two conditions a = |ℓ-1| and a = ℓ+1 need to be considered. The dimension of 𝔅_1^m is L^2 by (L+1)^2. This is because if α_ℓ^q(k) is measured up to degree L, then (γ_1^m)_a^d(k) can only be calculated up to degree (L-1). For a = L, the coefficients α_ℓ^q(k) with ℓ = L+1 would need to be measured.
Operator matrices that directly link the SH coefficients (ζ_ê)_a^d(k) of the velocity vectors and the SH coefficients α_ℓ^q(k) of the global pressure distribution can be constructed. The operator matrix 𝔅_ê with ê∈{x̂, ŷ, ẑ} should form the link
ζ_ê(k) = 𝔅_êα(k).
in which ζ_ê(k) is the column vector formed by concatenating (ζ_ê)_a^d(k). From (<ref>), (<ref>) and (<ref>),
𝔅_x̂ = 1/3i/ρ_0c[√(3/8π)𝔅_1^-1 - √(3/8π)𝔅_1^1],
𝔅_ŷ = 1/3i/ρ_0c[-√(3/8π)i 𝔅_1^-1(k) - √(3/8π)i 𝔅_1^1(k)],
𝔅_ẑ = 1/3i/ρ_0c√(3/4π)𝔅_1^0(k).
§ ILLUSTRATION OF THE VELOCITY VECTORS IN A REGION
This section illustrates the velocity vectors in a spherical listening region due to a plane wave and a point source. For a plane wave with incident direction (θ_pw, ϕ_pw),
α_ℓ^q(k) = 4π i^ℓ [Y_ℓ^q (θ_pw, ϕ_pw)]^* ∀ k,
in which (·)^* denotes complex conjugation. For a point source located at 𝐫_ps≡ (r_ps, θ_ps, ϕ_ps),
α_ℓ^q(k) = -ik h_ℓ^(2)(kr_ps) Y_ℓ^q (θ_ps, ϕ_ps)
in which h_ℓ^(2)(·) is the spherical Hankel function of the second kind.
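Both sets of coefficients can be evaluated with scipy, as in the sketch below; note that scipy.special.sph_harm takes the order first and uses (azimuth, polar angle) ordering, and that applying the complex conjugate to the spherical harmonic of the point source (mirroring the plane-wave case) is an assumption of this sketch.

import numpy as np
from scipy.special import sph_harm, spherical_jn, spherical_yn

def alpha_plane_wave(L, theta_pw, phi_pw):
    """SH coefficients of a unit plane wave with incident direction (theta_pw, phi_pw)."""
    alpha = {}
    for ell in range(L + 1):
        for q in range(-ell, ell + 1):
            Y = sph_harm(q, ell, phi_pw, theta_pw)   # scipy: (order, degree, azimuth, polar)
            alpha[(ell, q)] = 4.0 * np.pi * 1j ** ell * np.conj(Y)
    return alpha

def alpha_point_source(L, k, r_ps, theta_ps, phi_ps):
    """SH coefficients of a point source at (r_ps, theta_ps, phi_ps);
    h_ell^(2) is assembled from scipy's spherical Bessel functions."""
    alpha = {}
    for ell in range(L + 1):
        h2 = spherical_jn(ell, k * r_ps) - 1j * spherical_yn(ell, k * r_ps)
        for q in range(-ell, ell + 1):
            Y = sph_harm(q, ell, phi_ps, theta_ps)
            alpha[(ell, q)] = -1j * k * h2 * np.conj(Y)
    return alpha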
Figure 2 shows the real part of the velocity vectors on the xy plane due to a plane wave with incident direction (θ_pw, ϕ_pw) =(π/2, 2π/3), whereas Figure 3 shows the real part of the velocity vectors on the xy plane due to a point source located at 𝐫_ps = (0.6, π/2, 2π/3). In both figures, the listening region bounded by the red circle is of radius 0.2 meters and the frequency is at 2 kHz. The global pressure SH coefficients are calculated up to degree 10 and the velocity SH coefficients are calculated up to degree 9.
In Figure 2, the directions of the vast majority of velocity vectors are either parallel or anti-parallel, i.e., the velocity vectors are pointing to either ϕ = 2π/3 or ϕ =-2π/3. This should be the case for a plane wave because the velocity vectors should be perpendicular to the wave front. In Figure 3, half of the velocity vectors seem to converge to a point in the direction ϕ =-2π/3, while the other half of the velocity vectors seem to diverge from a point in the direction ϕ =-2π/3. This is because the wave fronts of a point source are spherical.
§ REPRODUCING THE VELOCITY VECTORS IN A REGION
This section presents the sound field reproduction method based on matching the velocity vectors in the listening region. The steps for matching the velocity vectors in the listening region are first outlined.
Assume there are S loudspeakers. First, the SH coefficients of the desired global pressure (α^des)_ℓ^q(k) are calculated if the desired virtual source is known, or measured using a spherical microphone array. Next, using the operator matrices in (<ref>), the desired SH coefficients of the velocity vectors (ζ_ê^des)_a^d(k) with ê∈{x̂, ŷ, ẑ} are calculated. Then, letting (α^Ls)_ℓ^q(k) with s = 1, 2, ⋯, S denote the SH coefficients of the global pressure due to unit output from the s-th loudspeaker, the SH coefficients of the velocity vectors in the listening region due to unit output from the s-th loudspeaker, (ζ_ê^Ls)_a^d(k) with ê∈{x̂, ŷ, ẑ}, can also be found using the same operator matrices in (<ref>). Finally, a system of equations can be established
ζ^des(k) = 𝐇(k) 𝐰(k)
In (<ref>), ζ^des(k) = [ζ_x̂^des(k)^T, ζ_ŷ^des(k)^T,ζ_ẑ^des(k)^T]^T in which ζ_ê^des(k) is the column vector formed by concatenating (ζ_ê^des)_a^d(k) with ê∈{x̂, ŷ, ẑ} and (·)^T denotes matrix transpose. 𝐇(k) is a matrix of the form 𝐇 = [ζ^L1(k), ζ^L2(k),⋯, ζ^LS(k)]. The s-th column of 𝐇(k) is of the form ζ^Ls(k) = [ζ_x̂^Ls(k)^T, ζ_ŷ^Ls(k)^T,ζ_ẑ^Ls(k)^T]^T in which ζ_ê^Ls(k) is the column vector formed by concatenating (ζ_ê^Ls)_a^d(k) with ê∈{x̂, ŷ, ẑ}. The column vector 𝐰(k) contains the weight (also called driving function) of each loudspeaker. To solve for the weights,
𝐰(k) = 𝐇(k)^†ζ^des(k)
in which (·)^† denotes the Moore-Penrose pseudoinverse. In the pressure based method, the loudspeaker weights are found by matching the SH coefficients of the global pressure. Similar to (24), a system of equations can be constructed for the pressure based method
α^des(k) = 𝐆(k) 𝐰(k)
in which α^des(k) is the column vector formed by concatenating (α^des)_ℓ^q(k). The matrix 𝐆 (k) = [α^L1(k), α^L2(k), ⋯, α^LS(k)], in which the s-th column α^Ls(k) is formed by concatenating (α^Ls)_ℓ^q(k). Like (25), the loudspeaker weights 𝐰(k) are found by the Moore-Penrose pseudoinverse.
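In code, both variants reduce to one pseudoinverse per wavenumber; a minimal numpy sketch is given below (the explicit tolerance argument is an assumption of the sketch, mirroring the default-tolerance behaviour mentioned in the example that follows).

import numpy as np

def driving_functions(H, zeta_des, rcond=1e-15):
    """Loudspeaker weights w(k) = pinv(H(k)) @ zeta_des(k) for one wavenumber;
    calling this with (G, alpha_des) instead gives the pressure-matching weights."""
    return np.linalg.pinv(H, rcond=rcond) @ zeta_des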
Here an example is provided where five loudspeakers are used to reproduce the desired sound field in the listening region. The loudspeakers are located on a circle of radius 1.21 meters. The azimuth angles of the loudspeakers are [0, π/4, 3π/4, 5π/4, 7π/4]. Figure <ref> shows the setup. The loudspeakers are assumed to be point sources. The reproduction region is bounded by the red circle with radius 0.5 meters. The desired sound field is a plane wave with incident direction (θ_pw, ϕ_pw) = (π/2, π/3) rad. The SH coefficients of the pressure α_ℓ^q(k) are truncated to ℓ = 4 and, as a consequence, the SH coefficients of the velocity vectors (ζ_ê)_a^d(k) are truncated to a = 3. At each wavenumber k, the dimension of 𝐇(k) is 48-by-5 and the dimension of 𝐆(k) is 25-by-5. The Moore-Penrose pseudoinverse is calculated with the built-in MATLAB function, using the default tolerance. Future work should consider finding a more appropriate tolerance value. Figure <ref> shows the reproduced pressure and the velocity on the xy plane at 1 kHz. Figures <ref> (a) and (c) are from the velocity based method proposed in this paper, whereas Figures <ref> (b) and (d) are from the pressure based method. Both methods give similar results.
Figure <ref> analyses the condition numbers of 𝐇(k) in (24) and 𝐆(k) in (26). The condition numbers remain stable, though those of 𝐇(k) are slightly greater than those of 𝐆(k). Figure <ref> shows the velocity reproduction errors. Like <cit.> and <cit.>, the velocity reproduction error is defined as
η(k) = cos^-1 (DOT(k))/π
with
DOT(k) = 𝐕^des (𝐫_b, k)/||𝐕^des (𝐫_b, k)||_2·𝐕^re (𝐫_b, k)/||𝐕^re (𝐫_b, k)||_2
in which 𝐕^des(𝐫_b, k) ≡ [V_x̂^des(𝐫_b, k), V_ŷ^des(𝐫_b, k), V_ẑ^des(𝐫_b, k)] is the desired velocity and 𝐕^re (𝐫_b, k) ≡ [V_x̂^re(𝐫_b, k), V_ŷ^re(𝐫_b, k), V_ẑ^re(𝐫_b, k)] is the reproduced velocity. Here, only the real parts of the velocity vectors are considered. The red line and the blue line are the errors averaged across 2821 evaluation points on the 2D plane within the circle of radius 0.5 meters. The errors are similar. The yellow line and the purple line are the errors averaged across 249 evaluation points on the 2D plane within the circle of radius 0.15 meters, which is at the center of the listening region. At frequencies up to 1 kHz, the velocity based method performs significantly better than the pressure based method. It has been suggested that human perception of sound is often related to the velocity vector at low frequencies <cit.>. Therefore, the velocity based method could result in improved perception when the listener is in the vicinity of the center of the listening region. Note that the reproduction error will be larger when the desired plane wave comes from a direction where the loudspeaker density is low.
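The error metric itself is straightforward to evaluate at each point; a short sketch follows (clipping the dot product before arccos is a numerical-safety assumption of the sketch).

import numpy as np

def velocity_error(V_des, V_re):
    """Angular error eta = arccos(DOT)/pi between the real parts of the desired
    and reproduced velocity vectors at one evaluation point."""
    a = np.real(np.asarray(V_des, dtype=complex))
    b = np.real(np.asarray(V_re, dtype=complex))
    dot = np.dot(a / np.linalg.norm(a), b / np.linalg.norm(b))
    return np.arccos(np.clip(dot, -1.0, 1.0)) / np.pi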
§ CONCLUSION
This paper derived the spherical harmonic coefficients of the velocity vectors in a spherical listening region. The derivation was based on the sound field translation formula. The resulting spherical harmonic coefficients of the velocity vectors do not have a radial component, which means they can be applied to all points within the listening region. The spherical harmonic coefficients of the velocity vectors were successfully used in sound field reproduction. Further work will focus on conducting perceptual tests and comparison with other sound field reproduction methods.
|
http://arxiv.org/abs/2307.04991v1 | 20230711030458 | Periodic Trajectories and Topology of the Integrable Boltzmann System | [
"Sean Gasiorek",
"Milena Radnović"
] | math.DS | [
"math.DS",
"37J35, 37J39, 37J46, 37C79, 37C83, 70G40"
] |
We consider the Boltzmann system corresponding to the motion of a billiard with a linear boundary under the influence of a gravitational field. We derive analytic conditions of Cayley's type for periodicity of its trajectories and provide geometric descriptions of caustics.
The topology of the phase space is discussed using Fomenko graphs.
Keywords: Integrable Boltzmann system, Kepler problem, billiards, periodic trajectories, Poncelet theorem, Fomenko graphs
MSC2020: 37J35, 37J39, 37J46, 37C79, 37C83, 70G40
§ INTRODUCTION
The Boltzmann system <cit.> consists of a massive particle moving in a gravitational field with a linear boundary (i.e. the wall) between the particle and the centre of gravity.
The reflections off the boundary are absolutely elastic, meaning that the kinetic energy remains unchanged by them, and they obey the billiard reflection law, i.e. the angles of incidence and reflection are congruent to each other.
It was recently shown in <cit.> that this system is integrable, since it has, in addition to the energy, a second integral of motion. That additional integral can be geometrically seen in the fact that, for each trajectory, there is a fixed circle such that all arcs of the trajectory are Kepler conics with one focus at the centre of gravity and the second focus on that fixed circle.
In <cit.>, this system is analysed further, and it is proved that the Boltzmann map is equivalent to a shift on a given elliptic curve, which then implies a Poncelet-style closure theorem in this setting:
if a given trajectory of the Boltzmann system is n-periodic, then each trajectory consisting of arcs with foci on the same circle as the initial n-periodic trajectory is also n-periodic.
This Poncelet-style closure result is the initial point for this paper, from which we find the analytic conditions for periodicity of the trajectories of the Boltzmann system. Those conditions and examples are presented in Section <ref>.
In Section <ref>, we discuss the geometry of Boltzmann trajectories.
We show the existence of caustics and the focal reflection property in Theorems <ref> and <ref>.
In Section <ref>, the phase space of the Boltzmann system is analysed.
We describe in detail the dynamics on the singular level sets in Theorem <ref> and, using the Fomenko graphs and invariants, we provide topological description of the isoenergy manifolds in Theorem <ref>.
Before we proceed to those discussions, we will briefly recall, following <cit.>, the construction and recent results of the Boltzmann system.
§.§ Integrability of the Boltzmann billiard
Here we review notions and results from <cit.>, which we will immediately use in Section <ref> to derive analytical conditions for periodicity of the Boltzmann system.
In the classical 2-body Kepler problem with an inverse square central force law, solutions to the reduced problem in some inertial reference frame are conics with one focus at the origin. The position r and linear momentum p of the reduced body relate to the angular momentum L by
L = r×p,
which is an integral of motion and defines the plane of motion.
The total energy E and the Laplace–Runge–Lenz vector 𝐀 = 𝐩×𝐋 - 𝐫/|𝐫| are also integrals of motion. The conic is an ellipse if E <0, a parabola if E=0, and one branch of a hyperbola if E>0.
Following the notation from <cit.>, we consider the Boltzmann system in the plane with coordinates x_1, x_2 with the centre of gravity at the origin and wall at x_2 =1.
Vector 𝐀 will then also be in that plane and we denote its components by A_1, A_2, while 𝐋 is orthogonal to the plane and we denote its third component by L.
It was shown in <cit.> that
D := L^2 - 2A_2
is an integral of motion for the Boltzmann system.
In the coordinates (x_1,x_2), the Kepler conic is given by
x_1^2 + x_2^2 = (D+2A_2 - A_1 x_1 - A_2 x_2)^2.
One focus of this conic is at the origin and the other one, F_2:=(A_1/E, A_2/E), lies on a circle of radius R/|E| with
R^2 := 1+2DE + 4E^2, centred at the point (0,2).
The point (0,2), the centre of that circle, is symmetric to the origin Ø(0,0) with respect to the wall.
We will show in Section <ref> that this point plays other significant roles in the dynamics of the Boltzmann system, see Theorems <ref> and <ref>.
The pair (E,D) determines the level set X(D,E) of the system.
According to <cit.>, the level set X(D,E) has a compactification that is a smooth projective curve of genus 1 whenever
the following conditions hold:
D^2≠4, 1+2ED+4E^2≠0, D+2E≠0.
Such level sets correspond to the non-degenerate Liouville tori.
If some of the inequalities eq:singular-conditions are not satisfied, the level set X(D,E) will be singular.
We note that in Section <ref>, where our goal is to find analytic conditions for periodicity, only non-singular level sets are of interest, so we will assume there that the conditions eq:singular-conditions hold.
On the other hand, the singular level sets will be analysed in more detail in Section <ref>.
In the non-degenerate case, the elliptic curve corresponding to X(D,E) is:
y^2=(1-s^2)(1-k^2s^2),
with
k^2=(D+4E-2R)/(D+4E+2R).
That elliptic curve can also be seen in the affine space with coordinates (x,A_1,A_2) as:
A_1^2+A_2^2-4EA_2=1+2DE,
x^2+1=(A_2+D-A_1x)^2,
where (x,1) is a reflection point on the wall and 𝐀=(A_1,A_2) is the corresponding Laplace-Runge-Lenz vector.
The Boltzmann map is given in those coordinates as a composition of two involutions i and j:
i(x,A_1,A_2)=(x',A_1,A_2),
j(x,A_1,A_2)=(x,A_1',A_2'),
with
x'=-2(A_2+D)A_1/(1-A_1^2)-x,
A_1'=((x^2-1)A_1-2xA_2+4xE)/(x^2+1),
A_2'=(-2xA_1-(x^2-1)A_2+4x^2E)/(x^2+1).
The involution i maps one intersection point of the Kepler ellipse to the other one.
On the other hand, the involution j preserves the intersection point, but maps the vector 𝐀 before the reflection to the vector 𝐀' after the reflection, i.e. switches to the next Kepler ellipse arc of the Boltzmann trajectory.
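The map is easy to iterate numerically. The following Python sketch transcribes the two involutions above (with the reflection involution i written with the explicit -x term, so that i exchanges the two roots of the wall equation x^2+1=(A_2+D-A_1x)^2) and composes them; function names are assumptions of the sketch, and the generic case A_1^2 ≠ 1 is assumed.

def involution_i(x, A1, A2, D):
    """Exchange the two intersection points of the Kepler ellipse with the wall;
    x + x' equals the sum of the roots of x^2 + 1 = (A2 + D - A1*x)^2."""
    return -2.0 * (A2 + D) * A1 / (1.0 - A1 ** 2) - x, A1, A2

def involution_j(x, A1, A2, E):
    """Reflection off the wall: keep the point, update the Laplace-Runge-Lenz vector."""
    A1p = ((x ** 2 - 1) * A1 - 2 * x * A2 + 4 * x * E) / (x ** 2 + 1)
    A2p = (-2 * x * A1 - (x ** 2 - 1) * A2 + 4 * x ** 2 * E) / (x ** 2 + 1)
    return x, A1p, A2p

def boltzmann_map(state, D, E):
    """One step j o i of the Boltzmann map on the level set X(D, E)."""
    x, A1, A2 = involution_i(*state, D)
    return involution_j(x, A1, A2, E)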
Both involutions i, j have fixed points <cit.>, thus they represent point reflections on the Jacobian of the elliptic curve, and
the Boltzmann map, as their composition j∘ i, is equivalent to the following shift on the Jacobian of that curve:
u↦ u+∫_-1^s_0ds/y,
where
s_0=(D+2E+R)/(D+2E-R),
which is the shift by the divisor Q_+-P_+, where points P_+, Q_+ are given by:
P_+(-1,0),
Q_+((D+2E+R)/(D+2E-R),
-4i(D+2E)R/((D+2E-R)^2√((D + 2E)(D + 4E + 2R)))).
Thus the n-periodicity of any trajectory on X(D,E) is equivalent to:
n(Q_+-P_+)∼0.
Since that condition does not depend on the initial point of motion, but only on the constants D, E, we will have the Poncelet property: n-periodicity of one trajectory of the Boltzmann system implies that all trajectories with the same constants D, E of motion will also be n-periodic <cit.>.
§ PERIODIC TRAJECTORIES OF THE BOLTZMANN SYSTEM
In this section, we will first make a very brief review of results connected with closure theorems of Poncelet type and the corresponding Cayley-type conditions, in particular in the context of billiards.
After that, we provide in Theorem <ref> the analytic conditions for periodicity of the Boltzmann billiard, and then illustrate it by several examples.
We note that closure theorems of Poncelet type and the corresponding analytic conditions originate in classical works of XIXth century mathematicians.
Namely, the classical Poncelet porism states that the existence of a closed polygon inscribed in one conic and circumscribed about another one implies the existence of infinitely many such polygons;
moreover, each point of the circumscribed conic is a vertex of one of them and all of those polygons have the same number of sides <cit.>.
While Poncelet's approach was synthetic geometric, Jacobi gave alternative proof using addition formulas for elliptic functions, see Jacobi, Schoenberg.
Explicit analytic conditions for closure were derived by Cayley <cit.>.
Modern algebro-geometric approach to Poncelet theorem and those analytic conditions can be found in GH1977,GH1978.
The interest in the Poncelet theorem, its generalizations and its applications has been renewed in the last few decades, and there is a large body of works regarding that.
A recent detailed account on the history of Poncelet theorem can be found in DC2016a,DC2016b.
More about analytic conditions for the Poncelet theorem and the generalizations, with a review of modern development in the theory of billiards can be found in DR2011,DR2014.
Among various generalisations of the Poncelet theorem, those in distinct geometries or those where a potential field is present were developed, see for example Veselov1990,DJR2003,GKT2007,KT2009,GR2021 and AF2006,Fed2001.
For even more results, see references therein.
The conditions for n-periodicity of the Boltzmann billiard are given in the following theorem.
The trajectories of the Boltzmann system with integrals D and E satisfying eq:singular-conditions are n-periodic if and only if
|[ B_3 B_4 ⋯ B_m+1; B_4 B_5 ⋯ B_m+2; ⋮ ⋮ ⋱ ⋮; B_m+1 B_m+2 ⋯ B_2m-1 ]|=0
with
n=2m ≥ 4,
or
|[ B_2 B_3 ⋯ B_m+1; B_3 B_4 ⋯ B_m+2; ⋮ ⋮ ⋱ ⋮; B_m+1 B_m+2 ⋯ B_2m ]|=0
with
n=2m+1≥3,
where B_0, B_1, B_2, B_3, …are the coefficients in the Taylor expansion of:
√((2(D+2E)ξ-R)(4R(D+2E)^2ξ^2 + 2(D+2E)(D^2+2DE-2)ξ +R))
around ξ=0.
The algebro-geometric condition for n-periodicity is n(P_+-Q_+)∼0, with points P_+, Q_+ given by (<ref>) on the curve (<ref>) <cit.>.
We note that P_+ is a branching point of that curve.
To simplify the calculations, we make the coordinate transformation (s,y)→(ξ,η) such that ξ(P_+)=∞, ξ(Q_+)=0:
ξ=1/(s+1)-(D+2E-R)/(2D+4E),
η=(y/(s+1)^2)·(D+2E)√((D + 2E)(D + 4E + 2R)).
In these new coordinates, the elliptic curve (<ref>) becomes
η^2 =(2(D+2E)ξ-R)(4R(D+2E)^2ξ^2 + 2(D+2E)(D^2+2DE-2)ξ +R).
We now derive the Cayley-type conditions similarly as in <cit.>.
The divisor condition n(Q_+ - P_+)∼ 0 is equivalent to the existence of a meromorphic function on the curve with a unique pole at P_+ and a unique zero at Q_+, such that the pole and zero are both of multiplicity n.
We denote by Ł(nP_+) the linear space of all meromorphic functions on the curve which have a pole of order at most n at P_+ and are holomorphic elsewhere.
A basis of Ł(nP_+) is:
{1, ξ, ξ^2, …, ξ^m, η, ηξ, …, ηξ^m-2},
for n=2m;
{1, ξ, ξ^2, …, ξ^m, η, ηξ, …, ηξ^m-1},
for n=2m+1.
It can be derived, in the same way as it is done in <cit.>, that there is a nontrivial linear combination of these functions with a zero of order n at ξ=0 if and only if the stated determinant conditions hold.
Now, we use the analytic conditions from Theorem <ref> to construct examples of periodic trajectories.
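The coefficients B_k and the determinants of Theorem <ref> are easy to generate with a computer algebra system. The sympy sketch below is an illustration only: it keeps R as a symbol with R^2 = 1+2DE+4E^2 substituted at the end; symbolic simplification can be slow, so substituting a concrete pair (E,D) before the series is a practical alternative, and the overall branch of the square root does not affect the vanishing of the determinants.

import sympy as sp

D, E, xi = sp.symbols('D E xi')
R = sp.symbols('R', positive=True)          # R**2 = 1 + 2*D*E + 4*E**2
a = D + 2 * E

f = sp.sqrt((2 * a * xi - R) * (4 * R * a ** 2 * xi ** 2
                                + 2 * a * (D ** 2 + 2 * D * E - 2) * xi + R))
ser = sp.expand(sp.series(f, xi, 0, 6).removeO())
B = [ser.coeff(xi, k) for k in range(6)]    # Taylor coefficients B_0, ..., B_5

Rsub = sp.sqrt(1 + 2 * D * E + 4 * E ** 2)
cond3 = sp.simplify(B[2].subs(R, Rsub))                       # n = 3: B_2 = 0
cond5 = sp.simplify((B[2] * B[4] - B[3] ** 2).subs(R, Rsub))  # n = 5: |[ B_2 B_3; B_3 B_4 ]| = 0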
[Period 3]
The Cayley-type condition for a 3-periodic trajectory is B_2=0.
The coefficient B_2 is calculated from the Taylor expansion as stated in Theorem <ref>:
B_2 = -(D+2E)^2(4(D^2-4)E^2 + 4D(D^2-3)E+D^4-2D^2-3)/2|1+2DE + 4E^2|.
By assumption eq:singular-conditions, D+2E ≠ 0 and 1+2DE + 4E^2 ≠ 0, so B_2 =0 is equivalent to
4(D^2-4)E^2 + 4D(D^2-3)E+D^4-2D^2-3=0,
which is precisely the condition stated in <cit.>.
In Figure <ref>, two 3-periodic trajectories are shown.
[Period 4]
Theorem <ref> states that the analytic Cayley-type condition for 4-periodic trajectories is B_3=0, which is equivalent to:
(D^2+2DE-1)((D+2E)^2(D^2-4)-1)=0.
Examples are shown in Figure <ref>.
[Period 5]
The condition for a 5-periodic trajectory is B_2 B_4 - B_3^2=0, which is equivalent to:
0 = D^12-6 D^10+3 D^8+60 D^6-169 D^4+42 D^2+5
+64 (D^2-4)^3 E^6
+64 (D^2-4) (3 (D^2-7) D^2+52) D E^5
+16 (D^2-4)(15 D^6-90 D^4+251 D^2+4) E^4
+32 (D^2-4) (5 D^6-25 D^4+71 D^2+13) D E^3
+4 (386 D^6-452 D^4-537 D^2+15 (D^2-8) D^8+52) E^2
+4 (3 D^10-21 D^8+46 D^6+22 D^4-257 D^2+47) D E.
In Figure <ref>, two such trajectories are shown.
[Period 6]
The analytic condition B_3 B_5 - B_4^2=0 for 6-periodic trajectories is equivalent to:
0 = [ 4(D^2-4)E^2 + 4D(D^2-3)E+D^4-2D^2-3 ] [(D^2+2DE-1)^2-4(D+2E)^2 ]
×[-1+(D^2-4)(D+2E)^2 ((3D^2-4)(D+2E)^2+16E(D+2E)+6) ].
The first factor in the above expression is the condition for 3-periodic trajectories, so we find the solutions from the other two factors. Two such examples are shown in Figure <ref>.
The trajectories in the right-hand sides of Figures <ref> and <ref> meet the wall at a right angle at its leftmost and rightmost points. These perpendicular trajectories are fixed points of the involution j of <cit.> and occur when A_2 = 2E/(1± R).
§ GEOMETRIC PROPERTIES OF THE BOLTZMANN SYSTEM
In planar elliptical billiards, each trajectory has a caustic, that is a curve which is touching all segments of that trajectory.
Moreover, the focal property also holds: if a segment of a given trajectory contains a focus of the boundary, then the next segment will contain the other focus.
In this section, we will prove that the trajectories of the Boltzmann system have caustics and that the focal property can also be formulated and proved in this case.
The first step is to establish that, for any Kepler conic, there are particular confocal conics that are tangent to it. See Figure <ref>, left.
The Kepler conic given by (<ref>) is touching two unique conics with foci Ø(0,0) and (0,2), which are given by:
_±: x_1^2/(((R±1)/(2E))^2 -1) + (x_2-1)^2/((R±1)/(2E))^2 =1.
Moreover, we have:
* the points of tangency of the Kepler conic and _± are
B_± = (A_1α_±, 2+(A_2-2E)α_±), with
α_±=
((4E^2-(R±1)^2) (R(R±1)-2E(A_2-2E)))/(2ER^2(4E^2-(R±1)^2)-8A_1^2E^3);
* the slope of the joint tangent line to the Kepler conic and _± at B_± is:
m_± = (A_1(R±1)(2E(A_2 - 2E)-(R±1)R))/(2A_1^2 E (R±1) - R(A_2 - 2E)(4E^2- (R±1)^2));
* the points B_±, the point (0,2), and the non-origin focus F_2 of the Kepler conic are collinear, and they lie on the following line:
Ł: (2E-A_2)x_1 + A_1 x_2 = 2 A_1.
Follows by a straightforward calculation.
Now, Proposition <ref> directly implies the existence of caustics for the Boltzmann system.
For each fixed pair (E,D) satisfying the conditions eq:singular-conditions there are two unique conics _+ and _- with foci Ø(0,0) and (0,2) such that all arcs of each trajectory on the level set X(E,D) are touching those two conics.
The existence of caustics is illustrated in Figure <ref>, where 100 iterations of the Boltzmann map for (E,D) = (-7/24,7/4) are shown.
The arcs of the trajectory have a hyperbolic caustic above the wall and an elliptical caustic with tangencies both above and below the wall.
Next, we will show that the point which is symmetric to the centre with respect to the wall has the focusing property for the trajectories of the Boltzmann system, see Figure <ref>.
Suppose that an arc of a given trajectory of the Boltzmann system contains the point (0,2).
Then all arcs of that trajectory will also contain the point (0,2). Furthermore, the trajectory asymptotically converges to the vertical axis x_1=0.
The first part of this theorem follows from Theorem <ref>, when the caustic _- given by eq:GeneralCaustic is degenerate, i.e. R-1=2E, which implies that D=2.
The corresponding Kepler ellipses have major axis 2a=-1/E.
Denote by x_0 a point on the wall which belongs to one such ellipse _0,
and let x_1, x_2, … and _1, _2, …be the subsequent reflection points and arcs of the Boltzmann trajectory.
The origin Ø is the focus of each elliptic arc _i and we denote by F_i the other focus.
By the caustic property, the trajectory starts at x_0, travels along the first Kepler ellipse _0, passes through the point (0,2) of the degenerate caustic, and intersects the wall at x_1. The “string construction” applied to the ellipses _0 and _1 implies -1/E = |Ox_1| + |x_1 F_0| and -1/E = |O x_1| + |x_1 F_1| respectively.
Those equalities give |x_1 F_0| = |x_1 F_1|, so we may think of F_0 and F_1 as intersection points of the circle of second foci and a circle centred at x_1 with radius R_01 := |x_1 F_0| = |x_1 F_1|. By the position of x_1, the x_2-coordinate of F_1 is necessarily larger than the x_2-coordinate of F_0.
Repeating this process with _1 and _2 produces the next focus F_2 with a larger x_2-coordinate than that of F_1. We find that the x_2-coordinates of the F_i are monotonically increasing and are bounded above by the top of the circle of second foci, at coordinates (0,-1/E). As the F_i approach this point, the defining components of each Kepler ellipse (x,A_1,A_2) approach (0,0,-1), respectively, which corresponds to the vertical axis x_1=0.
§ THE PHASE SPACE
In this section, we will analyse the phase space of the Boltzmann system.
While the discussion in <cit.> assumes complex values of D, E, we will consider only real values to correspond to the real motion in the Kepler problem.
Moreover, we will consider only the part of the phase space containing bounded trajectories, i.e. when the arcs of the trajectories are ellipses.
We note that the last assumption implies that each trajectory is bounded and has infinitely many reflections.
Boundedness is important for us, since then the level sets and the isoenergy manifolds in the phase space will be compact, which allows us to use the Fomenko invariants for the topological characterization.
Subsection <ref> contains a brief review of the bifurcation set for the Boltzmann system, based on <cit.>.
Then, in Subsection <ref>, we provide the analysis of the motion on the singular level sets.
In Subsection <ref>, we present the topological description of the compact isoenergy manifolds for the Boltzmann system, using Fomenko graphs.
§.§ The bifurcation set
The bifurcation set for the Boltzmann system can be represented in (E,D)-plane, with restrictions on D and E to produce real motion when the arcs of trajectories are ellipses or degenerate to straight segments.
In particular, these restrictions determine an infinite region in the plane bounded by the curves 1+2DE + 4E^2=0, E=0, D+2E=0, D=2, as shown in the right-hand side of Figure <ref> <cit.>. We call this region R.
As explained in Theorem <ref>, the caustics _± are only dependent upon the values of (E,D).
The curve _- is an ellipse for all (E,D) ∈R. However, the curve _+ is a hyperbola for (E,D) ∈R with D<2, an ellipse for (E,D) ∈R with D>2, and degenerate consisting of the two points Ø (0,0) and (0,2) when (E,D) ∈R with D=2 and E>-1/2.
The left-hand side of Figure <ref> shows an arc of a trajectory corresponding to the level set X(E,D).
§.§ Singular level sets
The singular level sets of the Boltzmann system are placed on the boundary of the region R and within R on the line D=2. The next theorem describes the dynamics on those level sets.
The singular level sets in the phase space of the Boltzmann system consist of:
* A single closed orbit corresponding to the limiting motion on the wall along the minor axis of the ellipse _+, for each (E,D) such that D+2E=0 and 0<D<2;
* A single closed orbit, corresponding to a 2-periodic trajectory on an ellipse whose minor axis is placed along the wall, for each (E,D) such that 1+2DE+4E^2=0 and D>2;
* A single closed orbit corresponding to a periodic trajectory lying on the x_2-axis and bounded by the point (0,-1/E) when D=2 and -1<E<-1/2;
* A closed orbit and a separatrix when D=2 and -1/2<E<0. The closed orbit corresponds to a periodic trajectory lying on x_2-axis and bounded by the point (0,2) and the wall, and the separatrix contains the trajectories with elliptic arcs that contain the point (0,2).
The singular level sets correspond to the values E, D which do not satisfy the inequalities eq:singular-conditions.
We consider separately each of the three possible cases.
*First, we assume D+2E=0.
By setting x_2 =1, we can find the x_1-coordinates of the reflection points of the particle with the wall:
x_1^± = (-A_1(A_2+D) ± L √(D+2E))/(1-A_1^2),
for A_1^2 ≠ 1.
From there, since D+2E=0, the two reflection points coincide and the Kepler conics are tangent to the wall: hyperbolas tangent from above when E>0, or parabolas and ellipses tangent from below when E ≤ 0.
Thus, the limiting motion for the Boltzmann system, whenever E≤0, will be along the minor axis of the limiting caustic _+, see Figure <ref>.
*Second, we assume 1+2DE+4E^2=0.
Under this assumption, the Kepler conics are ellipses orthogonal to the wall at both intersection points, i.e. their minor axes lie on the wall x_2 =1 and major axes lie on the vertical coordinate axis, x_1=0. See Figure <ref>.
We can derive this condition using the involutions i, j, see (<ref>). First, a necessary condition for such a 2-periodic ellipse is x_1^+ = -x_1^-, or equivalently, i(x_1,A_1, A_2) = (-x_1, A_1, A_2). This means
-2(A_2+D)A_1/(1-A_1^2) -x_1 = -x_1, i.e. A_1 =0 or A_2 = -D,
which correspond to conics whose intercepts with the wall are symmetric about x_1=0. To further correspond to a 2-periodic trajectory, we seek fixed points of j:
j(x_1,A_1,A_2) = (x_1,A_1,A_2), i.e. A_1 = x_1(2E-A_2).
Satisfying equations (<ref>) and (<ref>) leads to several possibilities.
Case 1: Suppose A_1=0 and x_1 ≠ 0. Then A_2 = 2E, and the second focus of the conic (<ref>) is F_2 = (0,2), and the equation of the conic simplifies to
x_1^2/(-D/(2E)-2) + (x_2-1)^2/(1/(4E^2))=1.
By equation (3) of <cit.>, these conditions also mean R^2=0, i.e. 1+2DE + 4E^2 =0. This matches Felder's 2-periodic condition and is consistent with the geometric description that the second focus F_2 in the Boltzmann system lies on a circle of radius R/|E| centred at (0,2). In turn, equation (<ref>) becomes
x_1^2/(1/(4E^2)-1) + (x_2-1)^2/(1/(4E^2))=1.
This equation represents an ellipse in the (x_1,x_2) plane for -1/2 < E < 1/2 and E ≠ 0. Moreover, in the (E,D)-plane, the curve 1+2DE+4E^2=0 is a hyperbola with asymptotes D+2E=0 and E=0, and branches lying in the second and fourth quadrants. Further assuming D+2E>0 to ensure two distinct intersection points of the ellipse with the wall, this forces the pair (E,D) to lie on the branch in the second quadrant. Therefore we have a 1-parameter family of ellipses corresponding to 2-periodic trajectories given by equation (<ref>) for -1/2<E<0. This family of ellipses approaches a degenerate ellipse (or segment connecting the foci) as E → -1/2^+ and approaches an ellipse of unbounded major and minor axes as E → 0^-.
Case 2: Suppose A_1=0 and x_1=0. The Kepler conic equation (<ref>) reduces to
x_1^2+x_2^2 = (2A_2+D-A_2x_2)^2
and must pass through the point (0,1), which implies A_2 = -D ± 1. However, this reduces equation (3) of <cit.> to
(D+2E)(D±2)=0,
with D having the opposite sign of A_2. By assuming D+2E>0, both possibilities turn equation (<ref>) into x_1=0. Thus the only such conic is the degenerate conic x_1=0. Dynamically, this can be seen as the particle repeatedly bouncing directly up and down with no component of motion to the left or right.
Case 3: Suppose A_2 = -D. Since L^2=D+2A_2 = -D, we have D <0, which does not belong to the region R.
*Third, assume D^2=4.
Since D is positive within the bifurcation set R, we have D=2.
In the Boltzmann system, the Kepler conic is an ellipse whose foci are F_1 = (0,0) and F_2 = (A_1/E,A_2/E), and whose major axis has length 1/|E|. The second focus F_2 lies on a fixed circle of radius R/|E| centred at (0,2). The degenerate Kepler ellipses occur when, as F_2 varies along that circle, the length of the minor axis approaches 0, which happens if and only if F_2 is at distance 1/|E| from the origin. Thus we seek solutions to the system
x_1^2 + (x_2-2)^2 = (1+2DE+4E^2)/E^2 and x_1^2+x_2^2 = 1/E^2.
The solutions are (x_1,x_2) = (±√(4-D^2)/(2E), -D/(2E)). Thus x_2=-D/(2E) is a line which can intersect the circle of second foci in zero, one, or two points; depending on the location of F_2 relative to this line, the Kepler conic (<ref>) will be an ellipse, hyperbola, or degenerate. See Figure <ref>.
The condition D=2 corresponds to the case when the line x_2 = -D/2E is tangent to the circle at the point (0, -D/2E). As such, all points except the point of tangency are allowable locations for F_2 and the Kepler conic is an ellipse.
§.§ Topological description of isoenergy manifolds
In this section we give a topological description of the isoenergy manifolds for the Boltzmann system using the topological invariants developed by Fomenko and his school, see BMF1990, BF2004, BBM2010 and references therein.
Those invariants can be used for 3-dimensional submanifolds of the phase space of integrable systems with two degrees of freedom.
The Liouville foliation of such submanifolds is represented by a graph, which is obtained by shrinking each leaf of the foliation to a point.
Thus, the smooth families of Liouville tori will create edges which connect together at vertices corresponding to the singular leaves.
Each type of the singular leaf corresponds to a letter-atom.
To complete the topological description, each edge and some subgraphs are marked with rational and natural numbers.
A detailed account of those invariants, together with theoretical background and examples, can be found in <cit.>.
The Fomenko graphs were extensively used for studying the topology of integrable billiards: elliptical ones DR2009, DR2010, within the domains bounded by confocal parabolas <cit.>, with Hooke's potential <cit.>, in the Minkowski plane and on ellipsoids and the hyperboloid in the Minkowski space DR2017,DGR2022, non-convex billiards <cit.>, billiards with slipping <cit.>, and broader classes of systems and their bifurcations SRK2005,VK2018, FV2019, PRK2018,PRK2021.
For the larger body of works on the topic, see also the references therein.
The subsets of the phase space for the Boltzmann system corresponding to fixed negative values of E are compact 3-dimensional manifolds which are represented by the Fomenko graphs as shown in Figure <ref>.
In order to determine the Fomenko graphs and the corresponding numerical invariants, we will analyse behaviour near the singular level sets.
*First, we consider the case -1<E<-1/2.
For any pair (E,D), such that -1<E<-1/2 and -2E<D<2, the corresponding level set is a single Liouville torus, thus one edge of the Fomenko graph corresponds to those tori when E is fixed and D varies.
According to Theorem <ref>,
each level set corresponding to D∈{-2E,2} consists of a single closed orbit, i.e. the A-atom of the Fomenko graph.
Consider the nature of the Boltzmann trajectories near each of the boundary components. Near D+2E=0, the two branches of the caustic _- are near the wall from above and below, keeping the arcs of the Kepler ellipses trapped vertically between the wall and _-, and horizontally within _+. These trajectories limit to the motion along the wall, between the vertices of the minor axis of _+, see Figure <ref>.
Near the upper bound D=2, the length of the minor axis of _+ shrinks to 0.
The Kepler conics are bounded vertically between the wall and _-, and horizontally between the shrinking arcs of _+.
These trajectories limit to a simple up-and-down 2-periodic trajectory between the wall at (0,1) and the lowest point of the circle of second foci, which has coordinates (0,2-R/|E|)=(0,-1/E). See Figure <ref>.
In order to calculate the numerical invariants, we need to choose two admissible bases on a Liouville torus corresponding to a point on the edge of the Fomenko graph.
Each of those two bases is chosen accordingly to one of the singular level sets corresponding to the endpoints of the edge of the graph, and then the numerical invariants are calculated from the matrix of the transformation which maps one basis to the other one.
For details, see <cit.>.
In this case, one admissible basis, taken according to the singular level set with D+2E=0, can be chosen so that it consists of the preimage of a segment orthogonal to the wall and of the segment placed on the wall.
The other admissible basis, taken according to the singular level set with D=2, can be chosen to consist of the preimages of the same segments, but in the reversed order.
*Second, we analyse the case when -1/2< E <0.
For any pair (E,D) such that -1/2<E<0 and D+2E>0, D≠2, 1+2DE+4E^2>0, the corresponding level set is a single Liouville torus; thus the Fomenko graph has two edges, each connecting the singular level set corresponding to D=2 with one of the two remaining singular level sets.
According to Theorem <ref>,
the singular level set corresponding to D=2 is represented by the A^*-atom, while the two other singular level sets correspond to A-atoms of the Fomenko graph.
Near the lower bound D+2E=0, the behaviour is the same as described in the previous case and shown in Figure <ref>, thus the admissible basis can be chosen as in the previous case.
Near the boundary D=2, the caustic _- narrows around the x_2-axis, as shown in the left of Figure <ref>.
Near the upper boundary 1+2DE + 4E^2=0, the radius of the circle of second foci shrinks to 0, and the trajectories are trapped between the inner elliptic caustic _- and the outer elliptic caustic _+. These trajectories limit to the 2-periodic trajectory which aligns with the upper half of the outer caustic _+, as shown in Figure <ref>.
This discussion shows that in this case the Boltzmann system is Liouville equivalent to the billiard within a half-ellipse, as the Fomenko graph shown in the right-hand side of Figure <ref> is identical; see for example DR2009,Fokicheva2014.
§.§ Acknowledgment
The research of M. R. was supported
by the Australian Research Council, Discovery Project No. DP190101838 Billiards within confocal quadrics and beyond, and by the Science Fund of Serbia grant Integrability and Extremal Problems in Mechanics, Geometry and Combinatorics, MEGIC, Grant No. 7744592.
|
http://arxiv.org/abs/2307.03943v1 | 20230708093708 | Camouflaged Object Detection with Feature Grafting and Distractor Aware | [
"Yuxuan Song",
"Xinyue Li",
"Lin Qi"
] | cs.CV | [
"cs.CV"
] |
Camouflaged Object Detection
with Feature Grafting and Distractor Aware
*Corresponding author. This work is supported in part by the National Natural Science Foundation of China (Grant No. 41927805).
Yuxuan Song
College of Computer
Science and Technology
Ocean University of China
Qingdao, China
[email protected]
Xinyue Li
College of Computer
Science and Technology
Ocean University of China
Qingdao, China
[email protected]
Lin Qi*
College of Computer
Science and Technology
Ocean University of China
Qingdao, China
[email protected]
The task of Camouflaged Object Detection (COD) aims to accurately segment camouflaged objects that are integrated into the environment, which is more challenging than ordinary detection as the texture between the target and the background is visually indistinguishable. In this paper, we propose a novel Feature Grafting and Distractor Aware network (FDNet) to handle the COD task. Specifically, we use a CNN and a Transformer to encode multi-scale images in parallel. In order to better exploit the advantages of the two encoders, we design a cross-attention-based Feature Grafting Module to graft features extracted from the Transformer branch into the CNN branch, after which the features are aggregated in the Feature Fusion Module. A Distractor Aware Module is designed to explicitly model the two possible types of distractor in the COD task to refine the coarse camouflage map. We also propose the largest artificial camouflaged object dataset, which contains 2000 images with annotations, named ACOD2K. We conducted extensive experiments on four widely used benchmark datasets and the ACOD2K dataset. The results show that our method significantly outperforms other state-of-the-art methods. The code and ACOD2K will be available at https://github.com/syxvision/FDNet.
Camouflaged Object Detection, Transformer, Convolutional Neural Networks, Distractor
§ INTRODUCTION
Camouflage refers to the way creatures exploit similarity of color, texture, etc., to hide themselves in the background without being discovered by predators. Inspired by the natural camouflage of animals such as the chameleon, artificial camouflage was created to deceive human visual inspection. The computer vision task of Camouflaged Object Detection (COD) aims to accurately segment concealed objects from the background environment, which has recently attracted the interest of researchers and facilitated many applications in different fields. However, due to its inherent nature, locating and segmenting camouflaged objects is much more difficult than ordinary object detection, which makes the COD task extremely challenging.
Recently, many deep learning based methods have been proposed to solve the COD task and have achieved impressive progress. SegMaR <cit.> introduces a Magnification Module to iteratively upsample images to segment camouflaged objects with complex structures. ZoomNet <cit.> showed that multi-scale information is very effective for resolving the appearance and shape variation of objects at different scales. This model uses a shared encoder to encode images at three scales. However, a shared encoder cannot take full advantage of multi-scale images and may cause error propagation. Therefore, we propose to use two different encoders in parallel, and design a Feature Grafting Module for better feature transfer.
Existing COD methods only consider the background as the distractor; for example, SINetv2 <cit.> uses reverse attention to erase the foreground and uses the background to mine potential camouflage areas. However, in the COD task, due to the similarity between the object and the surrounding environment, there are two different types of distractors, as shown in Figure <ref>: 1) in the first row, the stem of the branch is misclassified as a camouflaged object since its texture is very similar to the target; 2) in the second row, the lower half of the animal's body is blended with the black background, and the network misses it. This observation inspired us to explicitly model the semantic features of these two types of distractors under supervision, which can improve detection performance.
In this paper, we propose a Feature Grafting and Distractor Aware network (FDNet) for camouflaged object detection. We employ a Transformer and a CNN to exploit information at different scales, where the Transformer models long-range dependencies for rich context information and the CNN mines local details for edge information. To aggregate the features from these two encoders, we developed a Feature Grafting Module based on cross-attention, which fuses features in a bottom-up manner to produce a coarse prediction map. A Distractor Aware Module was designed to guide the learning by modeling the two types of distractors and exploring potential camouflage regions under the supervision of the ground truth. Benefiting from the designed modules, our proposed network can better recognize distractors and achieve better detection performance.
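The detailed module designs are described in the paper's method section; purely to make the idea of cross-attention-based grafting concrete, a schematic PyTorch block in which CNN features query Transformer features could look as follows. All layer choices, dimensions and names here are assumptions of this sketch, not the authors' implementation.

import torch
import torch.nn as nn

class FeatureGraftingSketch(nn.Module):
    """Schematic cross-attention grafting: CNN features act as queries,
    Transformer features provide keys and values."""
    def __init__(self, cnn_dim, trans_dim, embed_dim=256, heads=8):
        super().__init__()
        self.proj_cnn = nn.Conv2d(cnn_dim, embed_dim, kernel_size=1)
        self.proj_trans = nn.Conv2d(trans_dim, embed_dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, f_cnn, f_trans):
        # f_cnn: (B, C_c, H, W) from the CNN branch; f_trans: (B, C_t, h, w) from the Transformer branch
        q = self.proj_cnn(f_cnn)
        kv = self.proj_trans(f_trans)
        B, C, H, W = q.shape
        q_seq = q.flatten(2).transpose(1, 2)      # (B, H*W, C)
        kv_seq = kv.flatten(2).transpose(1, 2)    # (B, h*w, C)
        out, _ = self.attn(self.norm(q_seq), kv_seq, kv_seq)
        out = out.transpose(1, 2).reshape(B, C, H, W)
        return out + q                            # residual graft into the CNN branch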
In addition, we contribute a new COD dataset to the community, motivated by the fact that most existing COD datasets consist of natural camouflaged animals, whereas only a small portion contains camouflage created by humans. To address this limitation, we collected and annotated 2000 images of artificial camouflage from the Internet, constituting the currently largest artificial camouflage dataset, named ACOD2K. Figure <ref> shows some example images from this dataset. We compared our proposed model with other state-of-the-art models on public datasets and on this new dataset.
Our contributions.
1) Camouflaged objects can be segmented more accurately by our proposed FDNet, which features a multi-scale feature extractor and explicit modeling of distractors. 2) The parallel encoders and the Feature Grafting Module extract and fuse multi-scale features, which are utilized by the Distractor Aware Module to incorporate two different types of distracting semantic cues for target segmentation. 3) A large artificial camouflage dataset, ACOD2K, is proposed and used to compare the performance of our model and other existing models.
§ RELATED WORK
The release of large-scale camouflage datasets (such as COD10K <cit.>) has triggered the invention of many deep-learning-based methods, which have shown impressive results on the COD task. A majority of the recent work is inspired by how human observers visually search for camouflaged targets, as in SINet <cit.>, ZoomNet <cit.> and SegMaR <cit.>. SINet was designed with two stages for searching and recognition, respectively. ZoomNet <cit.> and the recently proposed SegMaR <cit.> enlarge the image in potential target regions to further mine distinguishing clues in a coarse-to-fine manner. Other work proposes the use of auxiliary cues to improve performance, such as making better use of boundary clues <cit.> and frequency-domain perceptual cues <cit.>. Joint task learning was also found to be useful when SOD (Salient Object Detection) and COD are considered simultaneously to boost each other's performance <cit.>.
Unlike a CNN, a Transformer has a global receptive field and can capture richer contextual information. Its success in natural language processing has carried over to computer vision tasks. UGTR <cit.> uses a Bayesian formulation and a Transformer to infer areas of uncertainty. To take advantage of both architectures, we employ a CNN and a Transformer together to enhance the performance of the model.
§ OUR METHOD
§.§ ACOD2K dataset
Camouflage images can be categorized as natural or artificial. Natural camouflage refers to the ability of animals to blend into their surroundings through changes in their physiological characteristics, making them difficult to detect by predators. Artificial camouflage refers to camouflage designed by humans through methods such as painting and camouflage uniforms, specifically targeting human visual perception characteristics in order to more effectively deceive the human visual system. It has great practical value for tasks such as disaster-assisted search and rescue operations. Leveraging this, we have constructed ACOD2K, the largest artificial camouflage dataset. It is worth noting that current camouflaged object detection methods are trained almost exclusively on natural camouflage images, because existing datasets mainly feature natural camouflaged animals, making it difficult to train models that can accurately detect artificial camouflage. For instance, the two most commonly used training datasets in COD tasks, CAMO and COD10K, have an imbalanced distribution of natural and artificial camouflage images. Of the 2,500 images in CAMO, less than 10% are artificial camouflage images. Similarly, COD10K, a large-scale dataset with 10,000 images covering multiple camouflaged objects in natural scenes divided into 5 super-classes, lacks artificial camouflage images. This highlights the need for datasets like ACOD2K, which contains a significant number of artificial camouflage images, to enable the development of more robust camouflaged object detection methods. ACOD2K consists of 2000 images, of which 1500 contain camouflaged objects, 400 contain non-camouflaged objects, and 100 are background images. Most of the images (80%) were collected from the Internet using keywords such as “military camouflage”, “body painting”, and “Ghillie suit”; the rest are from public COD and SOD datasets. Figure <ref> shows some examples from ACOD2K, from which it can be seen that artificial camouflage is intentionally created by humans using materials and colors to conceal the whole target body in the background. High-quality and fine-grained pixel-level matting annotations were carried out for each image. To guarantee quality, an additional researcher further verified all annotations.
§.§ Overall Architecture
The overall structure of our proposed FDNet is shown in Figure <ref>. It is divided into two stages: the first stage generates a coarse feature map, and the second stage refines this map with the Distractor Aware Module. FDNet uses multi-scale images as input. Unlike ZoomNet, which uses shared encoders, we use PVT <cit.> for the main scale and Res2Net50 <cit.> for the sub-scale, which together constitute a parallel encoder. We design a Feature Grafting Module based on cross-attention to aggregate the features of these two scales, which not only extracts valuable semantic clues but also suppresses redundant information and background noise. The multi-scale features are then sent to the Feature Fusion Module for decoding, which achieves more efficient transmission of the encoded information through bottom-up dense connections. Finally, the features are sent into the dual-branch Distractor Aware Module to refine the feature map, supervised by the ground truth.
§.§ Feature Grafting Module
For the main-scale image, we use PVT as the backbone to extract feature maps at 4 stages, denoted as g_i; i=1,2,3,4. Since features with too small a resolution lose most of the information, we do not use g_4. For the sub-scale image, we use Res2Net50 as the backbone to extract a set of feature maps, denoted as f_i; i=1,2,3,4. We graft features on feature groups with the same resolution. Since the resolution of the sub-scale is twice that of the main scale, the resolutions of g_i and f_i+1; i=1,2,3 are the same. For the first two groups, we use pooling for feature grafting to maintain and highlight useful information. In neural networks, deeper features carry richer semantic clues: g_3, extracted by the Transformer, has rich global context information, while f_4, extracted by the CNN, has edge detail information complementary to the global information. We believe that simple fusion methods such as pooling, concatenation, or addition are not effective enough for mutual learning between these two features and cannot suppress the background noise from the CNN well. Therefore, we use cross-attention to incorporate the global semantic cue learned at the main scale into each pixel of the sub-scale. The details are shown in Figure <ref>.
F_4 = Softmax(f_4^Q ·g_3^K^T/√(k))· f_4^V
f_4^Q,f_4^V=θ(f_4) g_3^K=ϕ(g_3)
θ(·) uses flatten and permute operations to transform f_4 ∈ R^C × H × W into f_4^' ∈ R^HW × C. As in self-attention, we pass it through Layer Normalization and a linear transformation to obtain f_4^Q and f_4^V; the process of obtaining g_3^K from g_3 through ϕ(·) is the same as for θ.
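For concreteness, the following is a minimal sketch of the cross-attention grafting described above. The channel dimension of the projections, the placement of Layer Normalization, and the scaling by the square root of the channel dimension are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

class FeatureGrafting(nn.Module):
    """Sketch: queries/values from the CNN feature f4, keys from the Transformer feature g3."""
    def __init__(self, channels):
        super().__init__()
        self.norm_f = nn.LayerNorm(channels)
        self.norm_g = nn.LayerNorm(channels)
        self.to_q = nn.Linear(channels, channels)
        self.to_k = nn.Linear(channels, channels)
        self.to_v = nn.Linear(channels, channels)

    def forward(self, f4, g3):
        b, c, h, w = f4.shape
        f = f4.flatten(2).permute(0, 2, 1)   # theta(.): B x HW x C
        g = g3.flatten(2).permute(0, 2, 1)   # phi(.):   B x HW x C
        q, v = self.to_q(self.norm_f(f)), self.to_v(self.norm_f(f))
        k = self.to_k(self.norm_g(g))
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
        out = attn @ v                       # grafted feature F_4
        return out.permute(0, 2, 1).reshape(b, c, h, w)
```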
§.§ Feature Fusion Module
Unlike previous methods that directly apply a convolution after channel concatenation of adjacent feature layers to output the prediction map, we fuse deeper features as a semantic filter. We first multiply it element-wise with the current layer's features to suppress background interference that may cause anomalies, and then preserve the original information through a residual addition. The details are shown in Figure <ref>.
The features produced by the Feature Grafting Module are denoted as F_i; i=1,2,3,4. Since F_4 is the last feature layer, we directly apply a 3x3 convolution to F_4 to form F̂_4. For F_3, we filter with F_4 to form F_3^filter. Correspondingly, F_2^filter and F_1^filter are given by the following formulas. We take the top-level feature F̂_1 as the final result of the Feature Fusion Module, and the coarse prediction is F_c.
F̂_4 = Conv3(F_4)
F_3^filter = Conv3(Conv1(F_4↑_2))
F̂_3 = Conv3([F_3^filter * F_3 + F_3; F̂_4])
F_2^filter = Conv3(Conv1([F_4↑_4; F_3↑_2]))
F̂_2 = Conv3([F_2^filter * F_2 + F_2; F̂_3])
F_1^filter = Conv3(Conv1([F_4↑_8; F_3↑_4; F_2↑_2]))
F̂_1 = Conv3([F_1^filter * F_1 + F_1; F̂_2])
F_c = Conv3(F̂_1)
Conv3 and Conv1 represent 3x3 and 1x1 convolutions, respectively; ↑ refers to upsampling, [;] denotes channel concatenation, and * represents element-wise multiplication.
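A minimal sketch of one such fusion step is given below; the use of bilinear upsampling and of BatchNorm/ReLU inside the 3x3 convolution blocks are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv3(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU())

class FusionStep(nn.Module):
    """One bottom-up step: deeper features act as a semantic filter gating the current level."""
    def __init__(self, channels, n_deeper):
        super().__init__()
        self.reduce = nn.Conv2d(channels * n_deeper, channels, 1)
        self.filter_conv = conv3(channels, channels)
        self.out_conv = conv3(channels * 2, channels)

    def forward(self, f_cur, deeper_feats, f_hat_prev):
        ups = [F.interpolate(d, size=f_cur.shape[-2:], mode='bilinear',
                             align_corners=False) for d in deeper_feats]
        filt = self.filter_conv(self.reduce(torch.cat(ups, dim=1)))
        gated = filt * f_cur + f_cur            # suppress background noise, keep original info
        return self.out_conv(torch.cat([gated, f_hat_prev], dim=1))
```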
§.§ Distractor Aware Module
We posit that two types of distractors are present in the coarse prediction map generated in the first stage, namely: (i) camouflaged objects that are not detected, referred to as false negatives, ξ_fn, and (ii) objects that are not camouflaged but are misdetected, referred to as false positives, ξ_fp. To address this, we propose a dual-branch Distractor Aware Module that explicitly models the potential interference and aims to improve the accuracy of the segmentation results. As illustrated in the lower part of Figure <ref>, we first use F̂_1 ∈ R^64 × H × W to extract ξ_fn features through a lightweight encoder, designed as two 3x3 convolutions, each followed by BN and ReLU. To make better use of ξ_fn, we generate a predicted map of ξ_fn. During training, the ground truth of ξ_fn is approximated by the difference between the ground truth of the segmentation map and the binarised coarse prediction F_c. We then concatenate ξ_fn with F̂_1 and feed the result into an attention mechanism to generate augmented weights ξ_fn^a. The attention mechanism aims to enhance the features of possible ξ_fn regions. We perform element-wise multiplication of ξ_fn^a with the original feature F̂_1 and then apply a residual connection to generate the enhanced feature F_fn. The network can now better segment those regions that would otherwise be ignored as background.
ξ_fn = Small Encoder(F̂_̂1̂)
fn_GT = GT - φ(F_c)
Similarly, we use the same encoder to extract ξ_fp features and the corresponding predicted map. The ground truth of ξ_fp is approximated by the difference between the binarised coarse prediction F_c and the ground truth of the segmentation map. We concatenate F_fn with ξ_fp along the channel dimension and feed the result into a refine unit consisting of two 3x3 convolutional layers to capture richer context information and thus better distinguish the misdetected areas. Finally, the result is subtracted from F_fn to obtain the prediction feature with the ξ_fp distractor suppressed. After a 3x3 convolution, we obtain the final prediction map F_p. φ(·) represents the binarization operation.
ξ_fp = Small Encoder(F̂_̂1̂)
fp_GT = φ(F_c) - GT
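The two distractor targets above can be computed as sketched below; binarising a logit map via sigmoid and a 0.5 threshold, and clamping negative differences to zero, are assumptions made for illustration.

```python
import torch

def distractor_targets(gt, coarse_pred, thr=0.5):
    """Supervision for the two distractor branches: fn = GT minus binarised coarse map, fp = reverse."""
    binary = (torch.sigmoid(coarse_pred) > thr).float()   # phi(F_c)
    fn_gt = torch.clamp(gt - binary, min=0.0)
    fp_gt = torch.clamp(binary - gt, min=0.0)
    return fn_gt, fp_gt
```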
§.§ Loss Functions
Our network has two types of supervision. For the loss L_F_p on the prediction map, as in most COD methods, we use the weighted BCE loss and the weighted IoU loss (Loss1). For the losses L_fn and L_fp on fn and fp, we use the weighted BCE loss (Loss2). The loss function is as follows.
Loss = L_F_p+ λ L_fn + β L_fp
Loss1 = L_BCE^ω+L_IOU^ω
Loss2 = ∑_i -[ N_p/(N_p+N_n) · y_i log(p_i) + N_n/(N_p+N_n) · (1-y_i) log(1-p_i) ]
In the experiments, λ and β are set to 10. N_p and N_n denote the number of positive and negative pixels, respectively.
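The sketch below assembles the overall objective from the pieces above; the pixel weighting of the weighted BCE/IoU terms is simplified to their unweighted forms, which is an assumption rather than the exact loss used.

```python
import torch
import torch.nn.functional as F

def total_loss(pred, fn_pred, fp_pred, gt, fn_gt, fp_gt, lam=10.0, beta=10.0):
    """Loss = L_Fp + lam * L_fn + beta * L_fp (weighting inside each term simplified)."""
    bce = F.binary_cross_entropy_with_logits(pred, gt)
    prob = torch.sigmoid(pred)
    inter = (prob * gt).sum(dim=(2, 3))
    union = (prob + gt - prob * gt).sum(dim=(2, 3))
    iou = 1.0 - (inter + 1.0) / (union + 1.0)           # soft IoU loss
    l_main = bce + iou.mean()
    l_fn = F.binary_cross_entropy_with_logits(fn_pred, fn_gt)
    l_fp = F.binary_cross_entropy_with_logits(fp_pred, fp_gt)
    return l_main + lam * l_fn + beta * l_fp
```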
§ EXPERIMENTS
§.§ Experiment Setup
Datasets. We perform experiments on four COD benchmark datasets and on our ACOD2K. The public datasets are CAMO <cit.>, CHAMELEON <cit.>, COD10K <cit.> and NC4K <cit.>. Like previous methods, we use 3040 images from COD10K and 1000 images from CAMO for training, and the other datasets for testing. ACOD2K is split into a training set and a test set with a ratio of 8:2.
Evaluation Criteria. We use four metrics commonly used in COD tasks to evaluate model performance: mean absolute error (MAE) <cit.>, weighted F-measure F_β^w <cit.>, E-measure <cit.>, and S-measure <cit.>.
Implementation Details. Our network uses PVT <cit.> and Res2Net50 <cit.> pretrained on ImageNet as backbones. We use a data augmentation strategy of random flips and rotations. During training, in order to balance efficiency and performance, the size of the main scale is set to 288x288 and the batch size to 32. We use SGD with momentum 0.9 and weight decay 0.0005 as the optimizer; the learning rate is initialized to 0.05 and follows a linear decay strategy, and the maximum number of training epochs is set to 50. All experiments are run on an NVIDIA GeForce GTX 3090Ti.
§.§ Comparisons with State-of-the-arts
To show the effectiveness of our method, we compare it with 10 SOTA methods on the public datasets. On our ACOD2K, we compare with 3 COD methods. For a fair comparison, the results of these models are either provided by the authors or obtained by retraining from open-source code.
Quantitative Evaluation. As shown in Table <ref>, our method achieves superior performance on multiple evaluation metrics. Specifically, our method increases F_β^ω by 1.5%, 3.3%, 6%, and 1.9% over the second-best method on the four datasets. Table <ref> shows that FDNet outperforms the second-best method on ACOD2K, improving the four metrics by 1.4%, 2.4%, 1%, and 0.4%.
Qualitative Evaluation. We further show a qualitative comparison of FDNet with other methods in the form of visualization maps. As shown in Figure <ref>, our method not only recognizes the camouflaged objects well but also segments fine edges. In addition, in the second row, our method also works well in the presence of distractors in the image.
§.§ Ablation Studies
As shown in Table <ref>, we conducted five ablation experiments. In A, we remove all key modules, use only single-scale images, and simply apply a convolution after channel concatenation to obtain the final prediction map. In B, we replace the simple decoder of A with the Feature Fusion Module. In C, we use multi-scale images but share the encoder, and the features of different scales are fused by pooling. In D, we use a CNN and a Transformer to encode the images at the two scales respectively and use the Feature Grafting Module to fuse the features. In E, we add the Distractor Aware Module on top of D.
Effectiveness of multi-scale.
By fusing features of different scales, we can explore richer semantic representations. From the second and third rows of Table <ref>, it can be seen that the performance of C is significantly better than that of B, especially on COD10K, where S_α, F_β^w, E_ϕ, and ℳ improve by 4.4%, 8.5%, 2.9%, and 0.9%, respectively.
Effectiveness of Feature Fusion.
From the first and second rows of Table <ref>, B's performance on the four metrics increases by 0.8%, 2.2%, 1.1%, and 0.4% on average, owing to the positive impact of the Feature Fusion Module's bottom-up dense feature-guided structure.
Effectiveness of Feature Grafting.
Compared with C, all metrics of D on the two datasets increase to different degrees; in particular, F_β^w on CAMO increases by 1%. This is largely because the Feature Grafting Module aggregates the advantages of the two different types of encoders well.
Effectiveness of Distractor Aware.
E outperforms D on all datasets, and the visual comparison results in Figure <ref> also clearly verify that the module can mine potential interference areas.
§ CONCLUSION
We propose a novel COD network, FDNet. First, we design the Feature Grafting Module to extract valuable semantic information and suppress background noise. Then, in the Distractor Aware Module, we obtain a more accurate prediction map by refining the two types of distractors. Additionally, we construct a new artificial camouflage dataset, ACOD2K. Experiments on four public datasets and on ACOD2K show that our method significantly outperforms other methods both qualitatively and quantitatively. In the future, we will explore more effective supervision methods for the two types of distractors.
IEEEtran
|
http://arxiv.org/abs/2307.07322v2 | 20230714125751 | A Context-Aware Cutting Plane Selection Algorithm for Mixed-Integer Programming | [
"Mark Turner",
"Timo Berthold",
"Mathieu Besançon"
] | math.OC | [
"math.OC",
"cs.LG",
"90-05"
] |
The current cut selection algorithm used in mixed-integer programming solvers has remained largely unchanged since its creation.
In this paper, we propose a set of new cut scoring measures, cut filtering techniques, and stopping criteria, extending the current state-of-the-art algorithm and obtaining a 5% performance improvement for SCIP over the MIPLIB 2017 benchmark set.
§ INTRODUCTION AND RELATED WORK
[1]Corresponding author.
We focus in this paper on the selection of cutting planes, an essential component of the branch-and-cut framework, which is the main algorithmic paradigm for solving Mixed-Integer Linear Programs (MILPs), classically defined as:
argmin_𝐱 { 𝐜^⊺𝐱 | 𝐀𝐱 ≤ 𝐛, 𝐥 ≤ 𝐱 ≤ 𝐮, 𝐱 ∈ ℤ^|𝒥| × ℝ^n-|𝒥| }
A cut, parameterised by (α, β) ∈ ℝ^n+1, is an inequality α^⊺𝐱 ≤ β that is violated by at least one solution of the LP relaxation but does not increase the optimal value of (<ref>) when added, i.e., it is valid for (<ref>).
It is used to tighten the Linear Programming (LP) relaxation of (<ref>).
Cuts are generated in rounds, where we refer to the process of computing cuts, applying a subset of the computed cuts, and resolving the LP as a separation round. In general, cuts are cheap to compute, and more are computed than actually applied to the LP. The process of deciding which cuts to add to the LP is called cut selection, and is the focus of this paper, in which we introduce a new cut selection algorithm.
Cut selection was considered largely “solved", following computational results on a variety of MILP solvers <cit.>, in that cheap heuristic selection rules were sufficient for good performance. Recently this conclusion has been challenged, with largely machine learning-driven research attempting to extract further performance, see <cit.> for an overview.
Such research, however, is often limited in how it can be deployed in a MILP solver due to the complexity of deploying black-box predictors and their brittle generalisation.
In this paper we introduce a new cut selection algorithm, which revises the three major aspects of cut selection, namely cut scoring measures, cut filtering techniques, and stopping criteria.
We determine default parameter values of our algorithm by leveraging methodology <cit.> and software <cit.> from the algorithm configuration community.
Our algorithm and the accompanying default values will be integrated in the next release of SCIP <cit.>.
§ NEW CUT SELECTION TECHNIQUES
Our new cut selection algorithm expands and generalises the existing one from SCIP <cit.> in three aspects, namely, cut scoring, cut filtering, and stopping criteria.
For all three aspects, we incorporate additional information from the cut and from the current context in which the separation takes place.
§.§ Cut Scoring
The most studied aspect of cut selection is the scoring of cuts, which subsumes their ranking. This is both due to the large variety of scoring measures that exist, see <cit.>, and the common assumption of a fixed selection rule outside of scoring <cit.>. We introduce a new set of cut scoring measures that incorporate information from other non-cutting plane aspects of the MILP solving process.
§.§.§ Pseudo-cost Scoring
Pseudo-costs <cit.>, the dominant decision-maker in state-of-the-art MILP branching rules <cit.>, estimate scores for branching candidates based on the historical objective value improvement observed when branching on the candidates. It has recently been shown in <cit.> that cut scoring measures can complement pseudo-cost-based branching rules for improved solver performance. We address the reverse direction here and use pseudo-costs to augment cut scores, where we let Ψ_i denote the pseudo-cost of variable x_i.
We define the pseudo-cost score of a cut (α, β) ∈ ℝ^n+1 as the expected objective change predicted by the pseudo-costs, and express it as follows:
pscost(α, β) := ∑_i, α_i ≠ 0 Ψ_i ( | x̂_i - α_i (α^⊺x̂ - β)/‖α‖^2 | ),
where x̂ denotes the current LP relaxation solution.
We normalise the result by the maximum pseudo-cost score of a cut in the separation round, so all pseudo-cost scores are in the range [0,1].
§.§.§ Lock Scoring
Down-locks and up-locks, introduced in <cit.>, can be interpreted, for a variable x_i, as the number of constraints that “block" the shifting of x_i in the direction of the variable's lower or upper bound, respectively. Let d_i be the number of down-locks and u_i the number of up-locks of variable x_i. We define the lock score of a cut (α, β) ∈ ℝ^n+1 as:
lock(α, β) := 1/n ∑_i, α_i ≠ 0 (d_i + u_i)
As with the pseudo-cost measure, we normalise the result by the maximum among all cuts in the separation round so the scores are in the range [0,1]. Additionally, as it is initially unclear if we want to reward a cut featuring variables with many locks, i.e. further restrict heavily restricted variables, we also introduce the complement of this measure, i.e. promote cuts that restrict variables with few restrictions.
§.§.§ Sparsity Scoring
Sparsity refers to the fraction of zero coefficients in a cut. In general, dense cuts slow down LP solves <cit.>. We therefore introduce a score that promotes sparse cuts which, given a cut (α, β) ∈ ℝ^n+1, a weight w ∈ ℝ_+, and a maximum density d̄ ∈ [0,1], is defined as:
sparsity(α, β) := max{ w - w/(d̄ · n) · ‖α‖_0 , 0 }
§.§ Cut Filtering
The standard approach in the literature for cut filtering is the parallelism-based approach <cit.>. The approach selects the highest-scoring cut, removes any remaining cut from consideration that is too parallel to the selected cut, and then repeats until some stopping criterion is met or no non-filtered cuts are left to be selected. Such a procedure was shown to be absolutely necessary for good performance in <cit.>, however only w.r.t. trivial selection approaches. We therefore present and discuss two other approaches for cut filtering.
§.§.§ Density Filtering
Density-based filtering, unlike parallelism-based filtering, is performed at the start of the selection process. It immediately removes any generated cut above a given density threshold. Preliminary results for such an approach already exist, see <cit.>, and suggest that 40% density, i.e. a cut with at most 40% of its entries being non-zero, is a potentially desirable setting.
§.§.§ Parallelism-Based Penalties
It is possible to simply impose a penalty on any cut parallel to previously-added ones as opposed to outright removing it from consideration. In our algorithm, this involves reducing the score of all remaining cuts considered parallel to a selected cut.
If the score falls below some threshold, which we set to 0, then the cut is removed from consideration.
§.§ Stopping Criteria
In general for computational studies, and MILP solvers by extension, there is a hard-coded limit on the number of cuts that can be added per round. Typically, this limit is not reached, with the standard parallelism filtering algorithm removing enough cuts by itself as detailed in Subsection <ref>. As we are employing alternate filtering algorithms, however, we believe it is necessary to have an instance-dependent limit. We therefore place a limit on the number of non-zeros added to the LP at each separation round, where the limit is a multiple of the number of columns in the LP.
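The pieces described in this section can be combined as in the sketch below: a density filter applied up front, greedy selection with a parallelism penalty (removal once a score drops to 0), and a stop once the added non-zeros exceed a multiple of the LP column count. The parameter values and the dense-vector representation of cuts are illustrative assumptions, not SCIP's implementation.

```python
import numpy as np

def select_cuts(cuts, scores, ncols, density_thresh=0.4, parallel_thresh=0.9,
                penalty=0.1, nonzero_budget_mult=2.0):
    # Stage 1: density filter, applied before selection starts.
    scores = {i: scores[i] for i, c in enumerate(cuts)
              if np.count_nonzero(c) / ncols <= density_thresh}
    selected, added_nonzeros = [], 0
    # Stage 2: greedy selection with parallelism penalties and a non-zero budget.
    while scores and added_nonzeros < nonzero_budget_mult * ncols:
        best = max(scores, key=scores.get)
        if scores[best] <= 0.0:
            break
        selected.append(best)
        added_nonzeros += np.count_nonzero(cuts[best])
        a = cuts[best] / (np.linalg.norm(cuts[best]) + 1e-12)
        del scores[best]
        for j in list(scores):
            cos = abs(np.dot(a, cuts[j])) / (np.linalg.norm(cuts[j]) + 1e-12)
            if cos >= parallel_thresh:
                scores[j] -= penalty        # penalise instead of hard filtering
                if scores[j] <= 0.0:        # removal threshold of 0, as described above
                    del scores[j]
    return selected
```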
§ EXPERIMENTS AND COMPUTATIONAL RESULTS
We conduct the following two experiments: First we select an appropriate training set for determining the best algorithm configuration of our new cut selector. Second we deploy the best found configuration on a test set to determine the actual MILP solver performance improvement.
Our training set is a combination of instances from the MIPLIB 2017 collection set[MIPLIB 2017 – The Mixed Integer Programming Library <https://miplib.zib.de/>.] <cit.>, strIPlib[<https://striplib.or.rwth-aachen.de/login/>] <cit.>, and SNDlib-MIPs <cit.>. We select the training set by removing any instance from consideration that takes less than 5 seconds or longer than 120 seconds to solve, takes longer than 10 seconds to presolve, or solves in less than 50 or more than 20000 nodes. This selection procedure is repeated with an optimal primal solution preloaded. From the remaining instances, we create a mapping to a feature space, where the features are the proportion of each constraint and variable type, the average row density, the maximum row density, and the objective vector density. Using this mapping, we select 40 instances that are maximally diverse by maximising the distance between any two selected instances in the feature space. Our test set is the MIPLIB 2017 benchmark set.
For all experiments, SCIP 8.0.3 <cit.> is used, with random seeds {1,2,3,4,5}, PySCIPOpt <cit.> as the API, and Xpress 9.0.2 <cit.> as the LP solver restricted to a single thread. For training, we use in non-exclusive mode a cluster equipped with Intel Xeon Gold 6342 CPUs running at 2.80GHz. For testing we run in exclusive mode on a cluster equipped with Intel Xeon Gold 5122 CPUs running at 3.60GHz, where each run is restricted to 48GB memory and a 2-hour time limit. The code used for all experiments is available and open-source[<https://github.com/Opt-Mucca/Context-Aware-Cut-Selection>], and will be integrated in the next release of SCIP.
In addition to the new cut scoring measures introduced in Subsection <ref>, we use standard scoring measures from the literature, namely efficacy, expected improvement, objective parallelism, and integer support <cit.>. We normalise efficacy and expected improvement as in <cit.>, ensuring values in the range [0,1]. Notably, we disable the directed cutoff distance because it would be unfairly advantaged by preloading the optimal solution during training.
All cut scoring measures are included in a weighted sum rule, where the multipliers are hyperparameters.
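A weighted-sum rule of this kind reduces to the trivial sketch below; the measure names and the weight dictionary are placeholders, with the weights being the hyperparameters to be tuned.

```python
def score_cut(measures, weights):
    """measures: {name: normalised value in [0, 1]} for one cut; weights: tuned multipliers."""
    return sum(weights.get(name, 0.0) * value for name, value in measures.items())
```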
§.§ Training
We use SMAC <cit.> to determine the parameter values of our new cut selection algorithm. We run 5 instances of SMAC on our training set using random seeds 1-5, with a limit of 300 complete passes over the training set. Our training metric is the ratio of the shifted geometric mean of solve time to that of SCIP default, with a shift of one second.
The runs generated largely different parameter choices, with all choices resulting in an 8-15% improvement w.r.t. solve time over SCIP default on the training set. The generated parameter choices differed mostly in how to score a cut, where we could find no common parameter choices. For filtering and penalising cuts, however, there was an overwhelming consensus for the following parameters: dense cuts should be filtered at some 40-50% threshold, and parallel cuts should not be filtered, but rather lightly penalised. We additionally note that due to the instance restrictions put on our training set, we do not expect the level of performance to perfectly generalise to larger MILP instances.
§.§ Test
As the cut selection performance space is highly non-linear, aggregating the parameter choices from the various SMAC runs is difficult. We therefore use a combination of the results from SMAC (e.g. density and parallelism filtering), and expert knowledge on the impact of cuts on the overarching solving process.
Comparing our new cut selector to SCIP default, we observe in Table <ref> an improvement of 5% for solve time and 8% for number of nodes.
The distribution of relative time improvement over the instance-seed pairs is visualised in Figure <ref>, and is verified as statistically significant by a Wilcoxon signed-rank test (with a p-value below 5%).
§ CONCLUSION
In this paper we introduced a new cut selection algorithm, which combines new context-aware scoring measures, filtering methods, and stopping criteria. We incorporate the algorithm into SCIP, and obtain a 5% improvement in solve time over the MIPLIB 2017 benchmark set.
§ ACKNOWLEDGEMENTS
We thank Antonia Chmiela and Michael Winkler for helpful discussions about the experimental design. The work for this article has been conducted in the Research Campus MODAL funded by the German Federal Ministry of Education and Research (BMBF) (fund numbers 05M14ZAM, 05M20ZBM). The described research activities are funded by the Federal Ministry for Economic Affairs and Energy within the project UNSEEN (ID: 03EI1004-C).
unsrt
splncs04
|
http://arxiv.org/abs/2307.04205v2 | 20230709152618 | Extending the Forward Forward Algorithm | [
"Saumya Gandhi",
"Ritu Gala",
"Jonah Kornberg",
"Advaith Sridhar"
] | cs.LG | [
"cs.LG"
] |
The Forward Forward algorithm, proposed by Geoffrey Hinton in November 2022, is a novel method for training neural networks as an alternative to backpropagation. In this project, we replicate Hinton's experiments on the MNIST dataset, and subsequently extend the scope of the method with two significant contributions. First, we establish a baseline performance for the Forward Forward network on the IMDb movie reviews dataset. As far as we know, our results on this sentiment analysis task mark the first instance of the algorithm's extension beyond computer vision. Second, we introduce a novel pyramidal optimization strategy for the loss threshold - a hyperparameter specific to the Forward Forward method. Our pyramidal approach shows that a good thresholding strategy causes a difference of up to 8% in test error. [Our code can be found here: https://github.com/Ads-cmu/ForwardForwardhttps://github.com/Ads-cmu/ForwardForward] Lastly, we visualize the trained parameters and derive several significant insights, such as a notably larger (10-20x) mean and variance of the weights acquired by the Forward Forward network.
§ INTRODUCTION
Backpropagation is the most widely used optimization algorithm for training neural networks today. However, while widely successful, the backpropagation algorithm has 3 important limitations.
First, backpropagation is biologically implausible. There is no convincing evidence that the cortex of the brain explicitly propagates error derivatives or stores neural activities for use in a subsequent backward pass <cit.>. Moreover, backpropagation through time (the standard technique for training RNNs) is especially implausible, as the brain does not freeze in time in order to update neural connections.
The second major drawback with backpropagation is the need for perfect knowledge of the forward pass computation in order to compute the correct derivatives. This prevents us from being able to insert "black boxes" or non-differentiable components in the neural network.
Lastly, the need to store forward pass computations and backpropagate errors across layers makes backpropagation power and memory intensive. In order to train really large networks without consuming much power, different methods for training networks will need to be explored.
Geoffrey Hinton proposed the Forward Forward algorithm in November 2022, with the goal of enabling neural networks to learn continuously without the need for backpropagation <cit.>. In his paper, Hinton suggests two significant advantages of the forward-forward algorithm over backpropagation. First, it provides a more plausible model of learning in the human brain, and second, it can make use of very low-power analog hardware, thereby enabling much larger networks to be trained with much less power.
This project investigates the performance of the Forward Forward algorithm in training neural networks. The key contributions of our work are as follows. First, we replicate Hinton's original results on the MNIST dataset. Next, we establish a baseline performance for the Forward Forward network on the IMDb movie reviews dataset. As far as we are aware, our results on this sentiment analysis task mark the first instance of the algorithm's extension beyond computer vision. Lastly, we propose a new hyperparameter optimization strategy for tuning the loss threshold of the Forward Forward network. This pyramidal optimization strategy yields an 11% reduction in network error rate.
Apart from the above, we report two further results. First, we performed extensive ablations on various activation functions for the FF algorithm and report some negative results. Second, we visualize the weights learned by the FF algorithm and record our observations. A detailed discussion of these results, as well as hypotheses that explain them, is left as future work.
§ LITERATURE REVIEW
§.§ Other forward-pass based approaches to training neural networks
A primary objective of the Forward Forward algorithm is to emulate learning processes observed in human brains. To achieve this, the algorithm adheres to a brain model known as Predictive Coding (PC) <cit.>, which characterizes the brain as a predictive system where each layer strives to enhance the accuracy of its own inputs. As each layer in the Forward Forward algorithm adjusts its gradients based on input data, the algorithm can be regarded as an application of PC in the domain of machine learning. Various forms of PC have previously been investigated in machine learning <cit.>. Specifically, supervised predictive coding greatly resembles the forward forward algorithm, as it only involves forward passes up and down the network (from the data to the labels and vice-versa). The recently developed Predictive Forward Forward Algorithm <cit.> extends and integrates the concepts of FF and PC, resulting in a robust neural system capable of learning a representation and generative model. Preliminary results on the MNIST dataset suggest the potential of this brain-inspired, backpropagation-free approach for credit assignment within neural systems.
Another forward pass based approach was proposed by Dellaferrera et al. <cit.> They propose a learning rule that replaces the backward pass with a second forward pass in which the input signal is modulated based on the error of the network. This learning rule addresses various issues such as weight symmetry, dependence of learning on non-local signals and the freezing of neural activity during error propagation. The authors demonstrate the effectiveness of their approach on MNIST, CIFAR-10, and CIFAR-100 datasets.
§.§ Directional approaches to training neural networks
Schmidhuber et al. <cit.> propose the Variable Shared Meta Learning (VSML) algorithm, which unifies various meta-learning approaches. The authors demonstrate on the MNIST and CIFAR10 datasets that simple weight-sharing and sparsity in an NN can express powerful learning algorithms in a reusable fashion. They implement the backpropagation learning algorithm solely by running in forward-mode, eliminating the need for a backward pass.
Baydin et al. <cit.> present a method for computing gradients based solely on the directional derivative that one can compute exactly and efficiently via the forward mode. They call this formulation the forward gradient. They demonstrate forward gradient descent on the MNIST dataset, showing substantial savings in computation and enabling training up to twice as fast in some cases.
§ DATASET DESCRIPTION
Our baseline implementation and threshold ablations were performed on the MNIST dataset <cit.>. In order to demonstrate the extensibility of the Forward Forward network to other domains, we picked a sentiment analysis task using the IMDb reviews dataset <cit.>. The dataset contains 25,000 positive and 25,000 negative movie reviews, split equally into test and train datasets. Each review was preprocessed by removing HTML tags, stop words and by performing stemming. Next, each word of the review was passed through Word2Vec <cit.> to get a lower dimensional representation of the word. Word2Vec uses a 2 layer neural network with no non-linearity in the hidden layer, and therefore can be approximated as a single layer network. Single layer neural networks do not backpropagate gradients and hence Word2Vec is an acceptable feature extractor for the Forward Forward network.
§.§ Generating Positive and Negative Data
MNIST images must have labels embedded into them before they can be passed through the Forward Forward network. This is done by utilising the black border around MNIST images. In order to append label data to images, we set the pixel corresponding to the label amongst the first 10 pixels to 255, and reduce the rest to a value of 0. These images, with correctly appended labels, constitute positive data for the model. For our sentiment analysis task, the positive/negative label was one-hot concatenated to the Word2Vec feature vector.
The Forward Forward algorithm also requires negative data during training. Negative data is generated by randomly appending the wrong label to the input image/review before passing it through the network. We use an equal number of positive and negative samples during training.
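The label-embedding construction described above can be sketched as follows; flattening the image and the exact pixel values used for the one-hot encoding are taken directly from the description, while the helper names are only illustrative.

```python
import numpy as np

def embed_label(image, label, num_classes=10):
    """Overwrite the first `num_classes` pixels of a flattened MNIST image with a one-hot label."""
    x = image.reshape(-1).astype(np.float32)
    x[:num_classes] = 0.0
    x[label] = 255.0
    return x

def make_negative(image, true_label, num_classes=10, rng=np.random):
    """Negative sample: embed a randomly chosen *wrong* label."""
    wrong = rng.choice([l for l in range(num_classes) if l != true_label])
    return embed_label(image, wrong, num_classes)
```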
§ MODEL DESCRIPTION
§.§ Layer Training
Our network consists of several layers, each with its own loss function. The goal of the loss function is to maximise the layer activation for positive data while minimising it for negative data. More concretely, the training loss for each layer is based on the difference between the sum of neuron activations for positive/negative inputs and a threshold hyperparameter value. This threshold hyperparameter is known as the loss threshold, and we perform extensive tuning of this threshold in our analysis. It is worth noting that during the forward pass, the input is normalized to prevent its magnitude from affecting the layer's output activation magnitude.
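A minimal sketch of such a per-layer loss is given below. The use of squared activations as the "goodness" and the softplus surrogate are assumptions chosen for illustration, not necessarily the exact loss used here.

```python
import torch
import torch.nn.functional as F

def ff_layer_loss(pos_act, neg_act, threshold):
    """Push goodness (sum of squared activations) above `threshold` for positive data, below for negative."""
    g_pos = pos_act.pow(2).sum(dim=1)
    g_neg = neg_act.pow(2).sum(dim=1)
    return F.softplus(torch.cat([threshold - g_pos, g_neg - threshold])).mean()
```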
§.§ Network Architecture
As shown in Figure 2, the network consists of 4 fully connected layers with 2000 neurons each. Note that there is no output label layer. We use the Adam optimizer to optimize the network.
§.§ Inference
There are 2 plausible methods of inference for the Forward Forward network. The first method involves using a 1-layer classification neural network that uses the FF network activations as features for its classification task. An alternative method involves appending a label to the input image and passing it through the network. This process is repeated 10 times (once for each label), and the label that produces the maximum activation is chosen as the output label for the image. Between the two methods, we report results for the approach that uses the 1-layer classification network, as it empirically provides better results.
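For concreteness, the second (label-enumeration) inference method can be sketched as below; the `embed_fn` helper, the length normalisation between layers, and the squared-activation goodness are illustrative assumptions.

```python
import torch

def predict_by_goodness(layers, image, embed_fn, num_classes=10):
    """Embed each candidate label, accumulate goodness over layers, return the best label."""
    best_label, best_goodness = None, -float('inf')
    for label in range(num_classes):
        x = torch.as_tensor(embed_fn(image, label), dtype=torch.float32).unsqueeze(0)
        goodness = 0.0
        for layer in layers:
            x = x / (x.norm(dim=1, keepdim=True) + 1e-8)   # normalise input magnitude
            x = layer(x)
            goodness += x.pow(2).sum().item()
        if goodness > best_goodness:
            best_label, best_goodness = label, goodness
    return best_label
```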
§.§ Baselines
The original paper uses a fully connected neural network trained using backpropagation as a baseline network. This backpropagation-trained network has a 1.4% test error. We were able to reproduce this baseline and achieve a similar error rate.
The original paper proposed two major ways in which networks could be trained using Forward Forward (FF): an unsupervised and a supervised example of FF. For this report, our focus was to reproduce the training pipeline and architecture for the supervised example of FF. The original paper achieved around 1.36% test error with its forward forward architecture. Although the paper made clear that the architecture used 4 fully connected layers with 2000 neurons each and ReLU activations, the loss function, optimizer, learning rate, threshold, and scheduling strategy were not elaborated on. Since the forward-forward learning dynamics are highly different from backpropagation, our standard intuitions and starting points did not work well. High thresholds performed better than lower ones, potentially because higher thresholds allow a wider range of squared activations for negative samples. Increasing the threshold from 0.5 to 10 improved our model performance by approximately 8%. However, this also had the side effect of significantly slowing down convergence. We hypothesize that this happened because the model's weights were unable to change fast enough to adjust to the large threshold with our low learning rate. We used the Adam optimizer initialized with a learning rate of 0.01 (unusually high for Adam) to have our model converge within 100 epochs with the high threshold of 10. Using this approach, we were able to achieve a test error of 1.37% (comparable to backprop baselines).
§ RESULTS AND DISCUSSION
§.§ Forward Forward on Sentiment Analysis
A key requirement for biological plausibility is the ability for a training algorithm to work across multiple domains such as vision and language. In this study, we investigated the performance of the Forward Forward algorithm on the IMDB movie reviews dataset, with the aim of assessing its ability to generalize beyond Computer Vision and work effectively on Natural Language Processing tasks. Our results show that the Forward Forward algorithm achieved an accuracy of 84.86% on the test set after 6 epochs, indicating its potential in learning patterns beyond visual data. To compare its performance with traditional backpropagation-based networks, we trained a fully connected network with the same architecture as the Forward Forward network, and an output layer. We observed that the backpropagation-based network was also able to achieve an accuracy of 85% on the test set, albeit in fewer epochs, consistent with convergence findings in computer vision tasks.
Our findings suggest that the Forward Forward algorithm can be an effective alternative to backpropagation-based networks in the context of NLP tasks. Further research is warranted to explore the performance of the Forward Forward algorithm in training embeddings from scratch, and in performing more complex NLP tasks such as language modelling.
§.§ Threshold Ablations and Analysis on Forward Forward
The Forward Forward algorithm introduces a new training hyperparameter - the loss threshold. Finding the appropriate threshold for this hyperparameter is crucial for the algorithm to work well. At each layer of our algorithm, the sum of squared activations for every example in our batch is calculated and compared against this loss threshold. The layer's goal is to maximize this sum so that it exceeds the threshold for positive examples and falls below the threshold for negative examples. This is important because the subtracted value is passed into our loss function, which penalizes larger values.
Hinton, in his experiments, uses a threshold equal to the number of neurons in the layer. This can be rewritten as a threshold proportional to the number of neurons in a given layer, with a proportionality factor (k) of 1. We set this as our baseline for further study.
In our initial experiments, we varied the value of k and tested values ranging from 0.005 to 10 to determine the values that give us the best results. We found that a k between 0.3 and 0.5 gives better results than our initial baseline.
These initial experiments led us to consider whether it would make sense to have different thresholds for different layers. We tried monotonically increasing values of k across layers and also tested monotonically decreasing values. We found that monotonically increasing the threshold across layers performs distinctly better than the other approaches. We hypothesize that larger threshold values in later layers improve performance because the later layers are responsible for higher-level feature recognition, while the lower layers tend to behave as feature extractors. We refer to this monotonically increasing threshold strategy as the pyramidal approach to loss-threshold tuning.
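A minimal sketch of the pyramidal strategy is shown below; the starting factor and the per-layer increment are placeholders, not the tuned values.

```python
def pyramidal_thresholds(layer_sizes, k_start=0.3, k_step=0.2):
    """Threshold per layer = (monotonically increasing factor k) * layer width."""
    return [(k_start + i * k_step) * n for i, n in enumerate(layer_sizes)]

# e.g. for the 4 x 2000-neuron network: pyramidal_thresholds([2000, 2000, 2000, 2000])
```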
We also explored the use of a threshold scheduler in our ablations. The intuition behind this approach is that the model might benefit from having lower thresholds initially while it is still learning, and then gradually increasing the penalty as it trains more. Our results show that using a threshold scheduler provides a promising direction compared to our baseline.
Overall we see an error reduction from 1.8% to 1.3% using this approach.
§.§ Activation Function Analysis
In his forward-forward algorithm, Hinton employs the ReLU activation function. We conducted an investigation into the performance of other commonly used activation functions within the context of the forward-forward algorithm. As illustrated in the graph below, most activation functions perform well with this algorithm. However, one notable observation is that bounded activations such as tanh and sigmoid do not train at all for certain thresholds.
Even after tuning the threshold hyperparameter, these activations do not perform as well as others. One possible hypothesis is that this is due to the nature of the objective function: maximising a bounded activation may require extremely large weight values, even to exceed low thresholds.
§.§ Weight analysis
Lastly, we analyze the weight matrices of the trained network. Our analysis revealed several key findings. Firstly, we observe that the range of weights for the Forward Forward trained network are much larger (10-20x) than that of the backpropagation trained network, with a range of -14.36 to 18.19 compared to -0.67 to 0.43, respectively. Notably, weight decay was not applied in either case. This disparity in weight ranges may be attributed to the objective function of the Forward Forward algorithm, which aims to improve performance by encouraging highly positive activations for positive samples and highly negative activations for negative samples.
Secondly, we found that the range of weights decreased as we went deeper into the network, although the underlying reasons for this pattern requires further investigation. Finally, we observed a strong spike in the weights connected to the encoded label part of the input, which is consistent with the notion that this aspect of the input contains crucial information for the network to predict the correct label.
Taken together, our weight matrix analysis provides additional insights into the workings of the Forward Forward algorithm and suggests potential avenues for future research in understanding the mechanisms underlying its effectiveness.
§ CONCLUSION
Our study explored the effectiveness of the Forward Forward algorithm on data beyond Computer Vision and conducted experiments to understand the effects of various parameters of the algorithm. We found that the Forward Forward algorithm performs comparably to its backpropagation variant and uncovered an important relationship between the threshold parameter and the depth and size of each layer of the forward forward network. Both of these contributions are novel and warrant further interest in exploring the Forward Forward algorithm.
Our study sets the stage for further exploration of more architectures and hypotheses regarding Forward Forward. This could include more complex NLP tasks where text embeddings can be trained from scratch, and the Forward Forward algorithm can be applied over time. Additionally, future work could investigate the use of more biologically inspired activations, such as the negative log of the Student's t distribution, which would bring the algorithm even closer to biological alignment. Overall, our findings suggest that the Forward Forward algorithm holds promise as a viable alternative to backpropagation and merits further exploration in the context of various machine learning algorithms that are biologically aligned.
|
http://arxiv.org/abs/2307.07353v1 | 20230714140125 | Monte Carlo Graph Search for Quantum Circuit Optimization | [
"Bodo Rosenhahn",
"Tobias J. Osborne"
] | quant-ph | [
"quant-ph"
] |
The building blocks of quantum algorithms and software are quantum gates, with the appropriate combination of quantum gates leading to a desired quantum circuit. Deep expert knowledge is necessary to discover effective combinations of quantum gates to achieve a desired quantum algorithm for solving a specific task. This is especially challenging for quantum machine learning and signal processing. For example, it is not trivial to design a quantum Fourier transform from scratch. This work proposes a quantum architecture search algorithm which is based on a Monte Carlo graph search and measures of importance sampling. It is applicable to the optimization of gate order, both for discrete gates, as well as gates containing continuous variables. Several numerical experiments demonstrate the applicability of the proposed method for the automatic discovery of quantum circuits.
Institute for Information Processing (tnt/L3S), Leibniz Universität Hannover, Germany
Institute of Theoretical Physics and L3S, Leibniz Universität Hannover, Appelstrasse 2, 30167 Hannover, Germany
Monte Carlo Graph Search for Quantum Circuit Optimization
Tobias J. Osborne
August 12, 2023
=========================================================
Quantum computing has brought about a paradigm shift in information processing and promises breakthroughs in the solution of industrial use cases <cit.>, physics <cit.>, medicine <cit.>, chemistry <cit.>, biology <cit.>, robotics <cit.>, general pattern recognition <cit.>, machine learning <cit.>, and much more. These opportunities are ameliorated, however, by the significant challenges arising in the discovery and application of quantum software. This is because the design of quantum algorithms still requires expert knowledge. Further, since quantum devices are still small and costly, it is essential to optimize resources, in particular, gate counts. Thus the discovery and optimization of quantum circuits is of significant near-term relevance.
Methods for the automated search for optimal quantum circuits have been investigated in the literature, and the term Quantum Architecture Search (QAS) has been adopted to describe this body of research. The name is borrowed and adapted from Neural Architecture Search (NAS) <cit.>, which is devoted to the study and hyperparameter tuning of neural networks. Recent works on QAS are often specific to a problem setup, e.g., it has been applied to quantum circuit structure learning <cit.>
for finding the ground states of Lithium Hydride
and the Heisenberg model in simulation, as well as for finding the ground state of a Hydrogen
gas. Many QAS-variants are focussed on discrete optimization and exploit optimization strategies for non-differentiable optimization criteria. Here
variants of Gibbs sampling
<cit.>, evolutionary approaches <cit.>, genetic algorithms <cit.>, neural-network based predictors <cit.>, variants with noise-aware circuit learning <cit.>, and the optimization of approximate solutions <cit.> have been suggested. A recent survey on QAS can be found in <cit.>. Going beyond discrete optimisation, it is also possible to exploit gradient-descent based optimization schemes <cit.> or reinforcement learning <cit.> for QAS.
The recent work <cit.> proposes a Monte Carlo Tree Search (MCTS) based on a multi-armed bandit formulation. This paper is closest in spirit to our present work, and the promising results and challenges identified there inform and inspire our investigation. In particular, since a tree cannot contain cycles, the rollout process (sometimes called simulation), can generate nodes and branches which are already part of the tree, leading to multiple identical circuits in the search. This is a consequence of the locality of quantum gate sets, with parallel operations creating cycles (see Fig. <ref> for an illustration).
In this paper we overcome the challenges presented by cycles in the quantum circuit graph by proposing the use of Monte Carlo Graph search (MCGS) <cit.> to optimize a combination of mixed discrete and continuous variables. This is possible since most gates containing continuous variables are smooth (e.g. consisting of rotation coefficients) and thus it is possible to automatically extract the Jacobians for a fast gradient descent while optimizing the quantum computation graph. Our contributions can be summarized as follows:
* We propose a Monte Carlo graph search algorithm for quantum architecture optimization.
* Our model allows for a joint discrete and continuous optimization of the quantum gate ordering and parameters.
* Several applications demonstrate its applicability, e.g., for the optimization of the quantum Fourier transform, diverse quantum cellular automata, and simple quantum machine learning tasks.
* Our source code for optimization will be made publicly available.
§ PRELIMINARIES
In this section we give a brief overview of the physical systems we discuss in the sequel, and provide a description of the architecture search optimization strategies that we compare and contrast. In particular, reference methods frequently used for discrete optimization are briefly introduced. They are later used for a direct comparison with our proposed MCGS algorithm.
We focus on the setting where our quantum information processing device is comprised of a set of N logical qubits, arranged as a quantum register (see, e.g., <cit.> for further details). Thus the Hilbert space of our system is furnished by ℋ≡ (ℂ^2)^⊗ N≅ℂ^2^N. In this way, e.g., a quantum state vector of a 5-qubit register is a unit vector in ℂ^32. We assume throughout that the system is not subject to decoherence and remains pure. (The extension of the results presented here to the mixed-state case will be the subject of a future investigation.)
Quantum gates are the basic building blocks of quantum circuits, similar to logic gates in digital circuits <cit.>. According to the axioms of quantum mechanics, quantum logic gates are represented by unitary matrices so that a gate acting on N qubits is represented by a 2^N× 2^N unitary matrix, and the set of all such gates together with the group operation of matrix multiplication furnishes the symmetry group U(2^N). In order to describe explicit matrix representations we exploit the computational basis {|x_1,x_2,…, x_N⟩ | x_j ∈{0,1}, j = 1, 2, …, N} furnished by the eigenstates of the Pauli Z operator on each qubit j.
Standard quantum gates include the Pauli-(X, Y, Z) operations, as well as Hadamard-, cnot-, swap-, phase-shift-, and toffoli-gates, all of which are expressible as standardised unitary matrices with respect to the computational basis. The action of a quantum gate is extended to a register of any size exploiting the tensor product operation in the standard way. Most gates do not involve additional variables; however, some do, e.g., the rotation gate R_X(θ) applies a parameterised rotation and involves the rotation angle θ as a free parameter. This parameter should then be jointly optimized together with the architecture of the overall quantum circuit.
A quantum circuit of length L is then described by an ordered tuple (O(1), O(2), …, O(L)) of quantum gates; the resulting unitary operation U implemented by the circuit is the product
U = O(L)O(L-1)⋯ O(1).
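To make the composition concrete, the following sketch (our own illustration, not code from this paper) builds the unitary of a small circuit as the ordered product of elementary gate matrices; the qubit-ordering convention of the Kronecker products is an assumption.

```python
import numpy as np

# Single-qubit elementary gates in the computational basis.
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def on_qubit(gate, qubit, n_qubits):
    """Embed a single-qubit gate into the N-qubit register via tensor products
    (qubit 0 is taken as the leftmost tensor factor -- an arbitrary convention)."""
    full = np.array([[1.0 + 0j]])
    for q in range(n_qubits):
        full = np.kron(full, gate if q == qubit else I2)
    return full

def circuit_unitary(gates):
    """Compose U = O(L) O(L-1) ... O(1) from an ordered tuple of gate matrices."""
    U = np.eye(gates[0].shape[0], dtype=complex)
    for O in gates:            # O(1) is applied first, O(L) last
        U = O @ U
    return U

# Example: 2-qubit circuit (O(1) = H on qubit 0, O(2) = X on qubit 1).
gates = (on_qubit(H, 0, 2), on_qubit(X, 1, 2))
print(np.round(circuit_unitary(gates), 3))
```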
§.§ Genetic algorithms
A Genetic Algorithm (GA) belongs to the family of so-called evolutionary algorithms. A GA is a population-based metaheuristic inspired by biological evolution.
It comprises a fitness function to evaluate the individuals of a population, a selection process (driven by the fitness scores) to decide which individuals are used for reproduction, and genetic operators, such as crossovers and mutations, to generate new individuals. These new individuals form a new generation which is further evaluated in an ongoing evolution. Genetic algorithms are commonly exploited in discrete optimization, and the interested reader is referred to <cit.> for further details.
For the numerical experiments carried out in this paper the fitness function is directly given by the optimization task (a loss or quality score). Each individual (quantum circuit) I_1 is represented by an ordered tuple of quantum gates, e.g., I_1=(O_1(1), …, O_1(n)).
For a crossover between two circuits I_1 and I_2, a point on both parents' chromosomes is randomly picked which is called the crossover point. Gates to the right of that point are swapped between the two parent chromosomes. This results in two children,
(O_1(1), …, O_1(j), O_2(j+1), …, O_2(n))
and
(O_2(1), …, O_2(j), O_1(j+1), …, O_1(n)),
each carrying some genetic information from both parents. A mutation is then furnished by a random exchange of a quantum gate. A recent work on evolutionary quantum architecture search for parametrized quantum circuits was presented in <cit.>.
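As an illustration only (the identifiers and the toy gate alphabet below are our own, not taken from the paper), one-point crossover and gate-exchange mutation on circuits represented as ordered gate tuples could look as follows:

```python
import random

# Hypothetical elementary gate alphabet OP; the strings are placeholders for gate objects.
OP = ["H(0)", "H(1)", "X(0)", "X(1)", "CNOT(0,1)", "P(0)", "P(1)"]

def crossover(parent1, parent2):
    """One-point crossover: gates to the right of a random point are swapped."""
    j = random.randint(1, len(parent1) - 1)
    return (parent1[:j] + parent2[j:], parent2[:j] + parent1[j:])

def mutate(circuit, rate=0.1):
    """Mutation: each gate is exchanged for a random element of OP with probability `rate`."""
    return tuple(random.choice(OP) if random.random() < rate else g for g in circuit)

I1 = ("H(0)", "CNOT(0,1)", "P(1)", "H(1)")
I2 = ("X(0)", "H(1)", "CNOT(0,1)", "X(1)")
child1, child2 = crossover(I1, I2)
print(child1, mutate(child2))
```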
§.§ Particle filter
Particle filtering uses a set of samples (which are then called particles) to model a posterior distribution of a stochastic process given some observations. A particle filter is also called a sequential Monte Carlo method <cit.>. These are Monte Carlo algorithms which are commonly used to find approximate solutions for filtering problems of nonlinear state-space systems. More recently they have been applied to quantum systems as Quantum Monte Carlo methods <cit.>. In the controls literature, particle filters are exploited to estimate the posterior distribution of the state x_t of a dynamical system at time t conditioned on the data,
p(x_t|z^t,u^t). This posterior is estimated via the following recursive formula
p(x_t | z^t, u^t) = η_t p(z_t | x_t) ∫ p(x_t | u_t, x_{t-1}) p(x_{t-1} | z^{t-1}, u^{t-1}) dx_{t-1},
where η_t is a normalization constant.
Three probability distributions are required for such a particle filter: (1) a so-called measurement model, p(z_t|x_t), which gives the probability of measuring z_t when the system is in
state x_t; (2) a control model, p(x_t|u_t,x_t-1), which models the effect of a control u_t on the system state. It provides the probability that the system is in state x_t after
executing control u_t at state x_t-1; and (3) an initial state
distribution p(x_0) is required, to specify the user's knowledge about
the initial system state, see also <cit.>. In computer vision, the so-called condensation algorithm is a well-known example of how to perform a conditional density propagation for visual tracking <cit.>.
The implementation of a particle filter can be very similar to that of a genetic algorithm. It can be based on M independent random variables ξ_0^i, (i=1, …, M) with a probability density p(x_0). Based on the underlying distribution, e.g., representing a fitness score, M of these variables are selected ξ_k^i →ξ̂_k^i and diffused using a mutation-like operation, yielding a new set ξ_k+1^i.
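A minimal select-and-diffuse step of such a particle filter over candidate circuits might look as follows (a sketch under our own assumptions; `mutate` stands for any diffusion operation on a circuit):

```python
import numpy as np

def select_and_diffuse(particles, scores, mutate, rng=np.random.default_rng(0)):
    """Resample M particles proportionally to their fitness scores
    (xi_k^i -> hat{xi}_k^i) and diffuse each survivor with a mutation-like
    operation to obtain the next set xi_{k+1}^i."""
    probs = np.asarray(scores, dtype=float)
    probs = probs / probs.sum()
    chosen = rng.choice(len(particles), size=len(particles), p=probs)
    return [mutate(particles[i]) for i in chosen]
```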
§.§ Simulated annealing
Simulated Annealing (SA) is another probabilistic technique for approximating the optimum of a given function <cit.>. The name derives from annealing in metallurgy, where the process involves heating and a controlled cooling of a material to change and control its physical properties.
As an optimization scheme the algorithm works iteratively with respect to time t given a state x_t.
At each step, the simulated annealing heuristic samples a neighboring state x̂_t of the current state x_t. Then a probabilistic decision is made to decide whether to move to the new state x_t+1=x̂_t or to remain in the former state x_t+1=x_t.
The probability of making the transition from the current state x_t to the new state x̂_t
is defined by an acceptance probability function
P(e(x_t), e(x̂_t), T). The function e(x) evaluates the energy of this state, which is in our case
the fitness score given by the optimization task (e.g. the ℓ_2-loss). The parameter T is a time-dependent variable dictating the behavior of the stochastic process according to a cooling scheme or annealing schedule. The P function is typically chosen in such a way that the probability of accepting an uphill move decreases with time and it decreases as the difference e(x̂_t)-e(x_t) increases.
Thus, a small increase in error is likely to be accepted so that local minima can be avoided, whereas a larger error increase is not likely to be accepted. A typical function for P takes the form
P(e(x_t), e(x̂_t), T) ∝ exp( -( e(x̂_t) - e(x_t) ) / (k T) ),
with k > 0 a damping factor.
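In code, the acceptance rule and a simple geometric cooling schedule can be sketched as follows (the schedule and the constants are illustrative choices, not the paper's settings):

```python
import math
import random

def accept(e_curr, e_cand, T, k=1.0):
    """Accept improvements always; accept uphill moves with probability
    exp(-(e_cand - e_curr) / (k * T)), which shrinks as T decreases."""
    if e_cand <= e_curr:
        return True
    return random.random() < math.exp(-(e_cand - e_curr) / (k * T))

def simulated_annealing(x0, energy, neighbor, T0=1.0, alpha=0.95, steps=1000):
    """Geometric cooling T_{t+1} = alpha * T_t; `neighbor` samples a state near x_t."""
    x, T = x0, T0
    for _ in range(steps):
        cand = neighbor(x)
        if accept(energy(x), energy(cand), T):
            x = cand
        T *= alpha
    return x
```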
§.§ Monte Carlo tree search
Monte-Carlo tree search (MCTS) is a heuristic search algorithm for decision processes <cit.>. It makes use of random sampling and very efficiently balances the well-known exploration-exploitation dilemma in large search spaces. A typical example is provided by game states where non-promising game configurations are avoided, e.g., typical for board games such as chess or tic-tac-toe. MCTS is a common approach in reinforcement learning, typically in combination with deep reinforcement learning <cit.>. As it visits more interesting nodes more frequently, it grows asymmetrically and focusses the search time on more relevant parts of the tree.
Saffidine et al. <cit.> present a framework for testing various algorithms that deal with transpositions in MCTS. They call this framework Upper Confidence bound for Directed acyclic graphs (UCD) and apply this formalism to overcome the exploration-exploitation dilemma. Their search strategy in the DAG follows the Upper Confidence bounds for Trees (UCT) algorithm <cit.>. These predecessors have more recently been applied to Monte Carlo Graph Search to optimize game play in AlphaZero-based reinforcement learning <cit.>.
§ PROBABILISTIC GRAPHICAL MODELS
In this section we describe the graphical model we exploit to characterize the search space of quantum circuits.
We assume throughout that we have a fixed set 𝒪𝒫={O_1, O_2, …} of elementary quantum gates that we are allowed to apply. Note that in 𝒪𝒫 the same unitary gate acting on different qubits is considered to be a different elementary gate. E.g., the Pauli-X operator acting on qubit 1, written here as X(1), is considered to be a different elementary gate to X(2), which is the Pauli-X operator acting on qubit 2. Starting with the identity operator 𝕀 we can build quantum circuits by selecting elementary gates from 𝒪𝒫 and multiplying from the left. (Thanks to the universality theorem <cit.> we know that we can approximate an arbitrary unitary to arbitrarily good accuracy with a sufficiently long product of such gates.)
We associate a vertex from a vertex set V to each quantum circuit built from a product of elementary gates from 𝒪𝒫. We connect with an edge two such vertices if the corresponding quantum circuit differs (on the left) by an elementary gate. In this way, a collection of quantum circuits is endowed with a graph structure, with vertices decorated by quantum circuits and edges weighted by elementary gates. In Fig. <ref> a tiny example graph for differently ordered quantum gates is depicted. The edges are labelled by possible gates of the quantum circuit. The nodes are decorated by the resulting unitary when concatenating the operations along the shortest path. Thus, each node is identified with a possible quantum circuit. It is important to note that this graph contains cycles since identical quantum circuits have multiple representations with different gates and gate orders.
In Fig. <ref> we illustrate a few steps of such a growing graph model. As the depth of the graph grows exponentially with the number of gates and nodes, it is computationally infeasible to precompute such a graph for all possible circuits. E.g., a tiny set of 20 elementary quantum gates can be assembled to build 20^5 combinations for quantum circuits of length 5. Thus it is not possible to evaluate all configurations in a feasible time to solve for a specific optimization task.
Since the graph model generated by the evaluation of quantum gates can have cycles, we apply a Monte Carlo search on the graph model with quantum gates as transitions. Thus, given a specific task, every node receives a quality score which is used to compute a probability for the selection of this node. Based on the random selection and the already explored operations, a new operation is randomly selected to grow the graph. Once a solution is found, an efficient quantum circuit can be generated by computing the shortest path in the graph from the start node to the target node.
This strategy is formalised as follows: We have a set of vertices V – associated with quantum circuits – of a graph G=(V;E) with V={v_1, … v_n} and we build a probability function p(v_i), ∑_i p(v_i)=1, which assigns to each node of the graph a probability for selection. Poisson sampling is then exploited as the underlying sampling process. It is assumed that each vertex of the graph is an independent Bernoulli trial. Following standard mathematical conventions, the first-order inclusion probability of the ith element of the graph is denoted by the symbol π_i = p(v_i). We further associate to each vertex of the graph an underlying task specific quality score s_i≥ 0. (Here larger values of s_i imply better quality.) Accordingly, we compute the first-order inclusion probability via
π_i=s_i/∑_j s_j. This paradigm of
Monte Carlo Search
<cit.> and adapted Gibbs sampling <cit.> is used to iteratively grow a graph containing the effects of ordered quantum operations as trajectories in this graph, see Figure
<ref>.
Our MCGS procedure is now as follows: Given a set of vertices V={v_1, … v_n} and a probability function
p(v_i), Poisson sampling is used to select an existing node v_i from the graph. A quantum gate is then randomly selected from the set 𝒪𝒫 of elementary gates according to a uniform distribution and applied (from the left) to the quantum circuit associated with v_i. The result is a new circuit, which is either associated with an existing node or a new one [At this stage it may be convenient to introduce an approximation factor ϵ: Two circuits U and V are then considered to be equivalent if ||U-V||≤ϵ.]. If the circuit at a node already exists, e.g. v_j, a new edge (v_i, v_j) is added to the graph, decorated with edge weight given by the applied elementary gate. If the quantum circuit does not exist, the graph is extended by adding a new node v_N+1 and an edge (v_i, v_N+1).
Since the probabilities vary with respect to the quality score of the nodes, the graph grows asymmetrically. Once a node is reached which (sufficiently accurately) solves the optimization task, a shortest path from v_1 to the target node gives the shortest available quantum circuit, see also Figure <ref>. Figure <ref> summarizes the basic steps of the MCGS Algorithm.
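The following sketch condenses one growth step of this procedure; the helper functions (`apply_gate`, `equivalent`) and the bookkeeping are our own simplifications of the description above.

```python
import numpy as np

def mcgs_step(nodes, scores, edges, OP, apply_gate, equivalent,
              rng=np.random.default_rng(0)):
    """One growth step: sample an existing node v_i with probability
    pi_i = s_i / sum_j s_j, apply a uniformly chosen elementary gate, and
    either add an edge to an (epsilon-)equivalent existing node or create
    a new node v_{N+1} with edge (v_i, v_{N+1})."""
    pi = np.asarray(scores, dtype=float)
    pi = pi / pi.sum()
    i = rng.choice(len(nodes), p=pi)
    gate = OP[rng.integers(len(OP))]
    candidate = apply_gate(gate, nodes[i])        # left-multiply onto the circuit at v_i
    for j, circuit in enumerate(nodes):
        if equivalent(candidate, circuit):        # circuit already represented in the graph
            edges.append((i, j, gate))
            return j
    nodes.append(candidate)                       # otherwise extend the graph with a new node
    edges.append((i, len(nodes) - 1, gate))       # the caller evaluates and stores its quality score
    return len(nodes) - 1
```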
Please note that such a graph can also be reused for different kinds of optimizations and it can be analyzed very generally to identify cycles, clusters and other structural properties on the effect of quantum gates.
§.§ Optimization of Continuous Variables
Several gates can contain continuous variables for optimization, e.g. phase shift gates
P(ϕ) =
[ 1   0
  0   exp(iϕ) ],
with corresponding Jacobian
∂P(ϕ)/∂ϕ =
[ 0   0
  0   i·exp(iϕ) ].
Since the involved functions are smooth and differentiable,
the Jacobian of such a matrix and the Jacobian for a chain of operations is easy to compute via the product rule. Thus, given
a quantum circuit of length L which is described by an ordered tuple (O(1), O(2), …, O(L)) of quantum gates and Φ=(ϕ_1 …ϕ_k) continuous variables within this chain,
the resulting unitary operation is then a function
U(Φ)=U(ϕ_1, …, ϕ_k).
This function is typically used for the optimization of a loss function, for example the distance from a target matrix O, e.g., a DFT matrix. For numerical and implementation convenience the loss function was chosen to be the Frobenius norm between O and U(Φ),
L(Φ) = ‖ O - U(Φ) ‖_F = √( tr[ (O - U(Φ))^† (O - U(Φ)) ] ).
Although the Frobenius norm does not have a simple operational interpretation, it is easy to compute both numerically and also experimentally via a swap test.
The Jacobian of the loss function is given by
∇_Φ L(Φ) =
[ ∂ L(Φ)/∂ϕ_1, … ,∂ L(Φ )/∂ϕ_k].
This vector can be numerically computed and used for optimization by using automatic differentiation.
Optimization of the involved parameters can be then carried out with a gradient descent iteration using
Φ^t+1 = Φ^t - η∇_Φ L(Φ).
Here, η denotes a damping factor which has been set to 0.2 for all our experiments.
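As a toy illustration (our own example; numerical gradients stand in for automatic differentiation, and the one-qubit chain and Pauli-X target are assumptions made for brevity), the continuous parameters of a short chain can be fitted to a target unitary with the damped gradient step above:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P = lambda phi: np.array([[1, 0], [0, np.exp(1j * phi)]], dtype=complex)

def U(phis):
    """Toy parametrized chain U(Phi) = P(phi_2) H P(phi_1) H on one qubit."""
    return P(phis[1]) @ H @ P(phis[0]) @ H

def loss(phis, target):
    """Frobenius-norm loss L(Phi) = ||O - U(Phi)||_F from the text."""
    return np.linalg.norm(target - U(phis))

def grad_sq(phis, target, eps=1e-6):
    """Central finite differences of L^2 (a stand-in for automatic differentiation;
    descending on the squared loss keeps the fixed-step iteration stable)."""
    g = np.zeros_like(phis)
    for k in range(len(phis)):
        d = np.zeros_like(phis)
        d[k] = eps
        g[k] = (loss(phis + d, target) ** 2 - loss(phis - d, target) ** 2) / (2 * eps)
    return g

target = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X as the target O
phis = np.array([0.3, 0.3])
eta = 0.2                                            # damping factor
for _ in range(500):
    phis = phis - eta * grad_sq(phis, target)
print(np.round(phis, 3), round(loss(phis, target), 6))
```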
§ EXPERIMENTS
Since the MCGS algorithm we describe here is capable of optimizing both discrete and mixed discrete and continuous settings, the experiments are divided in two parts. In the first part only results for discrete optimizations are presented. In the second part we also describe some experiments involving mixed discrete and continuous variables.
§.§ Discrete quantum architecture search
The first experiment we conducted targeted the optimization of the quantum Fourier transform. It is well known that the steps of the radix-2 FFT can be realised as a quantum circuit using primarily phase shift and Hadamard gates. E.g., for an 8-dimensional FFT, three qubits are sufficient and 7 gates can be used to compute the FFT matrix.
In the first experiment we compared the optimization of quantum circuits of length 1, 2, …, 7, for a given predefined set of elementary quantum gates. Note, for a database of elementary gates of size 32 and a circuit length of 7 there are 32^7 ∼ 3.5 × 10^10 different quantum circuits possible; the search space grows exponentially and is already infeasibly large (for exhaustive searches) for larger circuits. In Fig. <ref> we present the comparison of four different baselines, based on a naive random sampling, a genetic algorithm (Sec 3.2), a particle filter (Sec. 3.3), and simulated annealing (Sec. 3.4), respectively, along with our proposed Monte Carlo Graph Search. The x-axis records the circuit length required for a predefined quantum circuit. The complexity of the optimization problem increases exponentially with circuit length.
Note that the y-axis is log-scaled, so that a linear slope indicates exponential growth in complexity. The error bars depict the standard deviation. For this experiment, the number of experiments/observations was set to 10. In comparison to the four baselines, our proposed Monte Carlo Graph Search requires far fewer samples to reach a solution. One explanation for this is that unnecessary samples (e.g., those leading to cycles) are efficiently avoided.
In the next experiment we analyze the efficiency of the generated quantum circuits in terms of the number of required gates. As the computation graph contains cycles, a valid question is whether standard sampling-based approaches for QAS can lead to solutions which require more gates than necessary, resulting in inefficient code.
The approaches of Gibbs sampling, simulated annealing, and the proposed MCGS model allow one to optimize quantum circuits where the resulting code length is not fixed. Due to the diffusion and crossover steps, it is not clear how one can do this for the particle filter and the genetic algorithm, so these approaches were omitted for this experiment. In Figure <ref> the mean and standard deviation of random sampling, simulated annealing, and MCGS for varying optimal circuit length tasks are shown. The x-axis is labelled by the optimal circuit length and the y-axis by the required circuit lengths (including standard deviation) of the different optimizers. The optimal graph is a straight line, which was achieved by our proposed MCGS. Thus, the MCGS always finds the optimal length, whereas the sampling and annealing schemes more often tend to find inefficient solutions; the proposed MCGS therefore jointly ensures efficient models during optimization.
§.§ Quantum circuits for classical cellular automata
A cellular automaton (CA) is a mapping on a set of states of connected cells (e.g. arranged as a graph).
Each cell has a cell state (e.g. binary) which changes according to a predefined set of rules given by the local neighbors of each cell. Simple and nontrivial cellular automata are furnished already for one-dimensional graphs, with two possible states per cell. The neighbors are defined as the directly adjacent cells on either side of the cell. In this setting the rules for changing the state of each cell can be defined as a mapping from a 3-dimensional binary state to a new binary state. Thus, there are 2^3 = 8 patterns for a neighborhood. There are then 2^8 = 256 possible combinations for rules, which describe 256 different cellular automata. In general they are referred to by their Wolfram code and are called R-X automata, with X being a number between 0 and 255. Several papers have analyzed and compared these 256 cellular automata. The cellular automata defined by rule 30, rule 90, rule 110, and rule 184 are particularly interesting; their update rules are given by the binary expansion of the respective Wolfram code (bit i of the code fixes the new cell state for the neighborhood pattern whose binary value is i).
Rule 30 exhibits so-called class 3 behavior. This means that even simple input patterns lead to chaotic, and seemingly random histories. When starting from a single live cell, Rule 90 produces a spacetime diagram resembling the Sierpinski triangle. Rule 110, similar to Game of Life, exhibits what is called class 4 behavior, which means it is neither completely random nor completely repetitive. Finally, Rule 184 is notable for solving the majority problem as well as for its ability to simultaneously describe several, seemingly quite different, particle systems. For examples Rule 184 can be used as a simple model for traffic flow.
Thus, several of these transition rules exhibit interesting aspects for researchers in mathematics and optimization, as well as biology and physics. Due to the simple definition of these transition rules, and their consequent rich behaviour, these CA provide a diverse and interesting class of targets for quantum circuit encoding.
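For reference, a classical simulation of such an elementary CA takes only a few lines; the sketch below (ours, with periodic boundary conditions assumed) produces the input–output behaviour that the quantum circuits are trained to reproduce.

```python
import numpy as np

def ca_step(state, rule):
    """One synchronous update: bit i of the Wolfram code `rule` is the new cell
    value for the neighborhood whose 3-bit pattern equals i (periodic boundaries)."""
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=int)
    left, right = np.roll(state, 1), np.roll(state, -1)
    patterns = 4 * left + 2 * state + right
    return rule_bits[patterns]

def evolve(rule, width=31, steps=15):
    state = np.zeros(width, dtype=int)
    state[width // 2] = 1                      # single live cell in the middle
    history = [state.copy()]
    for _ in range(steps):
        state = ca_step(state, rule)
        history.append(state.copy())
    return np.array(history)

for r in (30, 90, 110, 184):                   # the four rules discussed above
    print("Rule", r, "\n", evolve(r, width=15, steps=5))
```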
Fig. <ref> visualizes the basic concept. Given an 8-dimensional one-hot encoded input vector (encoding the 3 input binary elements), the quantum circuit is applied to the initial vector |000⟩. The resulting probability indicates if the new state is zero or one.
Fig. <ref> shows example realizations of the cellular automata 30, 90, 110 and 184. The images show the evolution of the pattern over time (from top to bottom). The two examples at the left side of the figure have been initialized with a zero vector and a one entry at the middle, the examples at the right side have been initialized with a random binary pattern.
Fig. <ref> shows quantum circuits for all possible 8-entry vectors corresponding to the 256 Wolfram Codes which have been optimized using our quantum architecture search algorithm.
§.§ Mixed Discrete-Continuous Quantum Architecture Search
Our approach exploiting the Monte Carlo Graph Search can also be easily extended to optimize circuits involving a combination of discrete and continuous variables, as outlined before. In the following experiments we use quantum architectures to solve simple machine learning tasks.
For the experiments, the classical wine, zoo and iris datasets were used. The datasets present multi-class classification tasks, with three categories for the wine dataset, seven categories for the zoo dataset, and three for the iris dataset. The datasets are all available at the UCI repository <cit.>.
To model a classification task using a quantum circuit, first the data is encoded as a higher-dimensional binary vector.
Taking the iris dataset as a toy example, it consists of 4-dimensional data encoding sepal length, sepal width, petal length and petal width.
After separating training and test data, a k-means clustering on each dimension with k=3 is used on the training data. Thus, every datapoint can be encoded as a 4 × 3 = 12-dimensional binary vector which contains exactly 4 non-zero entries. For the given cluster centers, the same can be done with the test data. Thus, a binary encoding is used to represent the datasets, as sketched below.
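A sketch of this preprocessing (our own reconstruction with scikit-learn; the exact split and clustering settings are assumptions) for the iris data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

k = 3
# One k-means model per feature dimension, fitted on the training data only.
kms = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train[:, [d]])
       for d in range(X_train.shape[1])]

def binary_encode(X):
    """Concatenate the one-hot cluster assignment of every feature dimension,
    giving a 4 x 3 = 12-dimensional binary vector with exactly 4 ones."""
    blocks = []
    for d, km in enumerate(kms):
        labels = km.predict(X[:, [d]])
        blocks.append(np.eye(k, dtype=int)[labels])
    return np.hstack(blocks)

print(binary_encode(X_test).shape)   # (45, 12)
```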
Table <ref> summarizes the used datasets, the amount of features (the dimension of each sample), its binary dimensionality, the used qubits to represent the problem as well as the amount of training data, the amount of target classes and the gained accuracy with the optimized quantum model.
Similar to Fig. <ref>, a part of the quantum register is used to encode the probability of a classification label. The final decision is then based on the highest probability.
Figure <ref> shows example outcomes of the optimized quantum codes for the wine and iris datasets. Note that the results can vary considerably; this depends on the random selection of training and test data and on the random process of the graph generation. Table <ref> summarizes the three used datasets and the overall performance. Note that the overall quality is similar to the results obtained with decision trees or shallow neural networks.
§ SUMMARY
In this paper we have proposed a quantum architecture search algorithm based on Monte Carlo graph search and measures of importance sampling. Each trajectory in this graph leads to a quantum circuit which can be evaluated according to whether it achieves a specific task. Our model also allows for the optimization of mixed discrete and continuous gates and several experiments demonstrate the applicability for different tasks, such as matrix factorization, producing cellular automata vectors, and simple machine learning models. A comparison with classical approaches such as greedy sampling, genetic algorithms, particle filter or simulated annealing was carried out and demonstrates that the graph model performs more efficiently, since cycles and redundancies are explicitly avoided. The shortest path from the start node to the target node provides efficient algorithms in terms of circuit length. A future challenge is to overcome the computation time required by the graph model as the number of nodes is increased. One immediate and relevant next step is to generalize the method described here to apply to mixed states and completely positive maps.
§.§ Acknowledgments
This work was supported, in part, by the Quantum Valley Lower Saxony and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)–Project-ID 274200144–SFB1227, under Germany's Excellence Strategy clusters EXC-2123 QuantumFrontiers and EXC-2122 PhoenixD.
[1] R. Rietsche, C. Dremel, S. Bosch, L. Steinacker, M. Meckel, and J. M. Leimeister, Electronic Markets (2022).
[2] J. Preskill, Quantum 2, 79 (2018).
[3] S. Endo, S. C. Benjamin, and Y. Li, Physical Review X 8, 031027 (2018).
[4] S. P. Jordan, K. S. Lee, and J. Preskill, Science 336, 1130 (2012).
[5] P. Bargassa, T. Cabos, A. C. O. Choi, T. Hessel, and S. Cavinato, Physical Review D 104, 096004 (2021).
[6] J. Davids, N. Lidströmer, and H. Ashrafian, "Artificial intelligence in medicine using quantum computing in the future of healthcare," in Artificial Intelligence in Medicine, edited by N. Lidströmer and H. Ashrafian (Springer International Publishing, Cham, 2022), pp. 423–446.
[7] Y. Cao, J. Romero, J. P. Olson, M. Degroote, P. D. Johnson, M. Kieferová, I. D. Kivlichan, T. Menke, B. Peropadre, N. P. Sawaya, et al., Chemical Reviews 119, 10856 (2019).
[8] V. Marx, Nature Methods 18, 715 (2021).
[9] M. Mannone, V. Seidita, and A. Chella, Swarm and Evolutionary Computation 79, 101297 (2023).
[10] F. Bapst, W. Bhimji, P. Calafiura, H. Gray, W. Lavrijsen, and L. Linder, Computing and Software for Big Science 4 (2020).
[11] A. Mott, J. Job, J.-R. Vlimant, D. Lidar, and M. Spiropulu, Nature 550, 375 (2017).
[12] S. L. Wu, J. Chan, W. Guan, S. Sun, A. Wang, C. Zhou, M. Livny, F. Carminati, A. Di Meglio, A. C. Li, et al., Journal of Physics G: Nuclear and Particle Physics 48, 125003 (2021).
[13] D. Willsch, M. Willsch, H. De Raedt, and K. Michielsen, Computer Physics Communications 248, 107006 (2020).
[14] R. Miikkulainen, "Neuroevolution," in Encyclopedia of Machine Learning and Data Science, edited by D. Phung, G. I. Webb, and C. Sammut (Springer US, New York, NY, 2020), pp. 1–8.
[15] S. Xie, H. Zheng, C. Liu, and L. Lin, in International Conference on Learning Representations (2019), https://openreview.net/forum?id=rylqooRqK7.
[16] M. Ostaszewski, E. Grant, and M. Benedetti, Quantum 5, 391 (2021).
[17] L. Li, M. Fan, M. Coram, P. Riley, and S. Leichenauer, Phys. Rev. Res. 2, 023074 (2020).
[18] L. Franken, B. Georgiev, S. Mucke, M. Wolter, R. Heese, C. Bauckhage, and N. Piatkowski, in 2022 IEEE Congress on Evolutionary Computation (CEC) (2022), pp. 1–8.
[19] R. Rasconi and A. Oddi, Proceedings of the AAAI Conference on Artificial Intelligence 33, 7707 (2019).
[20] U. Las Heras, U. Alvarez-Rodriguez, E. Solano, and M. Sanz, Phys. Rev. Lett. 116, 230504 (2016).
[21] S.-X. Zhang, C.-Y. Hsieh, S. Zhang, and H. Yao, Machine Learning: Science and Technology 2, 045027 (2021).
[22] L. Cincio, K. Rudinger, M. Sarovar, and P. J. Coles, PRX Quantum 2, 010324 (2021).
[23] L. Zhou, S.-T. Wang, S. Choi, H. Pichler, and M. D. Lukin, Phys. Rev. X 10, 021067 (2020).
[24] W. Zhu, J. Pi, and Q. Peng, in Proceedings of the 6th International Conference on Algorithms, Computing and Systems (ICACS '22) (Association for Computing Machinery, New York, NY, USA, 2023).
[25] M. Watabe, K. Shiba, C.-C. Chen, M. Sogabe, K. Sakamoto, and T. Sogabe, Quantum Reports 3, 333 (2021).
[26] S.-X. Zhang, C.-Y. Hsieh, S. Zhang, and H. Yao, Quantum Science and Technology 7, 045023 (2022).
[27] M. M. Wauters, E. Panizon, G. B. Mbeng, and G. E. Santoro, Phys. Rev. Res. 2, 033446 (2020).
[28] P. Wang, M. Usman, U. Parampalli, L. C. L. Hollenberg, and C. R. Myers, IEEE Transactions on Quantum Engineering, 1 (2023), doi:10.1109/TQE.2023.3265709.
[29] M. C. Fu, in 2018 Winter Simulation Conference (WSC) (2018), pp. 222–236.
[30] P. Kaye, R. Laflamme, and M. Mosca, An Introduction to Quantum Computing (Oxford University Press, Inc., USA, 2007).
[31] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
[32] P. Selinger, Mathematical Structures in Computer Science 14, 527 (2004).
[33] A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computing (Springer, New York, NY, 2003).
[34] L. Ding and L. Spector (Association for Computing Machinery, New York, NY, USA, 2022), pp. 2190–2195.
[35] A. G. Wills and T. B. Schön, Annual Review of Control, Robotics, and Autonomous Systems 6 (2023).
[36] J. Gubernatis, N. Kawashima, and P. Werner, Quantum Monte Carlo Methods: Algorithms for Lattice Models (Cambridge University Press, 2016).
[37] S. Thrun, J. Langford, and V. Verma, in Advances in Neural Information Processing Systems, Vol. 14, edited by T. Dietterich, S. Becker, and Z. Ghahramani (MIT Press, 2001).
[38] A. Blake and M. Isard, in Advances in Neural Information Processing Systems, Vol. 9, edited by M. Mozer, M. Jordan, and T. Petsche (MIT Press, 1996).
[39] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science 220, 671 (1983).
[40] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, Nature 529, 484 (2016).
[41] A. Saffidine, T. Cazenave, and J. Méhat, Knowledge-Based Systems 34, 26 (2012), special issue on Artificial Intelligence in Computer Games: AICG.
[42] P. Auer, N. Cesa-Bianchi, and P. Fischer, Machine Learning 47, 235 (2002).
[43] J. Czech, P. Korus, and K. Kersting, Proceedings of the International Conference on Automated Planning and Scheduling 31, 103 (2021).
[44] N. Metropolis and S. Ulam, Journal of the American Statistical Association 44, 335 (1949).
[45] E. I. George and R. E. McCulloch, Journal of the American Statistical Association 88, 881 (1993).
[46] D. Dua and C. Graff, UCI Machine Learning Repository, http://archive.ics.uci.edu/ml (2017).
|
http://arxiv.org/abs/2307.06123v1 | 20230712122347 | SoK: Comparing Different Membership Inference Attacks with a Comprehensive Benchmark | [
"Jun Niu",
"Xiaoyan Zhu",
"Moxuan Zeng",
"Ge Zhang",
"Qingyang Zhao",
"Chunhui Huang",
"Yangming Zhang",
"Suyu An",
"Yangzhong Wang",
"Xinghui Yue",
"Zhipeng He",
"Weihao Guo",
"Kuo Shen",
"Peng Liu",
"Yulong Shen",
"Xiaohong Jiang",
"Jianfeng Ma",
"Yuqing Zhang"
] | cs.CR | [
"cs.CR",
"cs.LG"
] |
SoK: Comparing Different Membership Inference Attacks with a Comprehensive Benchmark
Jun Niu1,
Xiaoyan Zhu⋆ Xiaoyan Zhu and Yuqing Zhang are the corresponding authors.1,
Moxuan Zeng2,
Ge Zhang1, Qingyang Zhao1, Chunhui Huang2,
Yangming Zhang2, Suyu An2, Yangzhong Wang2, Xinghui Yue3, Zhipeng He4, Weihao Guo2,
Kuo Shen1, Peng Liu5, Yulong Shen1, Xiaohong Jiang6, Jianfeng Ma1, Yuqing Zhang71
1Xidian University, {niujun,21151213588}@stu.xidian.edu.cn,{xyzhu, ylshen, jfma}@mail.xidian.edu.cn, [email protected], [email protected]
2Hainan University, [email protected], {huangch,zhangym,ansy,wangyz21,guowh}@nipc.org.cn
3Yanshan University, [email protected]
4Xi'an University of Posts & Telecommunications, [email protected]
5Pennsylvania State University, [email protected]
6Future University of Hakodate, [email protected]
7University of Chinese Academy of Sciences, [email protected]
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Membership inference (MI) attacks threaten user privacy by determining whether a given data example has been used to train a target model. However, it has been increasingly recognized that the “comparing different MI attacks” methodology used in the existing works has serious limitations. Due to these limitations, we found (through the experiments in this work) that some comparison results reported in the literature are quite misleading. In this paper,
we seek to develop a comprehensive benchmark for comparing
different MI attacks,
called MIBench, which consists not only of the evaluation metrics but also of the evaluation scenarios. We design the evaluation scenarios
from four perspectives: the distance distribution of data samples in the target dataset, the distance between data samples of the target dataset, the differential distance between two datasets (i.e., the target dataset and a generated dataset with only nonmembers), and the ratio of samples on which an MI attack makes no inference. The evaluation metrics consist of ten typical evaluation metrics. We have identified three principles for the proposed “comparing different
MI attacks” methodology, and we have designed and implemented
the MIBench benchmark with 84 evaluation scenarios for each dataset. In total, we have used our benchmark to fairly and systematically
compare 15 state-of-the-art MI attack algorithms across 588 evaluation scenarios, and these evaluation scenarios cover 7 widely used datasets and 7 representative types of models. All codes and evaluations of MIBench are publicly available at <https://github.com/MIBench/MIBench.github.io/blob/main/README.md>.
§ INTRODUCTION
Recently, machine learning (ML), especially deep learning (DL), has achieved tremendous progress in various domains such as image recognition <cit.>, speech recognition <cit.>,
natural language processing <cit.> and medical analysis <cit.>.
However, many studies have demonstrated that ML models are vulnerable to various attacks, such as
property
inference attacks <cit.>, adversarial attacks <cit.>, model poisoning attacks <cit.> and membership inference attacks <cit.>.
Among these attacks, Membership Inference (MI) attacks <cit.> determine whether a data example is inside the training dataset of the target model or not. The workflows of the existing MI attacks are summarized in Figure <ref>: although all the existing MI attack algorithms have the same inputs, indiscriminate attack algorithms (e.g., <cit.>) and fine-grained attack algorithms (e.g., <cit.>) could have different (kinds of) outputs. In particular, every attack algorithm has two inputs: (1) the target dataset, which holds the set of given to-be-inferred data examples; (2) the outputs of the target model against each given data example.
For indiscriminate attack algorithms, their output is binary, telling whether a given data example is a member or not. In contrast, fine-grained attack algorithms firstly make (fine-grained) Yes/No decisions on making an inference or not.
When a No decision is made on data example x, the output
will be “no inference is made on x” (e.g., Data Example 3 showed in Figure <ref>).
Due to the importance of MI attacks, researchers have proposed many MI attacks in recent years. However, it has been increasingly recognized that the “comparing different MI attacks” methodology used in the existing works has serious limitations. Due to these limitations, we found (through the experiments in this work)
that some comparison results reported in the literature are quite misleading.
To illustrate the serious limitations, let's summarize the main factors determining when an MI attack is more effective and when it is less effective.
First of all, two primary factors have been recognized in all the existing works: the kind of data
in the target dataset; and the type of the target model. In order to incorporate these two factors in
evaluating attack effectiveness, all the existing works use multiple target datasets and multiple types (e.g., MLP, ResNet, DenseNet) of target models.
Second, based on the experiment results which we will shortly present in Section <ref>, we found the following factors.
Factor 1. It is found in our experiments (see Section <ref>)
that the effectiveness of
an MI attack could be highly sensitive to the distance distribution of data samples in the target dataset.
Factor 2. It is found in our experiments, which we will shortly present in Section <ref>,
that the effectiveness of
an MI attack could be highly sensitive to the distances
between
data samples of the target dataset.
The larger the distance between data samples of the target dataset,
the higher the attacker's membership advantage (MA).
By “membership advantage”, we mean the difference between an MI attack’s true and false positive rates (e.g., MA = TPR-FPR <cit.>).
In addition, we found that
the existing state-of-the-art MI attacks can achieve high inference accuracy
and low FPR (false positive rate) simultaneously when the distances between
data samples of the target dataset are carefully controlled.
Factor 3. It is found in our experiments (see Section <ref>) that the effectiveness of
an MI attack could be very sensitive to the differential distance (i.e., Maximum Mean Discrepancy (MMD) <cit.>)
before and after a data example is moved from the target dataset
to a generated dataset with only nonmembers.
In addition, we found that in general Factor 3 has
greater influence (on attack effectiveness) than Factor 2.
Factor 4. Since fine-grained attack algorithms make no inferences
on certain data examples, it seems unfair to use the same number of
data examples when comparing indiscriminate attack algorithms and
fine-grained attack algorithms.
These four factors clearly indicate that a
“comparing different MI attacks” methodology will not be convincing
unless the following requirements are met.
(R1) A set of target datasets following
different representative distance distributions should be used when
comparing different MI attacks. Attack A could be more effective
than Attack B when the target dataset follows one distribution
but less effective under another distribution.
(R2) A set of target datasets having different
distances between data samples should be used when comparing different MI attacks.
(R3) A set of target datasets having different
differential distances should be used when comparing different MI attacks.
(R4) When comparing an indiscriminate attack algorithm and
a fine-grained attack algorithm, the target datasets used for
evaluating the first algorithm should hold a smaller number of data examples.
The last 4 columns in Table <ref> summarize whether these four requirements
are satisfied in the fifteen most representative existing works on MI attacks. Unfortunately, we find
that (a) 8 out of the 15 existing works fail to meet any of the four requirements;
(b) 7 out of the 15 existing works only manage to meet one of the four requirements.
In particular, none of the 15 existing works meets requirement R1;
none of the 15 existing works meets requirement R2;
12 out of the 15 existing works don't meet requirement R3.
In this work, we seek to develop a comprehensive benchmark for comparing
different MI attacks. The primary design goal of our benchmark, called MIBench, is
to meet all of these 4 requirements. In order to achieve this goal, our key insights are as follows.
Insight A. Although it is widely recognized in the existing works
that evaluation metrics (e.g., the “membership advantage” metric) play an essential
role in comparing different MI attacks, we find that even if the employed metrics are
appropriate, the comparison results can still suffer from poor explanations.
For example, researchers have already noticed the following:
(a1) one MI attack algorithm could suffer from fairly different evaluation
results (i.e., metric measurements) against different target datasets: it is
usually difficult to explain the lack of coherence cross datasets.
(a2) MI attack A is more effective than attack B against target dataset D1, but attack B is
more effective than attack A against target dataset D2. In such situations, it
is usually very difficult to draw general conclusions on when attack A would be
more effective and when attack B would be more effective.
(a3) Many MI attacks suffer from low precision against widely-used test datasets.
For example, BlindMI-Diff <cit.> suffers from a precision of 50.01%.
However, it is usually difficult to explain why higher precision is not achieved.
Insight B. We find that the above-mentioned four factors can
be used to significantly improve the explainability of a “comparing different MI attacks”
methodology. For example, the four factors in many cases enable one to
draw such general conclusions as “attack A is more effective than attack B
when the following two conditions are simultaneously met: (c1)
the distance distribution of data samples in a target dataset follows normal distribution; (c2) the
distances between data samples of the target dataset are 6.000.”
Based on these insights, our benchmark consists not only of the evaluation metrics, but also of
the evaluation scenarios. Based on the above-mentioned four factors, we design the evaluation scenarios
from four perspectives: the distance distribution of data samples in the target dataset, the distance between data samples of the target dataset, the differential distance between two datasets (i.e., the target dataset and a generated dataset with only nonmembers),
and the ratio of samples on which an MI attack makes no inference.
Moreover, each perspective corresponds to a control variable and so
there are in total four control variables in our evaluation scenarios.
The evaluation metrics consist of ten evaluation metrics
(i.e., accuracy, FPR, TPR, precision, f1-score, FNR, MA, AUC, TPR @ fixed (low) FPR, Threshold at maximum MA): they are
widely used in the existing works on
MI attacks (see Section <ref>).
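As an illustration of how these metrics can be derived from an attack's outputs (a sketch with our own function names; the threshold handling and the fixed-FPR interpolation are simplifications, not the benchmark's exact implementation):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def mi_metrics(is_member, pred_member, scores=None, fixed_fpr=1e-3):
    """is_member: ground-truth membership (1/0); pred_member: the attack's binary
    decisions; scores: optional membership scores for threshold-free metrics."""
    y, p = np.asarray(is_member), np.asarray(pred_member)
    tp = int(np.sum((p == 1) & (y == 1)))
    fp = int(np.sum((p == 1) & (y == 0)))
    tn = int(np.sum((p == 0) & (y == 0)))
    fn = int(np.sum((p == 0) & (y == 1)))
    tpr, fpr = tp / (tp + fn), fp / (fp + tn)
    prec = tp / (tp + fp) if (tp + fp) else 0.0
    out = {"accuracy": (tp + tn) / len(y), "TPR": tpr, "FPR": fpr, "FNR": 1 - tpr,
           "precision": prec,
           "f1": 2 * prec * tpr / (prec + tpr) if (prec + tpr) else 0.0,
           "MA": tpr - fpr}                      # membership advantage = TPR - FPR
    if scores is not None:                       # threshold-free metrics
        out["AUC"] = roc_auc_score(y, scores)
        f, t, _ = roc_curve(y, scores)
        out["TPR@fixedFPR"] = float(np.interp(fixed_fpr, f, t))
    return out
```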
Contributions. The main contributions of this work are as follows.
First, we have identified three principles for the proposed “comparing different
MI attacks” methodology.
Second, following these principles, we have designed and implemented
the MIBench benchmark with 84 evaluation scenarios for each dataset.
For each evaluation scenario, we have designed the particular values of the four
above-mentioned control variables; and we have created the
corresponding target datasets through customizing
seven widely-used datasets in the research area of MI attacks.
(It should be noticed that all the evaluation scenarios
use the same set of evaluation metrics.)
Third, we have used our benchmark to systematically
compare 15 state-of-the-art MI attack algorithms.
For this purpose, we have downloaded the code of the following 15 MI attacks from GitHub: NN_attack <cit.>, Loss-Threshold <cit.>, Label-only <cit.>, Top3-NN attack<cit.>, Top1-Threshold <cit.>, BlindMI-Diff-w <cit.>, BlindMI-Diff-w/o <cit.>, BlindMI-1CLASS <cit.>, Top2+True <cit.>, Privacy Risk Scores <cit.>, Shapley Values <cit.>, Positive Predictive Value <cit.>, Calibrated Score <cit.>,
Distillation-based <cit.>,
Likelihood ratio attack <cit.>.
And we have conducted comparative evaluations of the 15 MI attacks
using the above-mentioned evaluation scenarios.
In total, we used 588 evaluation scenarios to fairly and systematically
compare the existing MI attacks. These evaluation scenarios cover 7 datasets, which
are widely used in the existing works on MI attacks, and 7 representative types of models.
§ BACKGROUND
§.§ Membership Inference Attacks
A target ML model 𝔽_T is an ML model trained on a target training dataset 𝒟_train, and MI attacks aim to determine whether a data point x is in 𝒟_train. More formally, given a data point x_T, the extra knowledge 𝒦 of the adversary, and a target machine learning model 𝔽_T, the workflow of an MI attack (called the attack model 𝔸𝕄) is defined as 𝔸𝕄 : {𝔽_T(x_T), 𝒦}⇒{0,1}, where the MI attack model 𝔸𝕄 is a binary classifier, 1 means the target data point x_T is a member of 𝒟_train, and 0 means it is a non-member (the subscript “T” denotes the target model or target data point).
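A minimal sketch of such a black-box attack model 𝔸𝕄 as a binary classifier over the target model's confidence vectors follows; the shadow-model training data, the MLP architecture, and the confidence-sorting step are illustrative assumptions, not any specific attack from Table <ref>.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_attack_model(shadow_conf, shadow_membership):
    """shadow_conf: confidence vectors produced by a shadow model; labels are
    1 for members of the shadow training set and 0 for non-members."""
    am = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    # Sorting each confidence vector removes the dependence on the class order.
    am.fit(np.sort(shadow_conf, axis=1), shadow_membership)
    return am

def infer_membership(am, target_conf):
    """Apply AM to the target model's outputs F_T(x_T); 1 = inferred member of D_train."""
    return am.predict(np.sort(target_conf, axis=1))
```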
§.§ Threat Model
Black-box attack. The attackers only know the outputs of the target model (e.g., confidence score vectors) and mainly utilize them for training a binary classifier to determine the membership of a data sample.
Gray-box attack. The attackers not only control the target model's inputs and outputs, but also know the distribution of the training data used to train the target model.
White-box attack. The attackers know all information of the target model and training data.
§ MOTIVATION AND OVERVIEW
§.§ Motivation
The proposed benchmark is motivated by the following observations. Observation 1. We find that even if the
employed metrics are appropriate, the comparison results can
still suffer from poor explanations. For example, researchers
have already noticed the following: (a1) One MI attack algorithm
could suffer from fairly different evaluation results (i.e.,
metric measurements) against different target datasets: it is
usually difficult to explain the lack of coherence cross datasets.
In many cases, although the different target datasets have different
data semantics (e.g., CIFAR100 holds 100 kind of data, while
CH-MNIST holds 8 kind of data), strong correlation between
the data semantics and the evaluation results is hard to identify. (a2) MI attack A is more effective than attack B against target
dataset D1, but attack B is more effective than attack A against
target dataset D2. In such situations, it is usually very difficult
to draw general conclusions on when attack A would be more
effective and when attack B would be more effective.
For example, the NN_attack<cit.> (e.g., MA=34.51%) is more effective than Label-only <cit.> attack (e.g., MA=32.77%) against CIFAR10, but the Label-only <cit.> attack (e.g., MA=70.6%) is more effective than NN_attack<cit.> (e.g., MA=58.54%) against CIFAR100.
(a3) Many MI attacks suffer from low precision (e.g., NN_attack <cit.> (51.7%), Label-only <cit.> (50.5%) and BlindMI-Diff <cit.> (50.01%)) against widely-used
test datasets. However, it is usually difficult to explain why higher precision is not achieved.
Observation 2. We find that although all the existing works pay adequate attention to two primary factors determining when an MI attack is more effective, i.e., the semantics of data examples
in the target dataset and the type of the target model,
the existing works pay inadequate attention to three additional main factors. (Factor 1) We find that the effectiveness of an MI attack could be highly sensitive to the distance distribution of data samples in the target dataset. (Factor 2) We find that the effectiveness of
an MI attack could be highly sensitive to the distance between data samples of the target dataset. (Factor 3) We find that the effectiveness of
an MI attack could be very sensitive to the differential distance (i.e., MMD <cit.>)
before and after a data example is moved from the target dataset
to a generated dataset with only nonmembers. For example, (b1) without incorporating Factor 1, the comparative evaluation results in <cit.> can hardly provide insightful explanations of the low precision (e.g., NN_attack <cit.> (51.7%), Label-only <cit.> (50.5%) and BlindMI-Diff <cit.> (50.01%)) or the high FPR <cit.>. (b2) Without incorporating Factor 2, the comparative evaluation results in <cit.> are difficult to use to explain why higher precision is not achieved. (b3) Without incorporating Factor 3, the comparative evaluation results in <cit.> are difficult to use to explain why low FPR and high accuracy are not achieved simultaneously.
Based on above key observations, we are strongly motivated to
incorporate the above-mentioned factors into a new benchmark
and use the benchmark to fairly and systematically
compare different MI attacks. In addition,
we are strongly motivated to develop a new
“comparing different MI attacks” methodology which
has significantly improved ability to explain the comparative evaluation results.
§.§ Overview
To alleviate the dilemma of the existing MI attacks,
in this work, we seek to develop a comprehensive benchmark for comparing
different MI attacks. The primary design goal of our benchmark, called MIBench (Section <ref>), is
to meet all of the above-mentioned 4 requirements (see Section <ref>). Our benchmark consists not only of the evaluation metrics (Section <ref>), but also of
the evaluation scenarios (Section <ref>). Based on the above-mentioned four factors (see Section <ref>), we design the evaluation scenarios
from four perspectives: the distance distribution of data samples in the target dataset, the distance between data samples of the target dataset, the differential distance between two datasets (i.e., the target dataset and a generated dataset with only nonmembers),
and the ratio of samples on which an MI attack makes no inference.
Moreover, each perspective corresponds to a control variable and so
there are in total four control variables in our evaluation scenarios.
The evaluation metrics consist of ten widely used evaluation metrics
(i.e., accuracy, FPR, TPR, precision, f1-score, FNR, MA, AUC <cit.>, TPR @ fixed (low) FPR <cit.>, Threshold at maximum MA) (see Section <ref>) .
Based on the evaluation framework (Section <ref>),
we also present how to use the evaluation framework (Section <ref>) to fairly and effectively evaluate the real performance of an MI attack.
There are four steps before using the evaluation framework. Step 1: Construct the distance distribution of data samples in the target dataset (CV1) (Section <ref>). For a target dataset, we first input all data samples (e.g., n) in the target dataset to the target model and get the corresponding output probabilities, and we classify these data into two categories according to their output probabilities (such as the data samples with high output probability and the data samples with low output probability), and the number of the two categories is the same (e.g., n/2). We extract the same number of data samples (e.g., η) from these two kinds of target data samples, and calculate the MMD <cit.> of these two categories target data samples (e.g., 2η). This calculation iterates until the MMDs are calculated for all samples in the target dataset, and we calculate a total of n/2η groups of MMDs. According to the calculated n/2η groups of MMDs of the target dataset, we rank these n/2η groups MMDs and construct the distance distributions of data samples in the target dataset obey normal, uniform and bernoulli distributions, respectively. Step 2: Calculate the distance between data samples of the target dataset (CV2) (Section <ref>). For target datsests whose distance distribution of data samples obeying different distance distributions, we calculate the distance between data samples of the target dataset by the MMD method. Step 3: Calculate the differential distance between two datasets (CV3) (Section <ref>). Next, we generate a dataset with nonmembers via transforming existing samples into new samples or directly using
the test data <cit.>, and then
we compute the differential distance between the two datasets (i.e., the target dataset and a generated dataset with only nonmembers) before and after a data example (either one with high output probability or one with low output probability) is moved from the target dataset to the nonmember dataset.
Step 4: Set the ratios of the samples that are
made no inferences by an MI attack (CV4) (Section <ref>). Finally,
we set different ratios of the samples that are
made no inferences by an MI attack for image datasets (i.e., 20%, 40%, 45% and 49%) and text datasets (i.e., 2%, 4%, 10% and 12%), following the prior Privacy Risk Scores-based <cit.> and Shapley Values-based MI attacks <cit.>.
After completing these four steps, we use the four calculated CVs to build the evaluation scenarios (ESs), where each ES is specified by the four CVs. We then
study a group-level evaluation, where a group consists of multiple ESs in which only one CV changes while the other three CVs remain fixed at the same values
(see Section <ref>), and we comprehensively evaluate the existing MI attacks with the ten typical evaluation metrics (Section <ref>). In this work, we have identified three principles for the proposed “comparing different MI attacks” methodology, and we have designed and implemented
the MIBench benchmark with 84 ESs for each dataset. In total, we have used our benchmark to fairly and systematically
compare 15 state-of-the-art MI attack algorithms (shown in Table <ref>)
across 588 ESs, and these ESs cover 7 widely used datasets and 7 representative types of models.
§ PRINCIPLES FOR BETTER EXPLAINING THE ATTACK COMPARATIVE EVALUATION RESULTS
§.§ Principle 1: The Two Primary Factors are Necessary but Insufficient
The two primary factors, i.e., the kind of data
in the target dataset and the type of the target model, are
necessary but insufficient for a “comparing different MI attacks” methodology to have a desirable ability to explain the comparative evaluation results.
In order to incorporate these two factors in evaluating attack effectiveness, all the existing works use multiple target datasets and multiple types (e.g., MLP, ResNet, DenseNet) of target models. However,
we find that even if the
employed metrics are appropriate, the comparison results can
still suffer from poor explanations. For example, researchers
have already noticed the following: (a1) One MI attack algorithm
could suffer from fairly different evaluation results (i.e.,
metric measurements) against different target datasets: it is
usually difficult to explain the lack of coherence across datasets.
In many cases, although the different target datasets have different
data semantics (e.g., CIFAR100 holds 100 kinds of data, while
CH-MNIST holds 8 kinds of data), a strong correlation between
the data semantics and the evaluation results is hard to identify. (a2) MI attack A is more effective than attack B against target
dataset D1, but attack B is more effective than attack A against
target dataset D2. In such situations, it is usually very difficult
to draw general conclusions on when attack A would be more
effective and when attack B would be more effective.
For example, the Top2+True <cit.> attack (e.g., MA=43.71%) is more effective than the NN_attack<cit.> (e.g., MA=34.51%) against CIFAR10, whereas the NN_attack<cit.> (e.g., MA=0.008%) is more effective than Top2+True <cit.> attack (e.g., MA=0.003%) against ImageNet. (a3) Many MI attacks suffer from high FPR (see Figure <ref>) and low precision (see Figure <ref>) against widely-used
test datasets (e.g., NN_attack<cit.> (51.7%), Label-only <cit.> (50.5%) and BlindMI-Diff <cit.> (50.01%)). However, it is usually difficult to explain why higher precision is not achieved.
§.§ Principle 2: Distance Distribution of Data Samples in the Target Dataset Matters
Incorporating the following factor could substantially
enhance the ability of a “comparing different MI attacks”
methodology to explain the comparative evaluation results: (factor 1) the effectiveness of an MI attack could be highly sensitive
to the distance distribution of data samples in the target dataset.
From Table <ref>,
we discover that the evaluation results of an MI attack are different when the data samples in the target dataset obey different distance distributions (e.g., the MAs and the TPR @ 0.01% FPR of the Positive Predictive Value (PPV) attack <cit.> are 3.13%, 6.75%, 1.63% and 0.63%, 0.88%, 0.50% when the distance distribution of data samples in the target dataset obeys normal, uniform, and Bernoulli distributions, respectively).
§.§ Principle 3: Distance between Data Samples of the Target Dataset Matters
Incorporating the following factor could substantially
enhance the ability of a “comparing different MI attacks" methodology to explain the comparative evaluation results: (factor 2) the effectiveness of an
MI attack could be highly sensitive to the distances between data samples of the target dataset.
From Figure <ref>,
we discover that
the evaluation results of an MI attack are different when target datasets have different distances between data samples (e.g., the MAs of the NN attack <cit.> are 28.89%, 32.98% and 39.39% when the distances between data samples of the target dataset are 2.510, 3.813, and 4.025, respectively).
§.§ Principle 4: Differential Distance between Two Datasets Matters
Incorporating the following factor could substantially
enhance the ability of a “comparing different MI attacks”
methodology to explain the comparative evaluation results: (factor 3) the
effectiveness of an MI attack could be highly sensitive to the differential
distance (i.e., MMD <cit.> ) before and after a data example is moved from the target dataset
to a generated dataset with only nonmembers.
From Figure <ref>, we find that the evaluation results of an MI attack are different when target datasets have different differential distances between two datasets (e.g., the MAs of the Calibrated Score attack <cit.> are 46.73%, 39.82% and 43.45% when the differential distances between two datasets are 0.085, 0.119, 0.157, respectively).
§ EVALUATION FRAMEWORK
In this section,
we first introduce the experimental setup of our benchmark. Then, we propose the composition of the evaluation framework.
§.§ Experimental Setup
Implementation. In this paper, we use Python 3.7 to implement our evaluations. The nonmembers used in our experiments are the same as in <cit.>.
Models. We adopt the same model architectures and hyperparameters as the prior BlindMI-Diff attack <cit.>, which are defined in Table <ref>. We also adopt a multilayer perceptron (MLP) model that has at most seven dense layers (e.g., 8192, 4096, 2048, 1024, 512, 256 and 128) and an additional Softmax layer. The standard CNN architectures and hyperparameters are the same as in the prior black-box MI attack <cit.>. We also utilize three kinds of popular DNNs (e.g., VGG, ResNet and DenseNet), which are standard architectures with pre-trained parameters from ImageNet.
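To make the MLP target model concrete, the following is a minimal sketch of the seven-dense-layer architecture described above. The framework (PyTorch), the activation choice, and the input/output sizes are assumptions for illustration; the text only specifies the layer widths and the additional Softmax layer.

```python
# Minimal sketch of the seven-dense-layer MLP target model; framework and
# training details are assumptions, not taken from the benchmark itself.
import torch
import torch.nn as nn

class TargetMLP(nn.Module):
    def __init__(self, num_features, num_classes,
                 widths=(8192, 4096, 2048, 1024, 512, 256, 128)):
        super().__init__()
        layers, in_dim = [], num_features
        for w in widths:                                  # up to seven dense layers
            layers += [nn.Linear(in_dim, w), nn.ReLU()]
            in_dim = w
        layers += [nn.Linear(in_dim, num_classes), nn.Softmax(dim=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)                                # output probabilities fed to MI attacks

model = TargetMLP(num_features=600, num_classes=100)      # illustrative sizes only
```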
Target Model. Given a dataset, we randomly select a target model from the target model column of Table <ref>, and train the target model with the specified hyperparameters.
Shadow Model. For
black-box attacks, we randomly select a shadow model architecture from the shadow model column of Table <ref>. For the gray/white-box settings, we select the same architectures as the given target model as our shadow models' architectures.
Datasets. We utilize seven datasets (as shown in Table <ref>) to perform the comprehensive evaluations of the existing MI attacks. We select half members and half nonmembers in each target dataset, and the details are given in Appendix <ref>.
§.§ The Composition of the Evaluation Framework
The primary design goal of our benchmark, called MIBench, is
to meet all of the above-mentioned four requirements (see Section <ref>). Our benchmark consists not only of the evaluation metrics (Section <ref>) but also of the evaluation scenarios (Section <ref>).
We design the evaluation scenarios
from the four perspectives mentioned in Section <ref>, the last of which is
the ratio of the samples that are made no inferences by an MI attack.
Moreover, each perspective corresponds to a control variable and so
there are in total four control variables in our evaluation scenarios.
The evaluation metric module consists of ten typical evaluation metrics.
§.§.§ Part I: Evaluation Scenarios
Since fine-grained attacks
make no inferences on certain data examples, it seems unfair
to use the same number of data examples when comparing
indiscriminate attack algorithms and fine-grained attack algorithms (shown in Figure <ref>).
The inaccurate and unfair evaluation results shown in Figure <ref>,
Principle 2 (Section <ref>),
Principle 3 (Section <ref>) and
Principle 4 (Section <ref>) indicate that a good evaluation framework should have at least 4 control variables, consisting of
the distance distribution of data samples in the target dataset (CV1), the distance between data samples of the target dataset (CV2), the differential distance between two
datasets (CV3), and the ratio of the samples that are made
no inferences by an MI attack (CV4).
The four control variables adopted in our evaluation scenarios (as shown in Figure <ref>) are as follows: Control Variable 1: {C100_N, C100_U, C100_B; C10_N, C10_U, C10_B; CH_N, CH_U, CH_B; I_N, I_U, I_B; L30_N, L30_U, L30_B; P100_N, P100_U, P100_B; T100_N, T100_U, T100_B}; Control Variable 2: {1.225, 1.24, …, 6.250}; Control Variable 3: {0.001, 0.002, …, 0.500}; Control Variable 4: {2%, 4%, 10%, 12%, 20%, 40%, 45%, 49%}. Based on the four control variables, we build 84 evaluation scenarios for each dataset, and the 84 evaluation scenarios of CIFAR100 are shown in Table <ref>.
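As a rough illustration of how ESs can be enumerated from the four control variables, the sketch below takes a Cartesian product over a few representative CV values (taken from the CIFAR100 examples used later in the text). The actual 84 ESs per dataset follow Table <ref> rather than a full product, so the combination logic here is an assumption.

```python
# Hypothetical enumeration of evaluation scenarios from the four control variables.
from itertools import product

cv1 = ["C100_N", "C100_U", "C100_B"]          # distance distribution of the target dataset
cv2 = [2.893, 3.813, 4.325]                   # distance between data samples (selected values)
cv3 = [0.085, 0.119, 0.157]                   # differential distance between two datasets
cv4 = [0.20, 0.40, 0.45, 0.49]                # ratio of no-inference samples (image datasets)

scenarios = [
    {"CV1": d, "CV2": s, "CV3": dd, "CV4": r}
    for d, s, dd, r in product(cv1, cv2, cv3, cv4)
]
print(len(scenarios))  # 3*3*3*4 = 108 combinations; the benchmark selects 84 ESs per dataset
```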
§.§.§ Part II: Evaluation Metrics
We mainly use attacker-side accuracy, precision, recall, f1-score, false positive rate (FPR), false negative rate (FNR), membership advantage (MA), the
Area Under the Curve (AUC) of attack Receiver
Operating Characteristic (ROC) curve <cit.>, TPR @ fixed (low) FPR <cit.>, and the threshold at maximum MA as our evaluation metrics (as shown in Figure <ref>). Specifically, accuracy is the percentage of data samples with correct membership predictions by MI attacks; precision represents the ratio of real-true members among all the positive membership predictions made by an adversary; recall is the ratio of true members predicted by an adversary among all the real-true members; F1-score is the harmonic mean of precision and recall; FPR is the ratio of nonmembers that are erroneously predicted as members; FNR is one minus recall (i.e., FNR = 1 - recall); MA is the difference between the true positive rate and the false positive rate (i.e., MA = TPR - FPR <cit.>); AUC is computed as the
Area Under the Curve of attack ROC in logarithmic scale; TPR @ fixed (low) FPR is an attack’s true-positive rate at (fixed) low false-positive rates in logarithmic scale; and the threshold at maximum MA is a threshold to achieve maximum MA.
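All of these metrics can be derived from an attack's hard membership decisions and its raw scores. The following is a hedged sketch of one way to compute them, assuming NumPy and scikit-learn are available; the interpolation used for TPR @ fixed (low) FPR is our own simplification rather than the benchmark's exact procedure.

```python
# Illustrative computation of the attacker-side metrics listed above.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def mi_metrics(y_true, y_pred, y_score):
    """y_true: 1=member, 0=nonmember; y_pred: hard decisions; y_score: attack scores."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tpr = tp / max(tp + fn, 1)                      # recall
    fpr = fp / max(fp + tn, 1)
    precision = tp / max(tp + fp, 1)
    f1 = 2 * precision * tpr / max(precision + tpr, 1e-12)
    metrics = {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision, "recall": tpr, "f1": f1,
        "FPR": fpr, "FNR": 1 - tpr, "MA": tpr - fpr,
        "AUC": roc_auc_score(y_true, y_score),
    }
    fprs, tprs, _ = roc_curve(y_true, y_score)      # TPR @ fixed (low) FPR, e.g., 0.01% FPR
    metrics["TPR@0.0001FPR"] = np.interp(1e-4, fprs, tprs)
    return metrics
```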
§ USING MIBENCH TO COMPARE DIFFERENT MI ATTACKS
The high-level idea for using the proposed benchmark is the controlled variable approach, where only one control variable (CV) is allowed to change at a time and the other three CVs are left unchanged, and then the evaluation results of the different MI attacks in
this case are tested. Our evaluation aims to answer the following Research Questions (RQs). RQ1: Do target datasets obeying different distance distributions of data samples have notable effects on the (effectiveness) ranking of state-of-the-art MI attacks when the other three CVs keep the same values? RQ2: Do target datasets that have different distances between data samples have notable effects on the ranking? RQ3: Do target datasets that have different differential distances between two datasets have notable effects on the ranking? RQ4: Do target datasets that have different ratios of the
samples that are made no inferences by an MI attack have notable effects on the ranking?
In our experiments, the target dataset sizes
are shown in Table <ref>. We first divide the target dataset into small batches of 20 data samples as the prior BlindMI-Diff attack <cit.>, then we calculate the MMD of the output probabilities through target models by Equation <ref>. Next, we construct and calculate 4 CVs as described in Section <ref>. Finally, we use our benchmark to fairly and systematically compare 15 state-of-the-art MI attack algorithms (see Table <ref>) across 588 evaluation scenarios (84 evaluation scenarios for each dataset (see Table <ref>)), and study how every CV affects the evaluation results.
Each attack is performed ten times with a new target and shadow model with different training datasets, model architectures and hyperparameters each time, and we obtain the average values of 10 evaluation metrics
together with the standard error of the mean among the attacks. We perform experiments on CIFAR100, CIFAR10, CH_MNIST and ImageNet with the seven model architectures described in Section <ref>; for Location30, Purchase100 and Texas100, we use fully connected neural networks with 4 hidden layers, whose numbers of neurons are 1024, 512, 256, and 128, respectively. We use Tanh as the activation function on the Purchase100 and Texas100 datasets, and ReLU on Location30. We evaluate all MI attacks with the metrics in Section <ref>; for the fine-grained MI attacks based on thresholds (e.g., Shapley Values<cit.> and Risk Scores<cit.>), we report different thresholds for different classes at maximum MA (e.g., 100 thresholds on different classes on CIFAR100), and for the other MI attacks that use a single
threshold for all classes (e.g., Loss-Threshold<cit.>, Top1-Threshold<cit.>, PPV<cit.>, Calibrated Score<cit.>, Distillation-based Thre.<cit.> and LiRA<cit.>), we report the threshold at maximum MA. All evaluation results of MIBench are available at <https://github.com/MIBench/MIBench.github.io/blob/main/README.md>.
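For the single-threshold attacks, the "threshold at maximum MA" can be obtained by sweeping candidate thresholds over the attack scores. The sketch below is one assumed realization using the ROC curve, not the authors' exact procedure.

```python
# Hedged sketch: pick the decision threshold maximizing membership advantage MA = TPR - FPR.
import numpy as np
from sklearn.metrics import roc_curve

def threshold_at_max_ma(y_true, y_score):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    ma = tpr - fpr
    best = np.argmax(ma)
    return thresholds[best], ma[best]
```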
Because of the (simplified) 84 evaluation scenarios, for each dataset we select three distances among all distances between data samples under the different distance distributions and three distances among all differential distances between two datasets, respectively. The selected distances are the mean of the distances between data samples (or of the differential distances between two datasets) together with two distances spaced equally on its left and right sides, respectively. For example, the three distances between data samples (differential distances between two datasets) selected on CIFAR100 are 2.893, 3.813, 4.325 (0.085, 0.119, 0.157), respectively (see Table <ref> in Appendix C).
§.§ RQ1: Effect of the Distance Distribution of Data Samples in the Target Dataset
Methodology. For a target dataset, we first input all data samples (e.g., n) in the target dataset to the target model and obtain the corresponding output probabilities, and we classify these data into two equally sized categories (e.g., n/2 each) according to their output probabilities (the data samples with high output probability and the data samples with low output probability). We extract the same number of data samples (e.g., η) from these two kinds of target data samples, and utilize Equation <ref> to calculate the MMD <cit.> of the output probabilities of these two categories of target data samples through the target model (2η samples in total).
MMD[𝒫_p,𝒫_q] = ‖ 1/n_p∑_i=1^n_pϕ(𝒫_i) - 1/n_q∑_j=1^n_qϕ(𝒫_j) ‖_ν
where 𝒫_i∈𝒫_p and 𝒫_j∈𝒫_q, and 𝒫_p and 𝒫_q denote the output probabilities of the data samples with high output probability and of the data samples with low output probability of the target dataset, respectively. n_p and n_q are the sizes of 𝒫_p and 𝒫_q, respectively. ϕ(.) is a feature space map from the output probability of the target model to the Reproducing Kernel Hilbert Space (RKHS)<cit.>, namely k↦ν. The most commonly used kernel is the Gaussian kernel, namely k(𝒫_p,𝒫_q) = ⟨ϕ(𝒫_p),ϕ(𝒫_q)⟩ = exp(-‖𝒫_p - 𝒫_q‖^2/(2σ^2)).
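A standard kernel-based estimator of the MMD in Equation <ref> with the Gaussian kernel can be sketched as follows; a biased estimator is assumed here for simplicity, and the bandwidth σ is a free parameter.

```python
# Minimal MMD estimate between the two output-probability groups (NumPy only).
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd(P_high, P_low, sigma=1.0):
    """P_high, P_low: arrays of shape (n, num_classes) with target-model probabilities."""
    kxx = gaussian_kernel(P_high, P_high, sigma).mean()
    kyy = gaussian_kernel(P_low, P_low, sigma).mean()
    kxy = gaussian_kernel(P_high, P_low, sigma).mean()
    return np.sqrt(max(kxx + kyy - 2 * kxy, 0.0))   # MMD^2 = E[k(x,x')] + E[k(y,y')] - 2E[k(x,y)]
```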
This calculation of MMD is performed until MMDs are calculated for all samples in the target dataset, and we calculate a total of n/2η groups of MMDs. Next, according to the calculated n/2η groups of MMDs of the target dataset, we first sort these n/2η groups of MMDs in ascending order,
namely ℳℳ𝒟 = {ℳℳ𝒟_1, …, ℳℳ𝒟_n/2η}. Then, we construct the distance distributions of data samples in the target dataset as normal, uniform and Bernoulli distributions, respectively. (a) Normal Distribution. We select some MMDs from the calculated n/2η groups of MMDs of the target dataset to construct a set of MMDs, {ℳℳ𝒟_1, …, ℳℳ𝒟_h} (1≤ h ≤ n/2η), which obeys a normal distribution, that is, {ℳℳ𝒟_1, …, ℳℳ𝒟_h}∼ N(μ, σ^2), where μ and σ^2 are the mean and variance of the normal distribution, respectively. (b) Uniform Distribution. We assume two parameters a and b with ℳℳ𝒟_1≤ a≤ b≤ℳℳ𝒟_n/2η. We select ℳℳ𝒟_i from the calculated n/2η groups of MMDs of the target dataset such that a ≤ℳℳ𝒟_i≤ b, and divide the selected MMDs, {ℳℳ𝒟_1, …, ℳℳ𝒟_s} (1≤ s ≤ n/2η), in the distance interval [a,b] equally into several parts (e.g., γ) so that the number of MMD groups in each part is equal (e.g., s/γ). Therefore, the selected MMDs obey a uniform distribution with parameter s/γ on [a,b]. (c) Bernoulli Distribution. We select a distance threshold (e.g., ϵ) in the calculated n/2η groups of MMDs of the target dataset as the split point of the Bernoulli distribution, and if the selected ℳℳ𝒟_i is smaller than the selected distance threshold ϵ, we regard the incident as a success, otherwise a failure. Specifically, we select some MMDs, {ℳℳ𝒟_1, …, ℳℳ𝒟_r} (1≤ r ≤ n/2η), from the calculated n/2η groups of MMDs of the target dataset and construct these selected MMDs so that they obey a Bernoulli distribution with a parameter p, where the ratio of MMDs greater than the distance threshold ϵ is p and the ratio of MMDs less than ϵ is (1-p).
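The exact procedure for selecting MMD groups so that they follow a target distribution is not fully specified; the following is one hypothetical way to pick a subset approximating a normal distribution, included purely for illustration.

```python
# Assumed selection of measured MMD groups whose empirical distribution
# approximates N(mu, sigma^2); not the benchmark's exact procedure.
import numpy as np

def select_normal_subset(mmds, size, rng=np.random.default_rng(0)):
    mmds = np.sort(np.asarray(mmds))
    mu, sigma = mmds.mean(), mmds.std()
    targets = rng.normal(mu, sigma, size)                        # desired MMD values
    idx = np.unique(np.searchsorted(mmds, targets).clip(0, len(mmds) - 1))
    return mmds[idx]                                             # nearest measured MMD groups
```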
In our experiments, for a target dataset on each dataset (see 𝒟_t1 in Table <ref>), we first input all data samples (e.g., 20,000 on CIFAR100) in the target dataset to the target model and get the corresponding output probabilities, and we classify these data into two categories according to their output probabilities (such as the data samples with high output probability and the data samples with low output probability), and the number of the two categories is the same (e.g., 10,000). We extract the same number of data samples (e.g., 10), as the prior BlindMI-Diff attack <cit.>, from these two kinds of target data samples, and combine them into small batches of 20
data samples and utilize Equation <ref> to calculate their MMD <cit.>. In total we obtain 1,000 MMDs; the original distance distribution of these 1,000 MMDs on CIFAR100 is shown in Figure <ref>, from which we find that it is approximately a normal distribution. When the original distance is restricted to a certain range, the original distance distribution of the MMDs on the CIFAR100 dataset is approximately uniform (see Figure <ref>). We select different distance thresholds over the 1,000 MMDs on the target CIFAR100 dataset and find that the resulting distance distribution on CIFAR100 follows a Bernoulli distribution, as shown in Table <ref>. We find a similar phenomenon on the other six target datasets, and the results are shown in Appendix B.
To ensure a fair evaluation of the attack results under these three distributions, we choose an equal number of samples to calculate the MMDs (see 𝒟_t2-𝒟_t4 in Table <ref>), and we select an equal number of MMDs (e.g., 100 MMDs on CIFAR100) to construct the distance distributions of data samples in the target datasets as normal, uniform, and Bernoulli distributions, respectively (see Figure <ref> in Appendix B).
Results. We verify the evaluation results
of 15 state-of-the-art MI attacks when only the distance distribution of data samples in the target dataset (CV1) is allowed to change and the other three CVs
(see Section <ref>) are left unchanged. From Table <ref> and Figure <ref>, we discover that the evaluation results of an MI attack are different when target datasets obey different distance distributions.
We observe that the ranking result of a particular MI attack (i.e., Loss-Threshold <cit.> (Class 2.1, see Figure <ref>)) remains unchanged across almost all the ESs in a particular group (i.e., when only CV1 changes and the other three CVs remain the same, the Loss-Threshold attack <cit.> ranks among the top three most effective attacks in 85.17% of the ESs). This indicates that the corresponding tuning variable probably does not affect the MI attack. To confirm this conclusion, we look into the algorithm of the MI attack and find the following: the Loss-Threshold attack <cit.> requires a shadow model and computes the average cross-entropy
loss of all training samples in the shadow model to distinguish members. Therefore, when only CV1 changes and the other three CVs remain the same, the average cross-entropy
loss used by the Loss-Threshold attack <cit.> is not affected, and thus the evaluation results change little.
We find that the Top1-Threshold based attack <cit.> (Class 2.1, see Figure <ref>) is most affected when only CV1 changes and the other three CVs remain the same; for example, one particular ES01 on Purchase100 results in the following ranking:
Class 2.1 (except for the Label-only, Loss Threshold attacks) Class 3.1.2 Class 2.2.1 (except for the Shapley values attack) Class 1.1.2 Class 1.1.1 (except for the Top3_NN attack) Class 2.2.2 (except for the Calibrated Score, LiRA and PPV attacks) Class 3.1.1 Class 3.1.3,
while another particular ES29 on Purchase100 results in the following ranking:
Class 3.1.2 Class 1.1.2 Class 2.2.1 (except for the Shapley values attack) Class 1.1.1 (except for the Top3_NN attack) Class 2.1 (except for the Loss Threshold attack) Class 2.2.2 (except for the LiRA and PPV attacks) Class 3.1.1 Class 3.1.3.
The reason is that the Top1-Threshold based attack <cit.> needs some extra nonmembers to derive the nonmember threshold used to infer members, so its evaluation results depend on the choice of these nonmembers; when only CV1 changes while the nonmembers remain unchanged, the selected nonmembers may be unable to effectively separate members from nonmembers under the different distance distributions of data samples in the target dataset.
We also observe that the ranking order between two particular MI attacks (i.e., the Top3-NN attack<cit.> and NN_attack<cit.> (Class 1.1.1)) remains unchanged across almost all the ESs in a particular group. This indicates that the ranking order is independent of the control variable. In addition, this indicates that the ranking order is to a certain extent determined by the three non-tuning factors. We look into the values of the three non-tuning factors, and we find that the ranking order between the Top3-NN attack and NN_attack is Top3_NN NN_attack when only CV1 changes and the other three CVs remain the same, while their ranking order is NN_attack Top3_NN when only CV2 changes and the other three variables remain the same. The reason is that the Top3-NN attack and NN_attack are mainly based on the output probability of the target model, so changing only CV1, which does not affect the output probability of the target model, hardly affects their ranking order; changing only CV2, however, changes the distance between the data samples with high output probability and the data samples with low output probability of the target dataset to some extent, which affects the attack decision and finally changes the ranking order of the two attacks. Meanwhile,
from Figure <ref>,
we find that the
FNR is almost the same when the target datasets obey different distance distributions of data samples.
[Observation] The distance distributions of data samples in the target dataset have little effect on the
FNR.
§.§ RQ2: Effect of Distance between Data Samples of the Target Dataset
Methodology. For target datasets obeying different distance distributions, we first use Equation <ref> to calculate the distance between data samples (e.g., data samples with high output probability and data samples with low output probability, as classified in Section <ref>) of the target dataset. Then,
we verify the evaluation results
of 15 state-of-the-art MI attacks when only the distance between data samples (e.g., data samples with high output probability and data samples with low output probability) (CV2) of the target dataset is allowed to change and the other three CVs
(see Section <ref>) are left unchanged.
Results. From Figure <ref>,
we discover that the MA of the MI attacks increases with the increase of the distance between data samples of the target dataset.
Meanwhile, Table <ref> shows similar results: the distance between data samples on ImageNet, CH_MNIST, CIFAR10 and CIFAR100 increases in turn, and the MA increases accordingly. We observe that the ranking result of a particular MI attack (i.e., the Top2+True attack<cit.> (Class 1.1.2, see Figure <ref>)) remains unchanged across almost all the ESs in a particular group (i.e., when only CV2 changes and the other three CVs remain the same, the Top2+True attack<cit.> ranks among the top three to top five most effective attacks in 70.83% of the ESs). This indicates that the corresponding tuning variable probably does not affect the MI attack. To confirm this conclusion, we look into the algorithm of the MI attack and find the following: the Top2+True attack<cit.> mainly combines the top two output probabilities of the target model with the ground-truth labels. Therefore, when only CV2 changes and the other three CVs remain the same, the ground-truth labels of the target dataset are not affected, and thus the evaluation results of the Top2+True attack<cit.> change little.
We find that the Top3_NN attack<cit.> (Class 1.1.1, see Figure <ref>) is most affected when only CV2 changes and the other three CVs remain the same; for example, one particular evaluation scenario ES14 on CH_MNIST results in the following ranking:
Class 2.1 (except for the Label-only and Loss-Threshold attacks) Class 2.2.1 (except for the Shapley values attack) Class 1.1.1 (except for the NN attack ) Class 3.1.1 Class 3.1.2 Class 2.2.2 (except for the Distillation-based, PPV, LiRA attacks) Class 1.1.2 Class 3.1.3,
while another particular evaluation scenario ES24 on CH_MNIST results in the following ranking:
Class 2.1 (except for the Label-only and Loss-Threshold attacks) Class 2.2.1 (except for the Shapley values) Class 3.1.1 Class 3.1.2 Class 2.2.2 (except for the Distillation-based, PPV, LiRA attacks) Class 1.1.1 (except for the NN attack) Class 1.1.2 Class 3.1.3. The reason is that the Top3_NN attack<cit.> is mainly based on the top-three output probabilities of the target model, and when only CV2 changes, the difference between members and nonmembers may not be captured effectively by the Top3_NN attack, which affects the evaluation results.
We also observe that the ranking order between two particular MI attacks (i.e., the LiRA attack<cit.> and the PPV attack <cit.> (Class 2.2.2, see Figure <ref>)) remains unchanged across almost all the ESs in a particular group. This indicates that the ranking order is independent of the control variable. In addition, this indicates that the ranking order is to a certain extent determined by the three non-tuning factors. We look into the values of the three non-tuning factors, and we find that the ranking order between the LiRA and PPV attacks is PPV LiRA when only CV2 changes and the other three CVs remain the same, while their ranking order is LiRA PPV when only CV4 changes and the other three variables remain the same. The reason is that the PPV attack mainly considers the prior distribution of the data and classifies a target data example as a member when its per-instance loss is larger than a decision threshold, whereas the LiRA attack combines per-example difficulty scores with a principled and well-calibrated Gaussian likelihood estimate and utilizes a true-positive rate at low FPR
metric to help determine the value of the threshold. Therefore, changing only CV2 hardly affects their ranking order, since it affects neither the prior distribution of the training data of the target model nor the differences between members and nonmembers; changing only CV4, however, changes the ratio of the samples of the target dataset, and the removed samples that are made no inferences by an MI attack may have high difficulty scores, which increases the evaluation results of the LiRA attack and finally changes the ranking order of the two attacks. Moreover, from Figure <ref>,
we find that
the FNR is almost the same with the increase in sample distances.
[Observation] The larger the distance between data samples
of the target dataset, the higher the MA.
[Observation] The distance between data samples of the target dataset has little effect on the
FNR.
§.§ RQ3: Effect of Differential Distances between Two Datasets
Methodology. After constructing distance distributions of data samples and calculating the distance between data samples of the target dataset, we first generate a dataset with nonmembers via transforming existing samples into new samples or directly using the test data <cit.>, and then
we compute the differential distance between the two datasets before and after a data sample with high output probability (or a data sample with low output probability, as classified in Section <ref>) is moved from the target dataset to a nonmember dataset. We then obtain the average differential distance of the data samples with high output probability and that of the data samples with low output probability between the two datasets. Finally,
we verify the evaluation results
of 15 state-of-the-art MI attacks when only the differential distance between two datasets (e.g., the target dataset and the nonmember dataset) (CV3) is allowed to change and the other three CVs
(see Section <ref>) are left unchanged. Moreover, we design a distance interval threshold δ and compare the evaluation results when the distance between data samples of the target dataset and the differential distance between two datasets
are changed to the same distance threshold (e.g., δ), respectively.
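Conceptually, CV3 measures how much the dataset-level MMD changes when a single sample is moved from the target dataset into the nonmember dataset. A hedged sketch is given below, reusing the mmd() helper sketched earlier; the before/after bookkeeping is our assumption rather than the benchmark's exact implementation.

```python
# Illustrative differential distance (CV3) for one moved sample.
import numpy as np

def differential_distance(target_probs, nonmember_probs, idx, sigma=1.0):
    """target_probs, nonmember_probs: output-probability arrays; idx: sample to move."""
    before = mmd(target_probs, nonmember_probs, sigma)
    moved = target_probs[idx:idx + 1]
    target_after = np.delete(target_probs, idx, axis=0)
    nonmember_after = np.concatenate([nonmember_probs, moved], axis=0)
    after = mmd(target_after, nonmember_after, sigma)
    return abs(before - after)
```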
Results. From Figure <ref>
we find that the MA increases as the differential distance between the two datasets (i.e., the target dataset and the nonmember dataset) increases.
[Observation] The larger the differential distance between the two datasets, the higher the attacker-side MA.
From Figure <ref>, we find that the effect of the distance between data samples of the target dataset on MA is almost 0, whereas that of the differential distance between the two datasets is up to 30%, when the distance between data samples of the target dataset and the differential distance between the two datasets are changed to the same distance threshold (e.g., 0.1).
Moreover,
from Table <ref>, we find that
the change of the average distance between data samples of the target dataset on CIFAR10 (e.g., 0.113) is larger than that of the average differential distance between the two datasets (e.g., 0.082) when the MA is changed to the same threshold (e.g., 10%).
From Figure <ref>,
we find that the FNR is almost zero when the differential distance between two datasets changes.
[Observation] The effect of the differential distance between two datasets on MA is larger than that of the distance between data samples of the target dataset.
[Observation] The differential distance between two datasets has little effect on the
FNR.
We observe that the ranking result of a particular MI attack (i.e., Label-only<cit.> (Class 2.1, see Figure <ref>)) remains unchanged across almost all the ESs in a particular group (i.e., when only CV3 changes and the other three CVs remain the same, Label-only<cit.> ranks among the top three to top seven most effective attacks in 70.88% of the ESs). This indicates that the corresponding tuning variable probably does not affect the MI attack. To confirm this conclusion, we look into the algorithm of the MI attack and find the following: Label-only<cit.> mainly classifies a sample as a member if the predicted class is
the same as its ground-truth label. Therefore, when only CV3 changes and the other three CVs remain the same, the ground-truth labels of the target dataset are not affected, and thus there is little effect on the Label-only<cit.> attack.
We find that the BlindMI-Diff-w attack<cit.> (Class 3.1.1, see Figure <ref>) is most affected when only CV3 changes and the other three CVs remain the same; for example, one particular evaluation scenario ES03 on ImageNet results in the following ranking: Class 2.1 (except for the Label-only and Top1_Threshold attacks) Class 2.2.1 Class 3.1.1 Class 1.1.1 (except for the NN attack) Class 2.2.2 (except for the Calibrated Score, LiRA, PPV attacks) Class 3.1.3 Class 3.1.2 Class 1.1.2,
while another particular evaluation scenario ES05 on ImageNet results in the following ranking: Class 2.1 (except for the Label-only and Top1_Threshold attacks) Class 2.2.1 Class 1.1.1 (except for the Top3 NN attack) Class 2.2.2 (except for the Calibrated Score, LiRA attacks) Class 3.1.3 Class 3.1.2 Class 3.1.1 Class 1.1.2.
The reason is that the BlindMI-Diff-w attack <cit.> mainly utilizes the differences between
two datasets before and after a sample is moved; when only CV3 changes, the differential distance between the two datasets changes, which affects the evaluation results.
We also observe that the ranking order between two particular MI attacks (i.e., Label-only (Class 2.1, see Figure <ref>) and NN_attack (Class 1.1.1, see Figure <ref>)) remains unchanged across almost all the ESs in a particular group. This indicates that the ranking order is independent of the control variable. In addition, this indicates that the ranking order is to a certain extent determined by the three non-tuning factors. We look into the values of the three non-tuning factors, and we find that the ranking order between Label-only and NN_attack is NN_attack Label-only when only CV3 changes and the other three CVs remain the same, while their ranking order is Label-only NN_attack when only CV1 changes and the other three variables remain the same. The reason is that changing only CV3 hardly affects their ranking order, since it affects neither the ground-truth labels nor the output probabilities of the target dataset; changing only CV1, however, may leave NN_attack unable to effectively distinguish between members and nonmembers under the different distance distributions of data samples in the target dataset, and finally changes the ranking order of the two attacks.
§.§ RQ4: Effect of the Ratios of the Samples that are Made No Inferences by an MI Attack
Methodology. For each distance distribution of data samples, the distance between data samples and the differential distance between two datasets of the target dataset,
we first set different ratios of the samples that are made no inferences by an MI attack for image datasets (e.g., 20%, 40%, 45% and 49%) and text datasets (e.g., 2%, 4%, 10% and 12%), following the prior Privacy Risk Scores-based <cit.> and Shapley Values-based MI attacks <cit.>.
Then,
we verify the evaluation results
of 15 state-of-the-art MI attacks when only the ratio of the samples that are made
no inferences by an MI attack (CV4) is allowed to change and the other three CVs
(see Section <ref>) are left unchanged.
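One assumed way to realize CV4 is to let an attack abstain on a fixed fraction of samples. In the sketch below the abstained samples are the least confident ones; the actual selection criterion depends on the specific fine-grained attack (e.g., privacy risk scores or Shapley values), so this is illustrative only.

```python
# Hypothetical CV4 filter: exclude a fixed ratio of samples from inference.
import numpy as np

def apply_no_inference_ratio(scores, ratio):
    """scores: attack membership scores in [0, 1]; returns a mask of inferred samples."""
    n_skip = int(len(scores) * ratio)
    order = np.argsort(np.abs(scores - 0.5))      # closest to 0.5 = least confident
    keep = np.ones(len(scores), dtype=bool)
    keep[order[:n_skip]] = False
    return keep
```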
Results. From Figure <ref>, we find that the evaluation results are different when the ratios are different. We observe that the ranking result of a particular MI attack (i.e., NN_attack<cit.> (Class 1.1.1, see Figure <ref>)) remains unchanged across almost all the ESs in a particular group (i.e., when only CV4 changes and the other three CVs remain the same, NN_attack<cit.> ranks among the top five to top nine most effective attacks in 88.95% of the ESs). This indicates that the corresponding tuning variable probably does not affect the MI attack. To confirm this conclusion, we look into the algorithm of the MI attack and find the following: NN_attack<cit.> is mainly based on the output probability of the target
model. Therefore, when only CV4 changes and the other three CVs remain the same, the output probabilities of the data samples in the target dataset are not affected, and thus there is little effect on NN_attack<cit.>.
We find that the Shapley Values<cit.> attack (Class 2.2.1, see Figure <ref>) is most affected when only CV4 changes and the other three CVs remain the same; for example, one particular ES49 on ImageNet results in the following ranking: Class 2.1 (except for the Label-only and Top1_Threshold) Class 3.1.1 Class 1.1.1 (except for the Top3_NN attack) Class 2.2.1 (except for the Risk score attack) Class 2.2.2 (except for the LiRA, Calibrated Score attacks) Class 3.1.3 Class 3.1.2 Class 1.1.2,
while another particular ES50 on ImageNet results in the following ranking: Class 2.1 (except for the Label-only and Top1_Threshold attacks) Class 2.2.1 Class 1.1.1 (except for the NN attack) Class 2.2.2 (except for the PPV, Calibrated Score attacks) Class 3.1.3 Class 3.1.2 Class 3.1.1 Class 1.1.2.
The reason is that the Shapley Values<cit.> attack mainly sets meaningful thresholds on SHAPr scores to decide whether a data record is susceptible to MI attacks, and when only CV4 changes, the Shapley Values<cit.> attack selects the data samples that have high Shapley values to infer their membership, which affects the evaluation results.
We observe that the ranking results of a particular MI attack (i.e., BlindMI-Diff-w<cit.> (Class 3.1.1, see Figure <ref>)) remain unchanged across almost all the ESs in Group A (e.g., ES49: C100_U+4.325+0.085+20%, ES50: C100_U+4.325+0.085+40% and ES51: C100_U+4.325+0.085+45%). We also observe that the ranking results of the same MI attack remain unchanged across almost all the ESs in Group B (e.g., ES55: C100_U+4.325+0.157+45%, ES56: C100_U+4.325+0.157+49%). Note that Group A and Group B use the same tuning variable, namely CV4. However, the ranking results are different between the two groups. This indicates that the MI attack is probably affected by the differences between the values of the three non-tuning factors in the two groups. We look into these differences and find that the two groups differ in CV3 (e.g., 0.085 and 0.157), and BlindMI-Diff-w<cit.> mainly utilizes
the differences between two datasets before and after a sample is moved. Therefore, the BlindMI-Diff-w attack is most affected by CV3.
We also observe that the ranking results of the Shapley Values<cit.> attack (Class 2.2.1, see Figure <ref>) remain unchanged across almost all the ESs in Group A (e.g., ES31: C100_U+2.893+0.085+45%, ES33: C100_U+2.893+0.119+45% and ES35: C100_U+2.893+0.157+45%) and Group B (e.g., ES34: C100_U+2.893+0.119+49%, ES36: C100_U+2.893+0.157+49%). Note that Group A and Group B use the same tuning variable, namely CV3. However, the ranking results are different between the two groups (e.g., the second and the third). We look into these differences and find that the two groups differ in CV4 (e.g., 45% and 49%), and the Shapley Values attack sets meaningful thresholds on SHAPr
scores to distinguish members; when only CV4 changes, the Shapley Values attack selects the data samples that have high Shapley values to infer their membership. Therefore, the Shapley Values attack is most affected by CV4.
§ RELATED WORK
Membership Inference Attacks. Homer et al. <cit.> first proposed an MI attack on biological data.
Shokri et al. <cit.> proposed the first black-box MI attack against ML.
A large body of subsequent work extended MI attacks
to different scenarios (e.g., location data <cit.>,
language models <cit.>, sentence embeddings <cit.>, speech recognition models <cit.>, federated learning <cit.>, transfer learning <cit.>, generative models <cit.>, white box access <cit.>).
Categories of Membership Inference Attacks.
There are three main categories:
1) binary classifier-based MI attacks, which utilize the output predictions of shadow models to train a binary classifier to launch the MI attacks <cit.>; 2) evaluation metric-based MI attacks, which use predefined evaluation metrics to distinguish members from nonmembers <cit.>; and 3) differential comparison-based MI attacks (BlindMI-Diff), which
mainly utilize the differences between two datasets.
Defenses against MI attacks. Multiple defenses <cit.> have been proposed to mitigate MI attacks.
§ CONCLUSION
In this paper,
we seek to develop a comprehensive benchmark for comparing
different MI attacks. The primary design goal of our benchmark, called MIBench, is
to meet all of the above-mentioned four requirements (see Section <ref>). Our benchmark consists not only of the evaluation metrics but also of the evaluation scenarios. We design the evaluation scenarios
from four perspectives: the distance distribution of data samples in the target dataset, the distance between data samples of the target dataset, the differential distance between two datasets (i.e., the target dataset and a generated dataset with only nonmembers), and the ratio of the samples that are made no inferences by an MI attack. The evaluation metrics consist of ten typical evaluation metrics. We have identified three principles for the proposed “comparing different
MI attacks” methodology, and we have designed and implemented
the MIBench benchmark with 84 evaluation scenarios for each dataset. In total, we have used our benchmark to fairly and systematically
compare 15 state-of-the-art MI attack algorithms across 588 evaluation scenarios, and these evaluation scenarios cover 7 widely used datasets and 7 representative types of models.
§ A. DATASETS DESCRIPTION
CIFAR100. CIFAR100 is a widely used benchmark dataset for image classification, which consists of 60,000 images in 100 classes. We randomly select two disjoint sets of 10,000 images
as the target models and the shadow models' training datasets, respectively.
CIFAR10. CIFAR10 is a widely used dataset for evaluating image classification, which consists of 60,000 images in 10 classes. We randomly select two disjoint sets of 10,000 images
as the target models and the shadow models' training datasets, respectively.
CH_MNIST. CH_MNIST is a benchmark dataset of histological images used to evaluate human colorectal cancer, consisting of 5,000 histological images in 8 classes of tissues. We follow the same image processing methods and classification tasks as prior BlindMI-DIFF<cit.> to resize all images to 64×64. We randomly select two disjoint sets of 2,500 images
as the target models and the shadow models' training datasets, respectively.
ImageNet. Tiny-ImageNet is a widely used benchmark dataset for image classification, which is a subset of the ImageNet dataset and consists of 100,000 images in 200 classes. We randomly select two disjoint sets of 10,000 images
as the target models and the shadow models' training datasets, respectively.
Location30. Location30
contains location “check-in” records of
individuals. We obtain a
pre-processed
dataset from
<cit.>
which contains 5,010 data samples with 446 binary features corresponding to whether an individual has visited a particular location.
All data samples are clustered into 30
classes representing different geosocial types. The classification task is to predict the geosocial type based on the 446
binary features. Following Jia et al. <cit.>, we use 1,000 data
samples to train a target model.
Purchase100. Purchase100
contains shopping records
of
different individuals. We obtain a
pre-processed dataset from
<cit.>
containing 197,324 data samples with 600 binary features corresponding to a specific product.
All data samples are clustered into 100 classes representing different purchase styles. The classification task is to predict the purchase
style based on the 600 binary features. We follow Nasr et
al. <cit.> and use 10% of the data samples (19,732) to train a target model.
Texas100. Texas100 consists of Texas Department of State Health Services’ information about patients discharged from public hospitals.
Each data
record contains information about the injury, diagnosis, the
procedures the patient underwent and some demographic details.
We obtain
preprocessed dataset from
<cit.>, which contains 100 classes of patients' procedures and consists of 67,330 data
samples with 6,170 binary features.
The classification task is to predict the patient’s main procedure based on the patient’s information.
Following
<cit.>, we use 10,000 data samples to train a target model.
§ B. ORIGINAL AND CONSTRUCTED DISTANCE DISTRIBUTIONS OF DATA SAMPLES IN THE TARGET DATASET
Figure <ref> and Figure <ref> show the original and constructed distance distributions of data samples in the seven target datasets, respectively.
§ C. THE SELECTED DISTANCES OF THE TARGET DATASET
Table <ref> shows the selected distances between data samples (Dis_Between_Data) and differential distances between two datasets of the target dataset in our experiments.
|
http://arxiv.org/abs/2307.04504v1 | 20230710115604 | An Algorithm with Optimal Dimension-Dependence for Zero-Order Nonsmooth Nonconvex Stochastic Optimization | [
"Guy Kornowski",
"Ohad Shamir"
] | math.OC | [
"math.OC",
"cs.LG"
] |
|
http://arxiv.org/abs/2307.04363v1 | 20230710062314 | Diffusion dynamics of star-shaped macromolecules in dilute solutions | [
"Prabeen Kumar Pattnayak",
"Aloke Kumar",
"Gaurav Tomar"
] | cond-mat.soft | [
"cond-mat.soft"
] |
Polymer chains dissolved in a solvent take random conformations due to large internal degrees of freedom and are characterized geometrically by their average shape and size. The diffusive dynamics of such large macromolecules play an indispensable role in a plethora of engineering applications. The influence of the size of the polymer chain on its diffusion is well studied, whereas the same cannot be said for the shape of the polymer chain. In the present work, the influence of shape on the center-of-mass diffusion of the star-shaped chains in solution is investigated using Multi-particle Collision Dynamics. Star-shaped chains of varying degrees of functionality are modeled in a good solvent at infinite dilution. The radius of gyration(R_g) of the star-shaped chains follows a functionality-independent scaling law with the chain length(N), R_g ∼ N^ν, where ν∼ 0.627. The shape of the polymer chains is calibrated by relative shape anisotropy. Highly anisotropic star-shaped polymer chains are found to have a faster rate of diffusion along the translational direction due to a slower rate of rotational diffusion when the radius of gyration of the polymer chains is maintained constant.
§ INTRODUCTION
Polymeric fluids are a unique class of complex fluids that show a plethora of fascinating non-Newtonian behaviours<cit.>, which can be understood with the transport and rheological properties of the fluid. One key challenge of the complex fluid community is understanding how the macroscopic properties of the polymeric fluids arise from microscopic interactions of the macromolecules<cit.>. The polymer chain dissolved in a solvent can take a multitude of conformations due to its large internal degrees of freedom. The average shape and size are used to characterize a polymer chain geometrically: the radius of gyration is widely used to describe the size and the eigenvalues of the gyration tensor for the shape calibration<cit.>. Advances in controlled polymerization have led to synthesizing complex polymer structures like star-shaped, comb-shaped, H-shaped, ring, and many more.<cit.>. The diffusive dynamics of such complex polymer chains are of fundamental interest in the biophysics community and are ubiquitous in numerous engineering applications. Star-shaped polymers are used for the controlled delivery of drugs in biomedical applications<cit.> and as viscosity index modifiers in oil industries<cit.>. Polyethylene glycol stars are used for protein delivery<cit.>. Understanding macromolecular diffusion in a biological cell is important for its various functions, such as the movement of plasmids and transport of amino acids <cit.> <cit.>. Hence, the influence of the shape and size of the polymer chain on its diffusive dynamics in solution is essential.
Most studies on the diffusion of complex polymer chains have been done by keeping the length of the polymer chain constant. Using fluorescence microscopy, Robertson et al.<cit.> have shown a lower radius of gyration and a higher diffusion coefficient for circular DNA molecules than linear ones for the same chain length. Using Brownian dynamics simulations with hydrodynamic interaction, Kanaeda and Deguchi<cit.> have reported higher diffusion coefficients for the ring polymers than for the linear polymers of the same chain length. Hegde et al.<cit.> have reported similar findings for the ring polymers in comparison to the linear chains for the same chain length by using three different simulation techniques. Singh et al.<cit.> have shown, using Multi-particle Collision Dynamics(MPCD), that the center-of-mass diffusion coefficient of the star polymer chains decreases with an increase in their radius of gyration. Hence, when it comes to size, it is clear that the higher the size of the polymer chain, the lower the center-of-mass diffusion coefficient. However, it is difficult to comment on the influence of the shape of the polymer chain on its diffusion from the same polymer chain length study as both shape and size are distinct for different polymer chain architectures<cit.>. The diffusion study of complex polymer chains for the same size case is scanty. Hegde et al.<cit.> have reported a higher diffusion coefficient for the linear chains than the ring chains for the same radius of gyration using Molecular Dynamics, MPCD, and the Lattice Boltzmann method and also noted that size could not be the only factor that influences the diffusion of the chain. Therefore, the effect of the shape parameter on the center-of-mass diffusion of the polymer chains still remains an open question.
In this work, the effect of the shape parameter on the center-of-mass diffusion of the star-shaped polymer chains in solution is studied in the limit of infinite dilution using a mesoscopic coarse-grained simulation method, namely MPCD. For simulating the Brownian motion of the complex polymer chains in a solution, MPCD is frequently used as it incorporates both thermal fluctuation and long-range hydrodynamic interactions<cit.>. At first, the shape and size of star-shaped polymer chains with different functionality are analyzed using the gyration tensor and compared with linear polymer chains at the same chain length. Subsequently, the translational diffusion of six different types of chains (one linear and five star-shaped chains) with the same radius of gyration is studied using the center-of-mass mean square displacement, followed by their rotational diffusion using the reorientation correlation function. Finally, the diffusion study is correlated to the shape characterization study in order to find the effect of the shape parameter on the center-of-mass diffusion of the star-shaped polymer chains in a solution.
§ NUMERICAL FORMULATION
The coarse-grained bead-spring model represents the polymer chains dissolved in the solution. To replicate good solvent conditions, the excluded volume interactions between the monomer beads are modeled using the repulsive part of the 12-6 Lennard-Jones (LJ) potential, also known as Weeks-Chandler-Andersen potential<cit.> (U_WCA), defined as:
U_WCA(r) =
4ε[ ( σ_p/r)^12 - ( σ_p/r)^6 ] + ε,   r ≤ 2^{1/6}σ_p
0,   otherwise
where σ_p is the diameter of a monomer bead, r is the distance between two beads and ε = k_BT is the strength of interaction, k_B is the Boltzmann’s constant and T is temperature. The neighboring monomers of the polymer chain are connected with springs, the potential of which is given by Finitely Extensible Nonlinear Elastic (FENE)<cit.>, defined as:
U_FENE(r) =
-1/2 k r_0^2 ln[ 1 - (r/r_0)^2 ],   r ≤ r_0
∞,   otherwise
where k is the spring constant, and r_0 is the maximum extension length. The values of k and r_0 are 30 k_BT/σ_p^2 and 1.5 σ_p, respectively, as recommended by Kremer and Grest<cit.>. The bead-spring model, the FENE potential, and the Kremer and Grest parameters are widely used in coarse-grained modeling of polymer chains<cit.>. The star-shaped polymer chains of varying degrees of functionality (number of arms) have been modeled by connecting different linear arms at their ends instead of connecting them to a single central monomer, ensuring equal flexibility of the arms for all functionalities.
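For reference, the two potentials can be evaluated directly in reduced units; the short sketch below follows the parameter values quoted above (the NumPy implementation itself is ours, not part of the simulation input).

```python
# Hedged sketch of the pair potentials in reduced LJ units
# (eps = k_B T, k = 30 k_B T / sigma_p^2, r0 = 1.5 sigma_p).
import numpy as np

EPS, SIGMA_P = 1.0, 1.0
K_FENE, R0 = 30.0, 1.5 * SIGMA_P

def u_wca(r):
    """Purely repulsive WCA potential, cut and shifted at 2^(1/6) sigma_p."""
    r = np.asarray(r, dtype=float)
    cut = 2.0 ** (1.0 / 6.0) * SIGMA_P
    sr6 = (SIGMA_P / r) ** 6
    u = 4.0 * EPS * (sr6 ** 2 - sr6) + EPS
    return np.where(r <= cut, u, 0.0)

def u_fene(r):
    """FENE bond term; diverges as r approaches the maximum extension r0."""
    r = np.asarray(r, dtype=float)
    return np.where(r < R0, -0.5 * K_FENE * R0 ** 2 * np.log(1.0 - (r / R0) ** 2), np.inf)
```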
The solvent is modeled explicitly as an ensemble of non-interacting point particles of finite mass (m) using a mesoscopic coarse-grained simulation technique, MPCD<cit.>. MPCD consists of alternating streaming and collision steps. In the streaming step, the MPCD particles with velocity v_i undergo ballistic motion and their positions (r_i) are updated as:
r_i( t + δ t) = r_i(t) + δ t v_i(t)
In the collision step, the simulation box is divided into cubic cells of equal size(a), and all the particles within a cell undergo stochastic collision. The collision of the MPCD particles is modeled using a momentum-conserving version of the Andersen Thermostat, also known as MPCD-AT<cit.>, in which the particle velocities (v_i) are updated as:
v_i(t + δ t) = v_cm(t) + v_i^ran - Δv_cm^ran
where v_cm is the center-of-mass velocity of the collision cell, v_i^ran is a random velocity selected from a Maxwell-Boltzmann distribution, and Δv_cm^ran is the change in center-of-mass velocity of the collision cell due to the addition of v_i^ran. During the streaming interval of MPCD, the positions, and velocities of the monomer beads evolve by the velocity-Verlet algorithm<cit.> with a time step δ t_MD. During the collision step, the monomers are considered MPCD particles and undergo stochastic collisions. The three components of v_i^ran are selected from a Gaussian distribution with variance k_BT/m for the solvent particles and k_BT/M for the monomer beads, where M is the mass of a monomer. This way of considering the monomers just like other MPCD particles in the collision step for modeling solvent-monomer interaction is often used in recent studies<cit.><cit.><cit.><cit.><cit.> due to its advantage of avoiding spurious depletion forces<cit.> which could lead to breakage of FENE bonds. Galilean invariance is ensured by randomly shifting the cells before each collision step by a vector with the three components randomly chosen from [-a/2,a/2]<cit.>. All the simulations have been performed using the MPCD-AT routines in LAMMPS<cit.>(Chen et al.<cit.> <cit.>).
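To make the streaming and collision rules concrete, the following is a heavily simplified, schematic MPCD-AT step for point particles. The cell indexing and thermostat bookkeeping are illustrative assumptions and do not reproduce the LAMMPS routines used in this work.

```python
# Schematic MPCD-AT update: streaming plus Andersen-thermostat collision per cell.
import numpy as np

def mpcd_at_step(pos, vel, mass, box, a, dt, kBT, rng):
    # Streaming step: ballistic motion with periodic wrapping
    pos = (pos + dt * vel) % box
    # Random grid shift to restore Galilean invariance
    shift = rng.uniform(-a / 2, a / 2, 3)
    cells = np.floor((pos + shift) / a).astype(int) % int(box / a)
    keys = cells[:, 0] + 1000 * (cells[:, 1] + 1000 * cells[:, 2])
    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        m = mass[idx][:, None]
        v_cm = (m * vel[idx]).sum(0) / m.sum()
        v_ran = rng.normal(0.0, np.sqrt(kBT / mass[idx])[:, None], (len(idx), 3))
        dv_cm = (m * v_ran).sum(0) / m.sum()
        vel[idx] = v_cm + v_ran - dv_cm          # momentum-conserving AT collision rule
    return pos, vel
```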
The size of the collision cells is taken to be the same as the size of the monomer beads, a = σ_p. The average density of the MPCD solvent equals 5m/σ_p^3. The mass of a monomer(M) is taken as 5m to achieve neutral buoyancy. The MD time step (δ t_MD) equals 0.002τ. The MPCD collision time step (δ t) is 0.09τ, where τ is the intrinsic unit of time equals √(mσ_p^2/k_BT). The resulting viscosity and Schmidt number(Sc) of the MPCD fluid are 4 k_BT/σ_p^3 and 12, respectively. The size of the cubic simulation box is increased with polymer chain length to avoid the finite size effects following the previous studies.<cit.><cit.> Box size(L) equals 32σ_p for the set of chain lengths { 24σ_p, 36 σ_p, 48σ_p } , 48 σ_p for the set of chain lengths { 60σ_p, 84 σ_p }, and 64σ_p for the set of chain lengths { 108σ_p, 192 σ_p }. The equilibration simulation run is performed for 2×10^6 MD time steps. The results are time averaged over 5×10^8 MD time steps and ensemble-averaged over five system replicas, each with a unique set of random velocities at starting of the simulation and during the stochastic collision, both taken from Maxwell-Boltzmann distribution. The measured parameters will be expressed in reduced units using the energy scale k_BT, length scale σ_p, and mass scale m. Periodic boundary conditions are implemented in all directions. The snapshots of the simulations are shown in Figure <ref>.
To validate the MPCD routines, the Brownian motion of 250 colloidal particles of the same size as the monomer beads is modeled in the MPCD solvent. The variation of their average mean square displacement (MSD) with lag time (Δ t) is plotted in Figure <ref>(a). The variation of MSD with lag time typically follows a power law, MSD ∝Δ t^b, so the dynamics of the solutes can be diffusive (b=1), sub-diffusive (b<1), or super-diffusive (b>1). From Figure <ref>(a), we obtain b = 1, as expected for colloids, confirming that the simulation captures their diffusive dynamics. Further, the radius of gyration (R_g) is plotted against chain length (N) for the linear chain in Figure <ref>(b). A power-law behavior is observed with a scaling exponent of 0.623 for the linear chain; the value reported by Chen et al.<cit.> using similar simulation parameters is 0.61. Linear chains have been studied widely, and their scaling exponent values under good solvent conditions are summarized in Table <ref>. The exponent calculated in the present work agrees well with these earlier studies. In addition, the diffusion coefficient (D) is calculated from the center-of-mass mean square displacement (MSD) vs. lag time plot for linear chains of different lengths. The variation of D with N is shown in Figure <ref>(b). It also follows a power law, D ∼ N^-ν_d, with ν_d = 0.622. The agreement of ν_d with ν confirms the Zimm prediction for the diffusion of a polymer chain with intra-chain hydrodynamic interactions, D ∼1/R_g.
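The b-exponent check above reduces to a log-log fit of the MSD against lag time. A small sketch of that analysis is given below, assuming an unwrapped trajectory array `positions` of shape (n_frames, n_particles, 3) and a frame spacing `dt_frame` (both hypothetical names).

import numpy as np

def mean_square_displacement(positions, lags):
    """Average MSD over particles and time origins for the given (positive) lag indices."""
    msd = np.empty(len(lags))
    for k, lag in enumerate(lags):
        disp = positions[lag:] - positions[:-lag]      # (n_origins, n_particles, 3)
        msd[k] = (disp ** 2).sum(axis=-1).mean()
    return msd

# Illustrative usage: fit MSD ~ dt^b on a log-log scale; b ~ 1 indicates diffusive
# dynamics, and in the linear regime D = MSD / (6 * dt) for the centre of mass.
# positions = load_unwrapped_trajectory(...)          # hypothetical loader
# lags = np.unique(np.logspace(0, 3, 30).astype(int))
# msd = mean_square_displacement(positions, lags)
# b, log_prefactor = np.polyfit(np.log(lags * dt_frame), np.log(msd), 1)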
The difference between a good and a poor solvent can be demonstrated by introducing attraction between monomer beads, using the standard 12-6 LJ potential with a cutoff of 2.5σ_p for the pairwise interaction instead of the WCA potential, as explained by Peng et al.<cit.>. In a good solvent the polymer chain forms a coil, whereas in a poor solvent it collapses into a globule. This coil-to-globule transition is shown in Figure <ref> and can be observed in the radius of gyration, which is approximately 5σ_p in the good solvent for a chain length of 56σ_p, compared to 2σ_p in the poor solvent. The reduction in chain size is also visible in the resulting diffusion coefficients, which are 0.0061σ_p^2/τ and 0.0032σ_p^2/τ for the poor and good solvent, respectively.
§ RESULTS AND DISCUSSION
§.§ Shape and size of star-shaped chains
The gyration tensor (S) of a polymer chain is defined as the dyadic product of the position vector (P_i) of a monomer bead in the center-of-mass reference frame with its transpose, averaged over all the monomers of the chain<cit.>.
S = 1/N∑_i=1^NP_i P_i^T, P_i = [ x_1^i - x_1^cm; x_2^i - x_2^cm; x_3^i - x_3^cm ]
where (x_1^cm,x_2^cm,x_3^cm) represents the center-of-mass of the polymer chain consisting of N identical monomers located at (x_1^i,x_2^i,x_3^i), calculated as follows:
x_1^cm = 1/N∑_i=1^Nx_1^i,
x_2^cm = 1/N∑_i=1^Nx_2^i,
x_3^cm = 1/N∑_i=1^Nx_3^i
The elements of S can be written using indicial notation as:
S_pq = 1/N∑_i=1^N (x_p^i - x_p^cm)(x_q^i - x_q^cm)
The polymer chain's shape and size can be readily quantified using the eigenvalues of S, i.e., λ_1, λ_2, and λ_3. The radius of gyration (R_g) represents the size of the polymer chain; its square equals the trace of the gyration tensor<cit.>.
R_g^2 = Tr(S) = λ_1 + λ_2 + λ_3
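For reference, the gyration tensor and R_g^2 defined above can be computed per chain with a few lines of NumPy; `r` is assumed to be an (N, 3) array of monomer coordinates unwrapped across the periodic boundaries.

import numpy as np

def gyration_tensor(r):
    """Gyration tensor S_pq = (1/N) sum_i (r_i - r_cm)_p (r_i - r_cm)_q."""
    p = r - r.mean(axis=0)          # positions relative to the centre of mass
    return p.T @ p / len(r)         # 3x3 symmetric tensor

def gyration_radius_squared(r):
    """Return R_g^2 = trace(S) together with the eigenvalues sorted in descending order."""
    eigvals = np.linalg.eigvalsh(gyration_tensor(r))
    return eigvals.sum(), np.sort(eigvals)[::-1]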
We evaluate R_g for four different chain types: linear, 3-armed star, 4-armed star, and 6-armed star, using seven different chain lengths; the results are summarized in Figure <ref>(a). The radius of gyration follows a power law with polymer chain length, R_g ∼ N^ν, whose exponent (ν) reflects the solvent quality. The values of ν calculated in the present simulations are 0.623, 0.627, 0.631, and 0.626 for the linear, 3-armed star, 4-armed star, and 6-armed star chains, respectively. The scaling law is found to be independent of the functionality of the star-shaped chains, with an average value of ν≈ 0.627, indicating similar scaling behavior of the linear and star-shaped chains under good solvent conditions<cit.><cit.><cit.>. Table <ref> summarizes the values of ν for linear chains reported by experiments, simulation, and theory; the calculated average value of ν is in good agreement with these existing results. For the same chain length, linear chains are larger than star-shaped chains, and among the star-shaped chains, size decreases with increasing functionality, i.e., number of arms, as expected. The reduction in size of a branched chain relative to a linear chain of the same length is measured by the geometrical shrinking factor (g_s), defined as the ratio of the mean squared radius of gyration of the branched chain to that of the linear chain, g_s = ⟨ R_g,b^2 ⟩/⟨ R_g,l^2 ⟩<cit.>. The values of g_s for star chains with different functionalities and chain lengths are shown in Figure <ref>(b). We note that g_s does not vary much with chain length for a given chain type. The value of g_s for a star chain with f=5 reported by Khabaz and Khare<cit.> is approximately 0.5, which falls between the values of 0.41 (6-armed star) and 0.58 (4-armed star) calculated in the present work and suggests a linear variation of g_s with f.
The shape of the polymer chains can be characterized using the asphericity (b) and the relative shape anisotropy (κ^2), defined as follows<cit.><cit.>:
b = λ_1 - (λ_2 + λ_3)/2, λ_1 ≥λ_2 ≥λ_3
κ^2 = 1 - 3 λ_1 λ_2 + λ_2 λ_3 + λ_3 λ_1/(λ_1 + λ_2 + λ_3)^2
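Given the eigenvalues of S sorted as λ_1 ≥ λ_2 ≥ λ_3, the two shape parameters follow directly; a minimal sketch (with the asphericity reported in its normalized form b/R_g^2, as in the figures):

import numpy as np

def shape_parameters(eigvals):
    """Normalized asphericity b/R_g^2 and relative shape anisotropy kappa^2
    from the gyration-tensor eigenvalues, sorted as l1 >= l2 >= l3."""
    l1, l2, l3 = np.sort(eigvals)[::-1]
    rg2 = l1 + l2 + l3
    b = l1 - 0.5 * (l2 + l3)
    kappa2 = 1.0 - 3.0 * (l1 * l2 + l2 * l3 + l3 * l1) / rg2 ** 2
    return b / rg2, kappa2

# Sanity checks: a rigid rod (l2 = l3 = 0) gives (1, 1); a sphere (l1 = l2 = l3) gives (0, 0).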
The variation of the two shape parameters with polymer chain length is given in Figure <ref>. The asphericity values are normalized by R_g^2 to make them independent of size. For each individual architecture, the values of both shape parameters do not vary much over the chain lengths. The normalized asphericity can take any value between 0 and 1: it is 0 for a sphere or any of the Platonic solids and 1 for a rod-like structure. As expected, the linear chains have higher asphericity values than the star-shaped chains. Its value is ∼ 0.64 for the linear chain in the present work, which is close to 0.625 (calculated using eigenvalues) reported by Koyama<cit.> and 0.66 reported by Theodorou and Suter<cit.>. Among the star-shaped chains, the normalized asphericity decreases with increasing functionality. Khabaz and Khare<cit.> have reported b/R_g^2 ∼ 0.362 for the 5-armed star chain, which falls between 0.29 (6-armed star) and 0.4 (4-armed star) in the present work. The other shape parameter, the relative shape anisotropy (κ^2), also varies between 0 and 1: it is 1 for a rigid rod and 0 for a sphere and all Platonic solids. It is likewise higher for linear chains than for the star-shaped ones, as expected. In the present work, the value of κ^2 for the linear chain is ∼ 0.45, which is nearly the same as the 0.44 reported by Khabaz and Khare<cit.>. As anticipated, for star-shaped chains κ^2 decreases with increasing functionality. In the present work, κ^2 ∼ 0.3 (3-armed star), 0.21 (4-armed star), and 0.12 (6-armed star), which are slightly lower than 0.3454 (3-armed star), 0.2463 (4-armed star), and 0.1512 (6-armed star), respectively, reported by Zifferer<cit.>. The value of κ^2 reported by Khabaz and Khare<cit.> for a 5-armed star chain is ∼ 0.16, which falls between 0.12 (6-armed star) and 0.21 (4-armed star) calculated in the present work. To summarize, linear chains are less spherical and more anisotropic than star-shaped chains, and among the star-shaped chains, higher functionality corresponds to a more spherical and less anisotropic shape. The variation of the shape parameters with functionality is plotted in Figure <ref>(a) and correlated with the diffusion coefficients in a later section.
§.§ Translational diffusion of star-shaped chains
The diffusion rate of a polymer chain in a solution can be measured from the variation of the center-of-mass mean square displacement (MSD) with lag time. One linear and five star-shaped chains are modeled to investigate the influence of the shape of the polymer chain on its diffusion. The effect of chain size is eliminated by selecting the chain length (N) such that the resulting radius of gyration is approximately 5σ_p for all six types; the simulation box size is 48σ_p in all six cases. The MSD (Δ r^2) vs. lag time (Δ t) plot is shown in Figure <ref>. At short times (less than 400τ), the MSD increases faster than linearly with time due to the inertia of the chain. At longer times, the MSD reaches the linear diffusive regime, from which the diffusion coefficients (D) are calculated using the relation Δ r^2 = 6DΔ t; they are summarized in the second-to-last column of Table <ref>.
The linear chain can be considered a star chain with f=2 and has the highest diffusion coefficient. Among the star chains, the diffusion coefficient decreases with increasing functionality. Using the values of the diffusion coefficients, the hydrodynamic radius of the polymer chains can be calculated as<cit.>,
R_H = k_BT/6 πη D
where D is the translational diffusion coefficient of the polymer chain, and η is the solvent viscosity. The ratio of the radius of gyration and hydrodynamic radius, ρ = R_g/R_H, is a size-independent quantity and represents the effect of the architecture of the polymer chain on its diffusion. The variation of ρ with the functionality of the star polymers is plotted in Figure <ref>. We note that ρ decreases with an increase in the functionality of the star chain, as reported by Huber et al. <cit.> and Singh et al. <cit.>. Since all six types of chains are of the same size, this difference in the diffusion coefficient values can only be attributed to their shape parameters. In Figure <ref>, we have shown that the linear chain is more anisotropic and less spherical than the star-shaped chains, and among the star chains, κ^2 and b/R_g^2 decrease with increased functionality. Hence, the higher a star chain's relative shape anisotropy and normalized asphericity, the faster it diffuses along the translational direction. We investigate this further by computing the rotational diffusion of the polymer chains.
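As a worked example of the R_H relation above, take the MPCD fluid viscosity η = 4 k_BT/σ_p^3 quoted in the Methods and, purely for illustration, the good-solvent linear-chain value D ≈ 0.0032 σ_p^2/τ mentioned earlier for a chain with R_g ≈ 5σ_p (the actual values for the six architectures are those of Table <ref>):

import math

kBT, eta = 1.0, 4.0     # reduced units; solvent viscosity from the Methods section
D = 0.0032              # sigma_p^2/tau, illustrative linear-chain value quoted in the text
Rg = 5.0                # sigma_p, size matched across the six architectures

RH = kBT / (6.0 * math.pi * eta * D)   # hydrodynamic radius, ~4.1 sigma_p for this input
rho = Rg / RH                          # size-independent ratio, ~1.2 for this input
print(f"R_H = {RH:.2f} sigma_p, rho = R_g/R_H = {rho:.2f}")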
§.§ Rotational diffusion of star-shaped chains
The polymer chain continuously reorients itself in the solution while diffusing along the translational direction. Because the gyration tensor is symmetric, it has real eigenvalues and orthogonal eigenvectors, which allows the polymer chain to be approximated by an ellipsoid<cit.>. The reorientation of the polymer chain is then equivalent to the rotation of this imaginary ellipsoid, as explained using a schematic representation in Figure <ref>. Any vector rigidly attached to the polymer chain can be used to measure the rate of reorientation. In this work, the eigenvector (e_1) corresponding to the largest eigenvalue (λ_1) of the gyration tensor is selected for measuring the rate of reorientation of the corresponding polymer chain. The relevant reorientational correlation function of the polymer chain is defined as,
C(t) = ⟨ P_2(e_1(0).e_1(t)) ⟩
where P_2(x)=(3x^2 - 1)/2 is the second-order Legendre polynomial, and the angle brackets denote the time average and the ensemble average over the five system replicas. For an isotropically reorienting polymer chain, following Wong et al.<cit.>, the reorientational correlation function can be approximated as,
C(t) = e^-6D_Rt
where D_R is the rotational diffusion coefficient of the polymer chain.
The variation of the reorientational correlation function with time is plotted in Figure <ref>. We note that C(t) decays faster for the star-shaped chains than for the linear chain, and among the star-shaped chains, the higher the functionality, the faster the decay of C(t). The rotational diffusion coefficients are calculated from the exponential fit using the least-squares method and are summarized in the last column of Table <ref>; the corresponding coefficient of determination (R^2) exceeds 0.99 in all cases. The faster the decay of the reorientational correlation function, the higher the value of D_R. The linear chain has the lowest value of D_R, and among the star chains, D_R increases with increasing functionality. As discussed earlier for translational diffusion, this difference in the rotational diffusion coefficients arises from the shape parameters, since all six types of polymer chains considered here are of the same size. In terms of shape parameters, star polymer chains with lower values of relative shape anisotropy and normalized asphericity have a higher rate of rotational diffusion. Lower values of κ^2 and b/R_g^2 correspond to a more symmetric distribution of monomers with respect to the coordinate axes, and it is intuitive that a star-shaped chain reorients faster when its monomer distribution is more symmetric. Note that the variation of the rotational diffusion coefficient with functionality and shape parameters is opposite to that of the translational diffusion coefficient.
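The rotational analysis described above can be sketched as follows, assuming `e1` is an (n_frames, 3) array of unit eigenvectors of the largest eigenvalue sampled every `dt_frame` (hypothetical names); for brevity the exponential fit is done as a linear least-squares fit of log C(t) rather than a full nonlinear fit.

import numpy as np

def reorientational_correlation(e1, lags):
    """C(dt) = < P2( e1(t) . e1(t+dt) ) >, averaged over time origins."""
    c = np.empty(len(lags))
    for k, lag in enumerate(lags):
        cos = (e1[:-lag] * e1[lag:]).sum(axis=1)   # dot products e1(t).e1(t+lag)
        c[k] = (1.5 * cos ** 2 - 0.5).mean()       # second Legendre polynomial
    return c
    # P2 is even in the cosine, so the arbitrary sign of the eigenvector does not matter.

def rotational_diffusion_coefficient(lag_times, c):
    """Fit C(t) = exp(-6 D_R t) on the well-resolved part of the decay."""
    mask = c > 0.05                                # restrict the fit to avoid the noisy tail
    slope, _ = np.polyfit(lag_times[mask], np.log(c[mask]), 1)
    return -slope / 6.0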
§.§ Correlation of diffusion and shape parameters of star-shaped chains
The variation of the shape parameters and of the two types of diffusion coefficients with the functionality of the chains is plotted in Figure <ref>(a) and Figure <ref>(b), respectively, where the linear chain is treated as a star chain with f = 2. Of the two shape parameters, the relative shape anisotropy (κ^2) can be expressed in terms of the invariants of the gyration tensor and provides an overall measure of shape anisotropy<cit.>. Comparing the two types of diffusion coefficients in Figure <ref>(b) with the relative shape anisotropy in Figure <ref>(a), it can be seen that star-shaped chains with higher κ^2 values have a higher D and a lower D_R. The origin of a polymer chain's translational and rotational diffusive motion is its collisions with the surrounding solvent particles. The radius of gyration can be interpreted as the radius of the imaginary sphere surrounding the polymer chain in the solution; maintaining the same R_g for all six types of chains leads to the same size of this imaginary sphere, so all six types of chains interact with approximately the same number of solvent particles on average. Highly spherical and isotropic star-shaped polymer chains expend more of this energy in rotational diffusion, leaving less for diffusion along the translational direction, and the opposite holds for highly anisotropic star-shaped chains. Hence, the higher the relative shape anisotropy of a star-shaped chain, the slower its rotational diffusion and the faster its translational diffusion, as shown in Figure <ref>(b).
The variation of the translational and rotational diffusion coefficients with the relative shape anisotropy of the star-shaped chains having the same R_g is shown in Figure <ref>. Higher values of κ^2 lead to a lower rotational diffusion coefficient and a higher translational diffusion coefficient. From these results, we conclude that a star polymer chain with a higher value of relative shape anisotropy has a slower rate of rotational diffusion and diffuses faster in the translational direction. Although this is demonstrated using star-shaped chains, the argument can be extended to other polymer configurations as well. Hegde et al. have reported that linear chains have higher translational diffusion coefficients than ring chains when both have the same radius of gyration<cit.>. From the definition of relative shape anisotropy (Equation <ref>), it is intuitive that a linear polymer chain has a higher value of κ^2 than a ring polymer chain, so our argument also holds for the ring vs. linear case. Nevertheless, verifying this argument for generic polymer configurations requires the study of other chain architectures.
§ CONCLUSION
In this work, the Brownian diffusion of linear and star-shaped polymer chains of different functionalities is simulated using MPCD. It is shown that the radius of gyration of the star-shaped polymer chains follows a functionality-independent scaling law with chain length, with a scaling exponent ν≈ 0.627. The linear chain is more anisotropic than the star-shaped chains, and for star-shaped chains the relative shape anisotropy decreases with increasing functionality. For the same radius of gyration, the linear chain diffuses faster along the translational direction and has a slower rate of rotational diffusion than the star-shaped chains. Among star-shaped chains with the same radius of gyration, higher functionality leads to a higher rotational diffusion coefficient and a slower rate of diffusion along the translational direction. In terms of the shape parameters, we conclude that star-shaped chains with higher values of relative shape anisotropy have a slower rate of rotational diffusion and therefore diffuse faster along the translational direction. Hence, shape anisotropy leads to faster center-of-mass diffusion of star-shaped polymer chains in solution.
G.T. acknowledges partial support from the Department of Science and Technology National Supercomputing Mission HPC system in the Supercomputing Education and Research Center-Indian Institute of Science. A.K. acknowledges partial support from SERB CRG/2022/005381. P.K.P. acknowledges partial support from the Ministry of Education, Government of India.
|
http://arxiv.org/abs/2307.04722v1 | 20230710173215 | Advances and Challenges in Meta-Learning: A Technical Review | ["Anna Vettoruzzo", "Mohamed-Rafik Bouguelia", "Joaquin Vanschoren", "Thorsteinn Rögnvaldsson", "KC Santosh"] | cs.LG | ["cs.LG"] |
Meta-learning empowers learning systems with the ability to acquire knowledge from multiple tasks, enabling faster adaptation and generalization to new tasks. This review provides a comprehensive technical overview of meta-learning, emphasizing its importance in real-world applications where data may be scarce or expensive to obtain. The paper covers the state-of-the-art meta-learning approaches and explores the relationship between meta-learning and multi-task learning, transfer learning, domain adaptation and generalization, self-supervised learning, personalized federated learning, and continual learning. By highlighting the synergies between these topics and the field of meta-learning, the paper demonstrates how advancements in one area can benefit the field as a whole, while avoiding unnecessary duplication of efforts. Additionally, the paper delves into advanced meta-learning topics such as learning from complex multi-modal task distributions, unsupervised meta-learning, learning to efficiently adapt to data distribution shifts, and continual meta-learning.
Lastly, the paper highlights open problems and challenges for future research in the field. By synthesizing the latest research developments, this paper provides a thorough understanding of meta-learning and its potential impact on various machine learning applications. We believe that this technical overview will contribute to the advancement of meta-learning and its practical implications in addressing real-world problems.
Keywords: Meta-learning, transfer learning, few-shot learning, representation learning, deep neural networks
§ INTRODUCTION
§.§ Context and motivation
Deep representation learning has revolutionized the field of machine learning by enabling models to learn effective features from data. However, it often requires large amounts of data for solving a specific task, making it impractical in scenarios where data is scarce or costly to obtain. Most existing approaches rely on either supervised learning of a representation tailored to a single task, or unsupervised learning of a representation that captures general features that may not be well-suited to new tasks. Furthermore, learning from scratch for each task is often not feasible, especially in domains such as medicine, robotics, and rare language translation where data availability is limited.
To overcome these challenges, meta-learning has emerged as a promising approach. Meta-learning enables models to quickly adapt to new tasks, even with few examples, and generalize across them. While meta-learning shares similarities with transfer learning and multitask learning, it goes beyond these approaches by enabling a learning system to learn how to learn. This capability is particularly valuable in settings where data is scarce, costly to obtain, or where the environment is constantly changing. While humans can rapidly acquire new skills by leveraging prior experience and are therefore considered generalists, most deep learning models are still specialists and are limited to performing well on specific tasks. Meta-learning bridges this gap by enabling models to efficiently adapt to new tasks.
§.§ Contribution
This review paper primarily discusses the use of meta-learning techniques in deep neural networks to learn reusable representations, with an emphasis on few-shot learning; it does not cover topics such as AutoML and Neural Architecture Search <cit.>, which are out of scope.
Distinct from existing surveys on meta-learning, such as <cit.>, this review paper highlights several key differentiating factors:
* Inclusion of advanced meta-learning topics. In addition to covering fundamental aspects of meta-learning, this review paper delves into advanced topics such as learning from multimodal task distributions, meta-learning without explicit task information, learning without data sharing among clients, adapting to distribution shifts, and continual learning from a stream of tasks. By including these advanced topics, our paper provides a comprehensive understanding of the current state-of-the-art and highlights the challenges and opportunities in these areas.
* Detailed exploration of relationship with other topics. We not only examine meta-learning techniques but also establish clear connections between meta-learning and related areas, including transfer learning, multitask learning, self-supervised learning, personalized federated learning, and continual learning. This exploration of the relationships and synergies between meta-learning and these important topics provides valuable insights into how meta-learning can be efficiently integrated into broader machine learning frameworks.
* Clear and concise exposition. Recognizing the complexity of meta-learning, this review paper provides a clear and concise explanation of the concepts, techniques and applications of meta-learning. It is written with the intention of being accessible to a wide range of readers, including both researchers and practitioners. Through intuitive explanations, illustrative examples, and references to seminal works, we facilitate readers' understanding of the foundation of meta-learning and its practical implications.
* Consolidation of key information. As a fast-growing field, meta-learning has information scattered across various sources. This review paper consolidates the most important and relevant information about meta-learning, presenting a comprehensive overview in a single resource. By synthesizing the latest research developments, this survey becomes an indispensable guide to researchers and practitioners seeking a thorough understanding of meta-learning and its potential impact on various machine learning applications.
By highlighting these contributions, this paper complements existing surveys and offers unique insights into the current state and future directions of meta-learning.
§.§ Organization
In this paper, we provide the foundations of modern deep learning methods for learning across tasks. To do so, we first define the key concepts and introduce relevant notations used throughout the paper in section <ref>. Then, we cover the basics of multitask learning and transfer learning and their relation to meta-learning in section <ref>. In section <ref>, we present an overview of the current state of meta-learning methods and provide a unified view that allows us to categorize them into three types: black-box meta-learning methods, optimization-based meta-learning methods, and meta-learning methods that are based on distance metric learning <cit.>. In section <ref>, we delve into advanced meta-learning topics, explaining the relationship between meta-learning and other important machine learning topics, and addressing issues such as learning from multimodal task distributions, performing meta-learning without provided tasks, learning without sharing data across clients, learning to adapt to distribution shifts, and continual learning from a stream of tasks. Finally, the paper explores the application of meta-learning to real-world problems and provides an overview of the landscape of promising frontiers and yet-to-be-conquered challenges that lie ahead. Section <ref> focuses on these challenges, shedding light on the most pressing questions and future research opportunities.
§ BASIC NOTATIONS AND DEFINITIONS
In this section, we introduce some simple notations which will be used throughout the paper and provide a formal definition of the term “task" within the scope of this paper.
We use θ (and sometimes also ϕ) to represent the set of parameters (weights) of a deep neural network model. 𝒟 = { (x_j, y_j) }_j=1^n denotes a dataset, where inputs x_j are sampled from the distribution p(x) and outputs y_j are sampled from p(y|x). The function ℒ(., .) denotes a loss function, for example, ℒ(θ, 𝒟) represents the loss achieved by the model's parameters θ on the dataset 𝒟. The symbol 𝒯 refers to a task, which is primarily defined by the data-generating distributions p(x) and p(y|x) that define the problem.
In a standard supervised learning scenario, the objective is to optimize the parameters θ by minimizing the loss ℒ(θ, 𝒟), where the dataset 𝒟 is derived from a single task 𝒯, and the loss function ℒ depends on that task. Formally, in this setting, a task 𝒯_i is a triplet 𝒯_i ≜{ p_i(x), p_i(y|x), ℒ_i } that includes task-specific data-generating distributions p_i(x) and p_i(y|x), as well as a task-specific loss function ℒ_i. The goal is to learn a model that performs well on data sampled from task 𝒯_i. In a more challenging setting, we consider learning from multiple tasks {𝒯_i }_i=1^T, which involves (a dataset of) multiple datasets {𝒟_i }_i=1^T.
In this scenario, a set of training tasks is used to learn a model that performs well on test tasks. Depending on the specific setting, a test task can either be sampled from the training tasks or completely new, never encountered during the training phase.
In general, tasks can differ in various ways depending on the application. For example, in image recognition, different tasks can involve recognizing handwritten digits or alphabets from different languages <cit.>, while in natural language processing, tasks can include sentiment analysis <cit.>, machine translation <cit.>, and chatbot response generation <cit.>. Tasks in robotics can involve training robots to achieve different goals <cit.>, while in automated feedback generation, tasks can include providing feedback to students on different exams <cit.>. It is worth noting that tasks can share structures, even if they appear unrelated. For example, the laws of physics underlying real data, the language rules underlying text data, and the intentions of people all share common structures that enable models to transfer knowledge across seemingly unrelated tasks.
§ FROM MULTITASK AND TRANSFER TO META-LEARNING
Meta-learning, multitask learning, and transfer learning encompass different approaches aimed at learning across multiple tasks. Multitask learning aims to improve performance on a set of tasks by learning them simultaneously. Transfer learning fine-tunes a pre-trained model on a new task with limited data. In contrast, meta-learning acquires useful knowledge from past tasks and leverages it to learn new tasks more efficiently. In this section, we transition from discussing “multitask learning" and “transfer learning" to introducing the topic of “meta-learning".
§.§ Multitask learning problem
As illustrated in Figure <ref> (A), multitask learning (MTL) trains a model to perform multiple related tasks simultaneously, leveraging shared structure across tasks, and improving performance compared to learning each task individually. In this setting, there is no distinction between training and test tasks, and we refer to them as {𝒯_i}_i=1^T.
One common approach in MTL is hard parameter sharing, where the model parameters θ are split into shared θ^sh and task-specific θ^i parameters. These parameters are learned simultaneously through an objective function that takes the form:
min_θ^sh, θ^1, …, θ^T∑_i=1^T w_i ℒ_i({θ^sh, θ^i }, 𝒟_i),
where w_i can weight tasks differently. This approach is often implemented using a multi-headed neural network architecture, where a shared encoder (parameterized by θ^sh) is responsible for feature extraction. This shared encoder subsequently branches out into task-specific decoding heads (parameterized by θ^i) dedicated to individual tasks 𝒯_i <cit.>.
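A minimal PyTorch sketch of hard parameter sharing is given below: a shared encoder plays the role of θ^sh, one head per task plays the role of θ^i, and the joint objective is the weighted sum of task losses from the equation above. Layer sizes, task weights w_i, and loss functions are placeholders.

import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    def __init__(self, in_dim, hidden_dim, out_dims):
        super().__init__()
        # Shared encoder: theta_sh
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        # One task-specific head per task: theta_i
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, d) for d in out_dims])

    def forward(self, x, task_idx):
        return self.heads[task_idx](self.encoder(x))

def mtl_loss(model, batches, loss_fns, weights):
    """Joint objective: sum_i w_i * L_i({theta_sh, theta_i}, D_i)."""
    total = 0.0
    for i, ((x, y), loss_fn, w) in enumerate(zip(batches, loss_fns, weights)):
        total = total + w * loss_fn(model(x, i), y)
    return total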
Soft parameter sharing is another approach in MTL that encourages parameter similarity across task-specific models using regularization penalties <cit.>. In this approach, each task typically has its own model with its own set of parameters θ^i, while the shared parameters set θ^sh can be empty. The objective function is similar to that of hard parameter sharing, but with an additional regularization term that controls the strength of parameter sharing across tasks. The strength of regularization is determined by the hyperparameter λ. In the case of L_2 regularization, the objective function is given by:
min_θ^sh, θ^1, …, θ^T∑_i=1^T w_i ℒ_i({θ^sh, θ^i }, 𝒟_i) + λ∑_i=1^T∑_i'=1^T‖θ^i - θ^i'‖_2.
However, soft parameter sharing can be more memory-intensive as separate sets of parameters are stored for each task, and it requires additional design decisions and hyperparameters.
Another approach to sharing parameters is to condition a single model on a task descriptor z_i that contains task-specific information used to modulate the network's computation. The task descriptor z_i can be a simple one-hot encoding of the task index or a more complex task specification, such as language description or user attributes. When a task descriptor is provided, it is used to modulate the weights of the shared network with respect to the task at hand. Through this modulation mechanism, the significance of the shared features is determined based on the particular task, enabling the learning of both shared and task-specific features in a flexible manner. Such an approach grants fine-grained control over the adjustment of the network's representation, tailoring it to each individual task.
Various methods for conditioning the model on the task descriptor are described in <cit.>. More complex methods are also provided in <cit.>.
Choosing the appropriate approach for parameter sharing, determining the level of the network architecture at which to share parameters, and deciding on the degree of parameter sharing across tasks are all design decisions that depend on the problem at hand. Currently, these decisions rely on intuition and knowledge of the problem, making them more of an art than a science, similar to the process of tuning neural network architectures. Moreover, multitask learning presents several challenges, such as determining which tasks are complementary, particularly in scenarios with a large number of tasks, as in <cit.>. Interested readers can find a more comprehensive discussion of multitask learning in <cit.>.
In summary, multitask learning aims to learn a set of T tasks {𝒯_i }_i=1^T at once. Even though the model can generalize to new data from these T tasks, it might not be able to handle a completely new task that it has not been trained on. This is where transfer learning and meta-learning become more relevant.
§.§ Transfer learning via fine-tuning
Transfer learning is a valuable technique that allows a model to leverage representations learned from one or more source tasks to solve a target task. As illustrated in Figure <ref> (B), the main goal is to use the knowledge learned from the source task(s) 𝒯_a to improve the performance of the model on a new task, usually referred to as the target task 𝒯_b, especially when the target task dataset 𝒟_b is limited. In practice, the source task data 𝒟_a is often inaccessible, either because it is too expensive to obtain or too large to store.
One common approach for transfer learning is fine-tuning, which involves starting with a model that has been pre-trained on the source task dataset 𝒟_a. The parameters of the pre-trained model, denoted as θ, are then fine-tuned on the training data 𝒟_b from the target task 𝒯_b using gradient descent or any other optimizer for several optimization steps. An example of the fine-tuning process for one gradient descent step is expressed as follows:
ϕ←θ - α∇_θℒ(θ, 𝒟_b),
where ϕ denotes the parameters fine-tuned for task 𝒯_b, and α is the learning rate.
Models with pre-trained parameters θ are often available online, including models pre-trained on large datasets such as ImageNet for image classification <cit.> and language models like BERT <cit.>, PaLM <cit.>, LLaMA <cit.>, and GPT-4 <cit.>, trained on large text corpora. Models pre-trained on other large and diverse datasets or using unsupervised learning techniques, as discussed in section <ref>, can also be used as a starting point for fine-tuning.
However, as discussed in <cit.>, it is crucial to avoid destroying initialized features when fine-tuning. Some design choices, such as using a smaller learning rate for earlier layers, freezing earlier layers and gradually unfreezing, or re-initializing the last layer, can help to prevent this issue. Recent studies such as <cit.> show that fine-tuning the first or middle layers can sometimes work better than fine-tuning the last layers, while others recommend a two-step process of training the last layer first and then fine-tuning the entire network <cit.>. More advanced approaches, such as STILTs <cit.>, propose an intermediate step of further training the model on a labeled task with abundant data to mitigate the potential degradation of pre-trained features.
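As an illustration of fine-tuning with two of the design choices mentioned above (a smaller learning rate for the pre-trained body and a re-initialized last layer), the following hedged PyTorch sketch adapts an ImageNet-pre-trained ResNet-18 to a hypothetical 10-class target task 𝒯_b; the learning rates and class count are arbitrary placeholders.

import torch
import torch.nn as nn
import torchvision.models as models

# Pre-trained theta from the source task; last layer re-initialized for T_b
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 10)

optimizer = torch.optim.SGD([
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc")],
     "lr": 1e-4},                                   # smaller LR for the pre-trained body
    {"params": model.fc.parameters(), "lr": 1e-2},  # larger LR for the new head
], momentum=0.9)

def finetune_step(x, y):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()    # phi <- theta - alpha * grad L(theta, D_b)
    return loss.item()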
In <cit.>, it was demonstrated that transfer learning via fine-tuning may not always be effective, particularly when the target task dataset is very small or very different from the source tasks. To investigate this, the authors fine-tuned a pre-trained universal language model on specific text corpora corresponding to new tasks using varying numbers of training examples. Their results showed that starting with a pre-trained model outperformed training from scratch on the new task. However, when the size of the new task dataset was very small, fine-tuning on such a limited number of examples led to poor generalization performance. To address this issue, meta-learning can be used to learn a model that can effectively adapt to new tasks with limited data by leveraging prior knowledge from other tasks. In fact, meta-learning is particularly useful for learning new tasks from very few examples, and we will discuss it in more detail in the remainder of this paper.
§.§ Meta-learning problem
Meta-learning (or learning to learn) is a field that aims to surpass the limitations of traditional transfer learning by adopting a more sophisticated approach that explicitly optimizes for transferability.
As discussed in section <ref>, traditional transfer learning involves pre-training a model on source tasks and fine-tuning it for a new task. In contrast, meta-learning trains a network to efficiently learn or adapt to new tasks with only a few examples. Figure <ref> (C) illustrates this approach, where at meta-training time we learn to learn tasks, and at meta-test time we learn a new task efficiently.
During the meta-training phase, prior knowledge enabling efficient learning of new tasks is extracted from a set of training tasks {𝒯_i }_i=1^T. This is achieved by using a meta-dataset consisting of multiple datasets {𝒟_i }_i=1^T, each corresponding to a different training task. At meta-test time, a small training dataset 𝒟_new is observed from a completely new task 𝒯_new and used in conjunction with the prior knowledge to infer the most likely posterior parameters. As in transfer learning, accessing prior tasks at meta-test time is impractical. Although the datasets {𝒟_i }_i come from different data distributions (since they come from different tasks {𝒯_i }_i), it is assumed that the tasks themselves (both for training and testing) are drawn i.i.d. from an underlying task distribution p(𝒯), implying some similarities in the task structure. This assumption ensures the effectiveness of meta-learning frameworks even when faced with limited labeled data.
Moreover, the more tasks that are available for meta-training, the better the model can learn to adapt to new tasks, just as having more data improves performance in traditional machine learning.
In the next section, we provide a more formal definition of meta-learning and various approaches to it.
§ META-LEARNING METHODS
To gain a unified understanding of the meta-learning problem, we can draw an analogy to the standard supervised learning setting. In the latter, the goal is to learn a set of parameters ϕ for a base model h_ϕ (e.g., a neural network parametrized by ϕ), which maps input data x ∈𝒳 to the corresponding output y ∈𝒴 as follows:
h_ϕ: 𝒳→𝒴, x ↦ y = h_ϕ(x).
To accomplish this, a typically large training dataset 𝒟 = { (x_j, y_j) }_j=1^n specific to a particular task 𝒯 is used to learn ϕ.
In the meta-learning setting, the objective is to learn prior knowledge, which consists of a set of meta-parameters θ, for a procedure ℱ_θ(𝒟_i^tr, x^ts). This procedure uses θ to efficiently learn from (or adapt to) a small training dataset 𝒟_i^tr = { (x_k, y_k) }_k=1^K from a task 𝒯_i, and then make accurate predictions on unlabeled test data x^ts from the same task 𝒯_i.
As we will see in the following sections, ℱ_θ is typically composed of two functions:
(1) a meta-learner f_θ(.) that produces task-specific parameters ϕ_i ∈Φ from 𝒟_i^tr∈𝒳^K, and (2) a base model h_ϕ_i(.) that predicts outputs corresponding to the data in x^ts:
f_θ: 𝒳^K →Φ, 𝒟_i^tr↦ϕ_i = f_θ(𝒟_i^tr), h_ϕ_i: 𝒳→𝒴, x ↦ y = h_ϕ_i(x).
Note that the process of obtaining task-specific parameters ϕ_i = f_θ(𝒟_i^tr) is often referred to as “adaptation" in the literature, as it adapts to the task 𝒯_i using a small amount of data while leveraging the prior knowledge summarized in θ.
The objective of meta-training is to learn the set of meta-parameters θ. This is accomplished by using a meta-dataset {𝒟_i }_i=1^T, which consists of a dataset of datasets, where each dataset 𝒟_i = { (x_j, y_j) }_j=1^n is specific to a task 𝒯_i.
The unified view of meta-learning presented here is beneficial because it simplifies the meta-learning problem by reducing it to the design and optimization of ℱ_θ. Moreover, it facilitates the categorization of the various meta-learning approaches into three categories: black-box meta-learning methods, optimization-based meta-learning methods, and distance metric-based meta-learning methods (as discussed in <cit.>). An overview of these categories is provided in the subsequent sections.
§.§ Black-box meta-learning methods
Black-box meta-learning methods represent f_θ as a black-box neural network that takes the entire training dataset, 𝒟_i^tr, and predicts task-specific-parameters, ϕ_i. These parameters are then used to parameterize the base network, h_ϕ_i, and make predictions for test data-points, y^ts = h_ϕ_i(x^ts). The architecture of this approach is shown in Figure <ref>. The meta-parameters, θ, are optimized as shown in Equation <ref>, and a general algorithm for these kinds of black-box methods is outlined in Algorithm <ref>.
min_θ∑_𝒯_iℒ( ϕ_i, 𝒟_i^ts ), where ϕ_i = f_θ(𝒟_i^tr).
However, this approach faces a major challenge: outputting all the parameters ϕ_i of the base network h_ϕ_i is not scalable and is impractical for large-scale models. To overcome this issue, black-box meta-learning methods, such as MANN <cit.> and SNAIL <cit.>, only output sufficient statistics instead of the complete set of parameters of the base network. These methods allow f_θ to output a low-dimensional vector z_i that encodes contextual task information, rather than a full set of parameters ϕ_i. In this case, ϕ_i consists of { z_i, θ_h }, where θ_h denotes the trainable parameters of the network h_ϕ_i. The base network h_ϕ_i is modulated with task descriptors by using various techniques for conditioning on task descriptors discussed in section <ref>.
Several black-box meta-learning methods adopt different neural network architectures to represent f_θ. For instance, the methods described in <cit.> use LSTMs or architectures with augmented memory capacities, such as Neural Turing Machines, while others, like Meta Networks <cit.>, employ external memory mechanisms. SNAIL <cit.> defines meta-learner architectures that leverage temporal convolutions to aggregate information from past experience and attention mechanisms to pinpoint specific pieces of information. Alternatively, some methods, such as the one proposed in <cit.>, use a feedforward-plus-averaging strategy: each data-point in 𝒟_i^tr = {(x_j, y_j)}_j=1^K is fed through a neural network to produce a representation r_j, and these representations are then averaged to create a task representation z_i = 1/K∑_j=1^K r_j. This strategy may be more effective than using a recurrent model such as an LSTM, as it does not rely on the assumption of temporal relationships between data-points in 𝒟_i^tr.
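A minimal PyTorch sketch of the feedforward-plus-averaging variant is shown below: each support pair (x_j, y_j) is encoded, the representations are averaged into a task vector z_i, and the base network h is conditioned on z_i by simple concatenation (one of the conditioning options discussed earlier). All dimensions are placeholders, and y_support is assumed to be a one-hot or real-valued label tensor.

import torch
import torch.nn as nn

class BlackBoxMetaLearner(nn.Module):
    def __init__(self, x_dim, y_dim, z_dim, out_dim, hidden=128):
        super().__init__()
        # f_theta: encodes each (x_j, y_j) support pair, then averages into z_i
        self.pair_encoder = nn.Sequential(nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
                                          nn.Linear(hidden, z_dim))
        # Base model conditioned on the task representation z_i via concatenation
        self.base = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, out_dim))

    def forward(self, x_support, y_support, x_query):
        r = self.pair_encoder(torch.cat([x_support, y_support], dim=-1))  # (K, z_dim)
        z = r.mean(dim=0, keepdim=True)                                   # task vector z_i
        z = z.expand(x_query.shape[0], -1)
        return self.base(torch.cat([x_query, z], dim=-1))                 # predictions y^ts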
Black-box meta-learning methods are expressive, versatile, and easy to combine with various learning problems, including classification, regression, and reinforcement learning. However, they require complex architectures for the meta-learner f_θ, making them computationally demanding and data-inefficient. As an alternative, one can represent ϕ_i = f_θ(𝒟_i^tr) as an optimization procedure instead of a neural network. The next section explores methods that utilize this approach.
§.§ Optimization-based meta-learning methods
Optimization-based meta-learning offers an alternative to the black-box approach, where the meta-learner f_θ is an optimization procedure like gradient descent, rather than a black-box neural network. The goal of optimization-based meta-learning is to acquire a set of meta-parameters θ that are easy to learn via gradient descent and to fine-tune on new tasks. Most optimization-based techniques do so by defining meta-learning as a bi-level optimization problem. At the inner level, f_θ produces task-specific parameters ϕ_i using 𝒟_i^tr, while at the outer level, the initial set of meta-parameters θ is updated by optimizing the performance of h_ϕ_i on the test set of the same task. This is shown in Figure <ref> and in Algorithm <ref> in case f_θ is a gradient-based optimization. The meta-parameters θ can represent inner optimizers <cit.>, neural network architectures <cit.>, other network hyperparameters <cit.>, or the initialization of the base model h(.) <cit.>. The latter approach is similar to transfer learning via fine-tuning (cf. section <ref>), but instead of using a pre-trained θ that may not be transferable to new tasks, we learn θ to explicitly optimize for transferability.
Model-Agnostic Meta-Learning (MAML) <cit.> is one of the earliest and most popular optimization-based meta-learning methods. The main idea behind MAML is to learn a set of initial neural network's parameters θ that can easily be fine-tuned for any task using gradient descent with only a few steps. During the meta-training phase, MAML minimizes the objective defined as follows:
min_θ∑_𝒯_iℒ( ϕ_i, 𝒟_i^ts ), where ϕ_i = θ - α∇_θℒ(θ, 𝒟_i^tr).
Note that in Equation <ref>, the task-specific parameters ϕ_i are obtained through a single gradient descent step from θ, although in practice, a few more gradient steps are usually used for better performance.
As a result, MAML produces a model initialization θ that can be quickly adapted to new tasks with a small number of training examples. Algorithm <ref> can be viewed as a simplified illustration of MAML, where θ represents the parameters of a neural network. This is similar to Algorithm <ref> but with ϕ_i obtained through optimization.
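The bi-level structure of MAML can be sketched in a few lines of PyTorch. The snippet below performs one meta-update with a single inner gradient step, keeping the inner step inside the computation graph so that the outer gradient flows through it; `model_forward(params, x)` is an assumed functional forward pass over a list of parameter tensors, `params` is a list of leaf tensors with requires_grad=True, and `meta_opt` is any optimizer over them.

import torch

def maml_outer_step(params, tasks, loss_fn, model_forward, inner_lr, meta_opt):
    """One meta-update over a batch of tasks; params is the list of tensors theta."""
    meta_opt.zero_grad()
    meta_loss = 0.0
    for x_tr, y_tr, x_ts, y_ts in tasks:
        # Inner step: phi_i = theta - alpha * grad_theta L(theta, D_i^tr)
        inner_loss = loss_fn(model_forward(params, x_tr), y_tr)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        phi = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: L(phi_i, D_i^ts), differentiated back to theta
        meta_loss = meta_loss + loss_fn(model_forward(phi, x_ts), y_ts)
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()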
During meta-test time, a small dataset 𝒟_new^tr is observed from a new task 𝒯_new∼ p(𝒯). The goal is to use the prior knowledge encoded in θ to train a model that generalizes well to new, unseen examples from this task. To achieve this, θ is fine-tuned with a few adaptation steps using ∇_θℒ(θ, 𝒟_new^tr), resulting in task-specific parameters ϕ. These parameters are then used to make accurate predictions on previously unseen input data from 𝒯_new.
MAML can be thought of as a computation graph (as shown in Figure <ref>) with an embedded gradient operator. Interestingly, the components of this graph can be interchanged or replaced with components from the black-box approach. For instance, <cit.> also learned an initialization θ, but adapted θ differently by using a learned network f_w(θ, 𝒟_i^tr, ∇_θℒ) instead of the gradient ∇_θℒ(θ, 𝒟_i^tr):
ϕ_i = θ - α f_w(θ, 𝒟_i^tr, ∇_θℒ)
In <cit.>, the authors investigated the effectiveness of optimization-based meta-learning in generalizing to similar but extrapolated tasks that are outside the original task distribution p(𝒯). The study found that, as task variability increases, black-box meta-learning methods such as SNAIL <cit.> and MetaNet <cit.> acquire less generalizable learning strategies than gradient-based meta-learning approaches like MAML.
However, despite its success, MAML faces some challenges that have motivated the development of other optimization-based meta-learning methods.
One of these challenges is the instability of MAML's bi-level optimization. Fortunately, there are enhancements that can significantly improve the optimization process. For instance, Meta-SGD <cit.> and AlphaMAML <cit.> learn a vector of learning rates α automatically, rather than using a manually set scalar value of α. Other methods, like DEML <cit.>, ANIL <cit.>, and BOIL <cit.>, suggest optimizing only a subset of the parameters during adaptation. Additionally, MAML++ <cit.> proposes various modifications to stabilize the optimization process and further improve the generalization performance. Moreover, Bias-transformation <cit.> and CAVIA <cit.> introduce context variables for increased expressive power, while <cit.> enforces a well-conditioned parameter space based on the concepts of the condition number <cit.>.
Another significant challenge in MAML is the computationally expensive process of backpropagating through multiple gradient adaptation steps. To overcome this challenge, first-order alternatives to MAML such as FOMAML and Reptile have been introduced <cit.>. For example, Reptile aims to find an initialization θ that is close to each task's optimal parameters. Another approach is to optimize only the parameters of the last layer. For instance, <cit.> and <cit.> perform a closed-form or convex optimization on top of meta-learned features. Another solution is iMAML <cit.>, which computes the full meta-gradient without differentiating through the optimization path, using the implicit function theorem.
§.§ Meta-learning via distance metric learning
In the context of low data regimes, such as in few-shot learning, simple non-parametric methods such as Nearest Neighbors <cit.> can be effective. However, black-box and optimization-based meta-learning approaches discussed so far in sections <ref> and <ref> have focused on using parametric base models, such as neural networks. In this section we discuss meta-learning approaches that employ a non-parametric learning procedure. The key concept is to use parametric meta-learners to produce effective non-parametric learners, thus eliminating the need for second-order optimization, as required by several methods discussed in section <ref>.
Suppose we are given a small training dataset 𝒟_i^tr that presents a 1-shot-N-way classification problem, i.e., N classes with only one labeled data-point per class, along with a test data-point x^ts. To classify x^ts, a Nearest Neighbor learner compares it with each training data-point in 𝒟_i^tr. However, determining an effective space and distance metric for this comparison can be challenging. For example, using the L_2 distance in pixel space for image data may not yield satisfactory results <cit.>. To overcome this, a distance metric can be derived by learning how to compare instances using meta-training data.
To learn an appropriate distance metric for comparing instances, a Siamese network <cit.> can be trained to solve a binary classification problem that predicts whether two images belong to the same class. During meta-test time, each image in 𝒟_i^tr is compared with the test image x^ts to determine whether they belong to the same class or not. However, there is a nuance due to the mismatch between the binary classification problem during meta-training and the N-way classification problem during meta-testing. Matching Networks, introduced in <cit.>, address this by learning an embedding space with a network f_θ and using Nearest Neighbors in the learned space, as shown in Figure <ref>. The network is trained end-to-end to ensure that meta-training is consistent with meta-testing. Algorithm <ref> outlines the meta-training process used by Matching Networks. It is similar to Algorithms <ref> and <ref>, except that the base model is non-parametric, so there is no ϕ_i (see lines <ref> and <ref>).
However, Matching Networks are specifically designed for 1-shot classification and cannot be directly applied to K-shot classification problems (where there are K labeled samples per class). To address this issue, other methods, such as Prototypical Networks <cit.>, have been proposed. Prototypical Networks aggregate class information to create a prototypical embedding, as illustrated in Figure <ref>. In Prototypical Networks, line <ref> of Algorithm <ref> is replaced with:
p_θ(y=l | x) = exp( -‖ f_θ(x) - c_l ‖ )/∑_l'exp( -‖ f_θ(x) - c_l'‖ ) ,
where c_l is the mean embedding of all the samples in the l-th class, i.e., c_l = 1/K∑_(x, y) ∈𝒟_i^tr1(y=l) f_θ(x).
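One Prototypical-Network episode following the equation above can be sketched as follows; `embed` is any encoder f_θ, and the negative (unsquared) Euclidean distance is used to match the displayed equation, although the original work uses the squared distance.

import torch
import torch.nn.functional as F

def prototypical_loss(embed, x_support, y_support, x_query, y_query, n_way):
    """Cross-entropy of the softmax over negative distances to class prototypes."""
    z_support, z_query = embed(x_support), embed(x_query)        # (N*K, d), (Q, d)
    prototypes = torch.stack([z_support[y_support == c].mean(dim=0)
                              for c in range(n_way)])            # (N, d): class means c_l
    dists = torch.cdist(z_query, prototypes)                     # Euclidean distances
    log_p = F.log_softmax(-dists, dim=1)                         # p(y = l | x) as in the equation
    return F.nll_loss(log_p, y_query)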
While methods such as Siamese networks, Matching Networks, and Prototypical Networks can perform few-shot classification by embedding data and applying Nearest Neighbors <cit.>, they may not be sufficient to capture complex relationships between data-points. To address this, alternative approaches have been proposed. RelationNet <cit.> introduces a non-linear relation module that can reason about complex relationships between embeddings. Garcia et al. <cit.> propose to use graph neural networks to perform message passing on embeddings, allowing for the capture of more complex dependencies. Finally, Allen et al. <cit.> extend Prototypical Networks to learn an infinite mixture of prototypes, which improves the model's ability to represent the data distribution.
§.§ Hybrid approaches
Black-box, optimization-based, and distance metric-based meta-learning approaches define ℱ_θ(𝒟_i^tr, x^ts) differently, but these approaches are not mutually exclusive and they can be combined in various ways. For instance, in <cit.>, gradient descent is applied while conditioning on the data, allowing the model to modulate the feature representations and capture inter-class dependencies. In <cit.>, LEO (Latent Embedding Optimization) combines optimization-based meta-learning with a latent embedding produced by the RelationNet embedding proposed in <cit.>. The parameters of the model are first conditioned on the input data and then further adapted through gradient descent. In <cit.>, the strength of both MAML and Prototypical Networks are combined to form a hybrid approach called Proto-MAML. This approach exploits the flexible adaptation of MAML, while initializing the last layer with ProtoNet to provide a simple inductive bias that is effective for very-few-shot learning. Similarly, <cit.> proposes a model where the meta-learner operates using an optimization-based meta-model, while the base learner exploits a metric-based approach (either Matching Network or Prototypical Network). The distance metrics used by the base learner can better adapt to different tasks thanks to the weight prediction from the meta-learner.
In summary, researchers have explored combining black-box, optimization-based, and distance metric-based meta-learning approaches to take advantage of their individual strengths. These combined approaches aim to improve performance, adaptability, and generalization in few-shot learning tasks by integrating different methodologies.
§ ADVANCED META-LEARNING TOPICS
The field of meta-learning has seen rapid development in recent years, with numerous methods proposed for learning to learn from a few examples. In this section, we delve into advanced topics in meta-learning that extend the meta-learning paradigm to more complex scenarios. We explore meta-learning from multi-modal task distributions, the challenge of out-of-distribution tasks, and unsupervised meta-learning. Additionally, we examine the relationship between meta-learning and personalized federated learning, domain adaptation/generalization, as well as the intersection between meta-learning and continual learning. By delving into these advanced topics, we can gain a deeper understanding of the potential of meta-learning and its applications in more complex real-world scenarios.
§.§ Meta-learning from multimodal task distributions
Meta-learning methods have traditionally focused on optimizing performance within a unimodal task distribution p(𝒯), assuming that all tasks are closely related and share similarities within a single application domain. However, recent studies have highlighted the limitations of standard meta-learning approaches when faced with significantly different tasks <cit.>. In real-world scenarios, tasks are often diverse and sampled from a more complex task distribution with multiple unknown modes. The performance of most meta-learning approaches tends to deteriorate as the dissimilarity among tasks increases, indicating that a globally shared set of meta-parameters θ may not adequately capture the heterogeneity among tasks and enable fast adaptation.
To address this challenge, MMAML <cit.> builds upon the standard MAML approach by estimating the mode of tasks sampled from a multimodal task distribution p(𝒯) and adjusting the initial model parameters accordingly. Another approach proposed in <cit.> involves learning a meta-regularization conditioned on additional task-specific information. However, obtaining such additional task information may not always be feasible.
Alternatively, some methods propose learning multiple model initializations θ_1, θ_2, ⋯, θ_M and selecting the most suitable one for each task, leveraging clustering techniques applied in either the task-space or parameter-space <cit.>, or relying on the output of an additional network. CAVIA <cit.> partitions the initial model parameters into shared parameters across all tasks and task-specific context parameters, while LGM-Net <cit.> directly generates classifier weights based on an encoded task representation.
A series of related works (but outside of the meta-learning field) aim to build a “universal representation" that encompasses a robust set of features capable of achieving strong performance across multiple datasets (or modes) <cit.>. This representation is subsequently adapted to individual tasks in various ways. However, these approaches are currently limited to classification problems and do not leverage meta-learning techniques to efficiently adapt to new tasks.
A more recent line of research focuses on cross-domain meta-learning, where knowledge needs to be transferred from tasks sampled from a potentially multimodal distribution p(𝒯) to target tasks sampled from a different distribution. One notable study, BOIL <cit.>, reveals that the success of meta-learning methods, such as MAML, can be attributed to large changes in the representation during task learning. The authors emphasize the importance of updating only the body (feature extractor) of the model and freezing the head (classifier) during the adaptation phase for effective cross-domain adaptation. Building on this insight, DAML <cit.> introduces tasks from both seen and pseudo-unseen domains during meta-training to obtain domain-agnostic initial parameters capable of adapting to novel classes in unseen domains. In <cit.>, the authors propose a transferable meta-learning algorithm with a meta task adaptation to minimize the domain divergence and thus facilitate knowledge transfer across domains. To further improve the transferability of cross-domain knowledge, <cit.> and <cit.> propose to incorporate semi-supervised techniques into the meta-learning framework. Specifically, <cit.> combines the representation power of large pre-trained language models (e.g., BERT <cit.>) with the generalization capability of prototypical networks enhanced by SMLMT <cit.> to achieve effective generalization and adaptation to tasks from new domains. In contrast, <cit.> promotes the idea of task-level self-supervision by leveraging multiple views or augmentations of tasks.
§.§ Meta-learning & personalized federated learning
Federated learning (FL) is a distributed learning paradigm where multiple clients collaborate to train a shared model while preserving data privacy by keeping their data locally stored. FedAvg <cit.> is a pioneering method that combines local stochastic gradient descent on each client with model averaging on a central server. This approach performs well when local data across clients is independent and identically distributed (IID). However, in scenarios with heterogeneous (non-IID) data distributions, regularization techniques <cit.> have been proposed to improve local learning.
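As a concrete illustration, the sketch below shows one communication round of a FedAvg-style server. The `local_sgd` routine and the in-memory `client_datasets` list are illustrative placeholders (not part of any specific library), and `global_params` is assumed to be a flattened NumPy parameter vector; this is a minimal sketch of the idea, not a production implementation.

```python
import numpy as np

def fedavg_round(global_params, client_datasets, local_sgd, num_sampled=10, local_epochs=1):
    """One FedAvg communication round: broadcast the global model, run local SGD on a
    random subset of clients, and return the dataset-size-weighted average of the results."""
    rng = np.random.default_rng()
    sampled = rng.choice(len(client_datasets), size=num_sampled, replace=False)
    local_models, sizes = [], []
    for c in sampled:
        data = client_datasets[c]      # stays on the client in a real deployment
        local_models.append(local_sgd(global_params.copy(), data, epochs=local_epochs))
        sizes.append(len(data))
    weights = np.asarray(sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, local_models))   # server-side model averaging
```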
Personalized federated learning (PFL) is an alternative approach that aims to develop customized models for individual clients while leveraging the collaborative nature of FL. Popular PFL methods include L2GD <cit.>, which combines local and global models, as well as multi-task learning methods like pFedMe <cit.>, Ditto <cit.>, and FedPAC <cit.>. Clustered or group-based FL approaches <cit.> learn multiple group-based global models. In contrast, meta-learning-based methods interpret PFL as a meta-learning algorithm, where personalization to a client aligns with adaptation to a task <cit.>. Notably, various combinations of MAML-type methods with FL architectures have been explored in <cit.> to find an initial shared point that performs well after personalization to each client's local dataset. Additionally, the authors of <cit.> proposed ARUBA, a meta-learning algorithm inspired by online convex optimization, which enhances the performance of FedAvg.
To summarize, there is a growing focus on addressing FL challenges in non-IID data settings. The integration of meta-learning has shown promising outcomes, leading to enhanced personalization and performance in PFL methods.
§.§ Unsupervised meta-learning with tasks construction
In meta-training, constructing tasks typically relies on labeled data. However, real-world scenarios often involve mostly, or only, unlabeled data, requiring techniques that leverage unlabeled data to learn valuable feature representations that can transfer to downstream tasks with limited labeled data. One alternative to address this is through “self-supervised learning" (also known as “unsupervised pre-training") <cit.>. This involves training a model on a large unlabeled dataset, as depicted in Figure <ref>, to capture informative features. Contrastive learning <cit.> is commonly used in this context, aiming to learn features by bringing similar examples closer together while pushing differing examples apart. The learned features can then be fine-tuned on a target task 𝒯_new with limited labeled data 𝒟_new^tr, leading to improved performance compared to training from scratch. Another promising alternative is “unsupervised meta-learning," which aims to automatically construct diverse and structured training tasks from unlabeled data. These tasks can then be used with any meta-learning algorithm, such as MAML <cit.> and ProtoNet <cit.>. In this section, we will explore methods for meta-training without predefined tasks and investigate strategies for automatically constructing tasks for meta-learning.
The method proposed in <cit.> constructs tasks based on unsupervised representation learning methods such as BiGAN <cit.> or DeepCluster <cit.> and clusters the data in the embedding space to assign pseudo-labels and construct tasks. Other methods such as UMTRA <cit.> and LASIUM <cit.> generate synthetic samples using image augmentations or pre-trained generative networks. In particular, the authors in <cit.> construct a task 𝒯_i for a 1-shot N-way classification problem by creating a support set 𝒟_i^tr and a query set 𝒟_i^ts as follows:
* Randomly sample N images and assign labels 1, …, N, storing them in 𝒟_i^tr.
* Augment[Various augmentation techniques, like flipping, cropping, or reflecting an image, typically preserve its label. Likewise, nearby image patches or adjacent video frames share similar characteristics and are therefore assigned the same label.] each image in 𝒟_i^tr, and store the resulting (augmented) images in 𝒟_i^ts.
Such augmentations can be based on domain knowledge or learned augmentation strategies like those proposed in <cit.>.
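A minimal sketch of this 1-shot N-way construction is given below; `augment` stands for any label-preserving augmentation (e.g., random crop plus flip) and is a placeholder, not a specific library call.

```python
import random

def make_unsupervised_task(unlabeled_images, N, augment, Q=1):
    """Build one 1-shot N-way task from unlabeled data: N random images get pseudo-labels
    0..N-1 (support set), and their augmented copies form the query set."""
    support, query = [], []
    for label, idx in enumerate(random.sample(range(len(unlabeled_images)), N)):
        x = unlabeled_images[idx]
        support.append((x, label))                            # D_i^tr
        query.extend((augment(x), label) for _ in range(Q))   # D_i^ts
    return support, query
```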
In principle, task construction techniques can be applied beyond image-based augmentation. For instance, temporal aspects can be leveraged by incorporating time-contrastive learning on videos, as demonstrated in <cit.>. Another approach is offered by Viewmaker Networks <cit.>, which learn augmentations that yield favorable outcomes not only for images but also for speech and sensor data.
Contrary to these works focusing on generating pseudo tasks, Meta-GMVAE <cit.> and Meta-SVEBM <cit.> address the problem by using variational autoencoders <cit.> and energy-based models <cit.>, respectively.
However, these methods are limited by the pseudo-labeling strategies used to create tasks: they rely on the quality of the generated samples and cannot scale to large-scale datasets.
To overcome this limitation, recent approaches have investigated the possibility of using self-supervised learning techniques to improve unsupervised meta-learning methods. In particular, in <cit.>, the relationship between contrastive learning and meta-learning is explored, demonstrating that established meta-learning methods can achieve comparable performance to contrastive learning methods, and that representations transfer similarly well to downstream tasks. Inspired by these findings, the authors in <cit.> integrate contrastive learning in a two-stage training paradigm consisting of sequential pre-training and meta-training stages. Another work <cit.> interprets a meta-learning problem as a set-level problem and maximizes the agreement between augmented sets using SimCLR <cit.>. Finally, PsCo <cit.> builds upon MoCo <cit.> by progressively improving pseudo-labeling and constructing diverse tasks in an online manner. These findings indicate the potential for leveraging existing advances in meta-learning to improve contrastive learning (and vice-versa).
To meta-learn with unlabeled text data, some methods use language modeling, as shown in <cit.> for GPT-3. Here, the support set 𝒟_i^tr consists of a sequence of characters, and the query set 𝒟_i^ts consists of the subsequent sequence of characters. However, this approach may not be suitable for text classification tasks, such as sentiment analysis or identifying political bias.
In <cit.>, an alternative approach (SMLMT) for self-supervised meta-learning for few-shot natural language classification tasks is proposed. SMLMT involves masking out words and classifying the masked word to construct tasks. The process involves: (1) sampling a subset of N unique words and assigning each word a unique ID as its class label, (2) sampling K+Q sentences that contain each of the N words and masking out the corresponding word in each sentence, and (3) constructing the support set 𝒟_i^tr and the query set 𝒟_i^ts using the masked sentences and their corresponding word IDs.
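A schematic version of this construction is sketched below, assuming a precomputed index `word_to_sents` that maps each candidate word to the sentences containing it; this data structure and the naive string masking are illustrative assumptions, not the authors' implementation.

```python
import random

MASK = "[MASK]"

def make_smlmt_task(word_to_sents, N, K, Q):
    """Build one N-way SMLMT task: each class is a word; examples are sentences with
    that word masked out, split into K support and Q query sentences per class."""
    words = random.sample(sorted(word_to_sents), N)        # N unique words = N class IDs
    support, query = [], []
    for label, w in enumerate(words):
        sents = random.sample(word_to_sents[w], K + Q)     # sentences containing word w
        masked = [s.replace(w, MASK) for s in sents]       # mask out the target word
        support += [(s, label) for s in masked[:K]]
        query  += [(s, label) for s in masked[K:]]
    return support, query
```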
SMLMT (for unsupervised meta-learning) is compared to BERT <cit.>, a method that uses standard self-supervised learning and fine-tuning. SMLMT outperforms BERT on some tasks and achieves at least equal performance on others. Furthermore, Hybrid-SMLMT (semi-supervised meta-learning, which involves meta-learning on constructed tasks and supervised tasks), is compared to MT-BERT <cit.> (multi-task learning on supervised tasks) and LEOPARD <cit.> (an optimization-based meta-learner that uses only supervised tasks). The results show that Hybrid-SMLMT significantly outperforms these other methods.
§.§ Meta-learning & domain adaptation/generalization
Domain shift is a fundamental challenge, where the distribution of the input data changes between the training and test domains. To address this problem, there is a growing interest in utilizing meta-learning techniques for more effective domain adaptation and domain generalization. These approaches aim to enable models to quickly adapt to new domains with limited data or to train robust models that achieve better generalization on domains they have not been explicitly trained on.
§.§.§ Effective domain adaptation via meta-learning
Domain adaptation is a form of transductive transfer learning that leverages source domain(s) p_S(x,y) to achieve high performance on test data from a target domain p_T(x,y). It assumes p_S(y|x) = p_T(y|x) but p_S(x) ≠ p_T(x), treating domains as a particular kind of tasks, with a task 𝒯_i ≜{p_i(x), p_i(y|x), ℒ_i} and a domain d_i ≜{p_i(x), p(y|x), ℒ}. For example, healthcare data from different hospitals with varying imaging techniques or patient demographics can correspond to different domains. Domain adaptation is most commonly achieved via feature alignment as in <cit.> or via translation between domains using CycleGAN <cit.> as in <cit.>. Other approaches focus on aligning the feature distribution of multiple source domains with the target domain <cit.> or they address the multi-target domain adaptation scenario <cit.> by designing models capable of adapting to multiple target domains. However, these methods face limitations when dealing with insufficient labeled data in the source domain or when quick adaptation to new target domains is required. Additionally, they assume the input-output relationship (i.e., p(y|x)) is the same across domains. To solve these problems, some methods <cit.> combine meta-learning with domain adaptation. In particular, ARM <cit.> leverages contextual information extracted from batches of unlabeled data to learn a model capable of adapting to distribution shifts.
§.§.§ Effective domain generalization via meta-learning
Domain generalization enables models to perform well on new and unseen domains without requiring access to their data, as illustrated in Figure <ref>. This is particularly useful in scenarios where access to data is restricted due to real-time deployment requirements or privacy policies. For instance, an object detection model for self-driving cars trained on three types of roads may need to be deployed to a new road without any data from that domain. In contrast to domain adaptation, which requires access to (unlabeled) data from a specific target domain during training to specialize the model, domain generalization belongs to the inductive setting. Most domain generalization methods aim to train neural networks to learn domain-invariant representations that are consistent across domains. For instance, domain adversarial training <cit.> trains the network to make predictions based on features that cannot be distinguished between domains. Another approach is to directly align the representations between domains using similarity metrics, such as in <cit.>. Data augmentation techniques are also used to enhance the diversity of the training data and improve generalization across domains <cit.>.
Another way to improve generalization to various domains is to use meta-learning and apply the episodic training paradigm typical of MAML <cit.>, as in <cit.>. For instance, MLDG <cit.> optimizes a model by simulating the train-test domain shift during the meta-training phase, as sketched below. MetaReg <cit.> proposes to meta-learn a regularization function that improves domain generalization. DADG <cit.> contains a discriminative adversarial learning component to learn a set of general features and a meta-learning-based cross-domain validation component to further enhance the robustness of the classifier.
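For concreteness, the following PyTorch-flavored sketch illustrates the episodic objective used by MLDG-style methods: the source domains are split into meta-train and meta-test parts, and the loss on the held-out domain is evaluated after a virtual gradient step. The helper `functional_forward` (evaluating the model with the "fast" weights) is assumed to be supplied, e.g. via `torch.func` or the `higher` library; this is a simplified illustration under those assumptions, not the exact published algorithm.

```python
import torch

def mldg_step(model, source_domains, loss_fn, functional_forward,
              alpha=0.01, beta=1.0, lr=1e-3):
    """One MLDG-style update: hold one source domain out as meta-test, take a virtual
    gradient step on the meta-train loss, and penalize the held-out loss after that step."""
    meta_train, (x_te, y_te) = source_domains[:-1], source_domains[-1]
    params = [p for p in model.parameters() if p.requires_grad]

    train_loss = sum(loss_fn(model(x), y) for x, y in meta_train)
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    fast_weights = [p - alpha * g for p, g in zip(params, grads)]   # virtual step, kept in graph

    test_loss = loss_fn(functional_forward(model, fast_weights, x_te), y_te)
    (train_loss + beta * test_loss).backward()

    with torch.no_grad():                                           # plain SGD on the real weights
        for p in params:
            p -= lr * p.grad
            p.grad = None
```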
§.§ Meta-learning & continual learning
This section explores the application of meta-learning to continual learning, where learners continually accumulate experience over time to more rapidly acquire new knowledge or skills. Continual learning scenarios can be divided into task-incremental learning, domain-incremental learning, and class-incremental learning, depending on whether task identity is provided at test time or must be inferred by the algorithm <cit.>. In this section, we focus on approaches that specifically address task/class-incremental learning.
Traditionally, meta-learning has primarily focused on scenarios where a batch of training tasks is available. However, real-world situations often involve tasks presented sequentially, allowing for progressive leveraging of past experience. This is illustrated in Figure <ref>, and examples include tasks that progressively increase in difficulty or build upon previous knowledge, or robots learning diverse skills in changing environments.
Standard online learning involves observing tasks in a sequential manner, without any task-specific adaptation or use of past experience to accelerate adaptation.
To tackle this issue, researchers have proposed various approaches, including memory-based methods <cit.>, regularization-based methods <cit.> and dynamic architectural methods <cit.>. However, each of these methods has its own limitations, such as scalability, memory inefficiency, time complexity, or the need for task-specific parameters. Meta-learning has emerged as a promising approach for addressing continual learning. In <cit.>, the authors introduced ANML, a framework that meta-learns an activation-gating function that enables context-dependent selective activation within a deep neural network. This selective activation allows the model to focus on relevant knowledge and avoid catastrophic forgetting. Other approaches such as MER <cit.>, OML <cit.>, and LA-MAML <cit.> use gradient-based meta-learning algorithms to optimize various objectives such as gradient alignment, inner representations, or task-specific learning rates and learn update rules that avoid negative transfer.
These algorithms enable faster learning over time and enhanced proficiency in each new task.
§ OPEN CHALLENGES & OPPORTUNITIES
Meta-learning has been a promising area of research that has shown impressive results in various machine learning domains. However, there are still open challenges that need to be addressed in order to further advance the field. In this section, we discuss some of these challenges and categorize them into three main groups. Addressing these open challenges can lead to significant advances in meta-learning, which could potentially lead to more generalizable and robust machine learning models.
§.§ Addressing fundamental problem assumptions
The first category of challenges pertains to the fundamental assumptions made in meta-learning problems.
One such challenge relates to generalization to out-of-distribution tasks and long-tailed task distributions. Indeed, adaptation becomes difficult when the few-shot tasks observed at meta-test time come from a different task distribution than the ones seen during meta-training. While there have been some attempts to address this challenge, such as in <cit.>, it remains unclear how to solve it in general. Ideas from the domain generalization and robustness literature could provide some hints and potentially be combined with meta-learning to tackle these long-tailed task distributions and out-of-distribution tasks. Possible directions include regularization techniques that prevent the meta-parameters from becoming overly specific to the distribution of the training tasks, or task augmentation techniques that generate synthetic tasks covering a wider range of task variations.
Another challenge in this category involves dealing with the multimodality of data. While the focus has been on meta-training over tasks from a single modality, the reality is that we may have multiple modalities of data to work with. Human beings have the advantage of being able to draw upon multiple modalities, such as visual imagery, tactile feedback, language, and social cues, to create a rich repository of knowledge and make more informed decisions. For instance, we often use language cues to aid our visual decision-making processes. Rather than developing a prior that only works for a single modality, exploring the concept of learning priors across multiple modalities of data is a fascinating area to pursue. Different modalities have different dimensionalities or units, but they can provide complementary forms of information. While some initial works in this direction have been reported, including <cit.>, there is still a long way to go in terms of capturing all of this rich prior information when learning new tasks.
§.§ Providing benchmarks and real-world problems
The second category of challenges is related to providing/improving benchmarks to better reflect real-world problems and challenges.
Meta-learning has shown promise in a diverse set of applications, including few-shot land cover classification <cit.>, few-shot dermatological disease diagnosis <cit.>, automatically providing feedback on student code <cit.>, one-shot imitation learning <cit.>, drug discovery <cit.>, motion prediction <cit.>, and language generation <cit.>, to mention but a few. However, the lack of benchmark datasets that accurately reflect real-world problems with appropriate levels of difficulty and ease of use is a significant challenge for the field. Several efforts have been made towards creating useful benchmark datasets, including Meta-Dataset <cit.>, Meta-Album Dataset <cit.>, NEVIS'22 <cit.>, Meta-World Benchmark <cit.>, Visual Task Adaptation Benchmark <cit.>, Taskonomy Dataset <cit.>, VALUE Benchmark <cit.>, and BIG Bench <cit.>. However, further work is needed to ensure that the datasets are comprehensive and representative of the diversity of real-world problems that meta-learning aims to address.
Some ways with which existing benchmarks can be improved to better reflect real-world problems and challenges in meta-learning are: (1) to increase the diversity and complexity of tasks that are included; (2) to consider more realistic task distributions that can change over time; and (3) to include real-world data that is representative of the challenges faced in real-world applications of meta-learning. For example, including medical data, financial data, time-series data, or other challenging types of data (besides images and text) can help improve the realism and relevance of benchmarks.
Furthermore, developing benchmarks that reflect these more realistic scenarios can help improve the generalization and robustness of algorithms. This ensures that algorithms are tested on a range of scenarios and that they are robust and generalizable across a wide range of tasks. Better benchmarks are essential for progress in machine learning and AI, as they challenge current algorithms to find common structures, reflect real-world problems, and have a significant impact in the real world.
§.§ Improving core algorithms
The last category of challenges in meta-learning is centered around improving the core algorithms.
One major obstacle is the large-scale bi-level optimization problem encountered in popular meta-learning methods such as MAML. The computational and memory costs of such approaches can be significant, and there is a need to make them more practical, particularly for very large-scale problems, like learning effective optimizers <cit.>.
In addition, a deeper theoretical understanding of various meta-learning methods and their performance is critical to driving progress and pushing the boundaries of the field. Such insights can inform and inspire further advancements in the field and lead to more effective and efficient algorithms.
To achieve these goals, several fundamental questions can be explored, including:
(1) Can we develop theoretical guarantees on the sample complexity and generalization performance of meta-learning algorithms? Understanding these aspects can help us design more efficient and effective meta-learning algorithms that require less data or fewer tasks.
(2) Can we gain a better understanding of the optimization landscape of meta-learning algorithms? For instance, can we identify the properties of the objective function that make it easier or harder to optimize? Can we design optimization algorithms that are better suited to the bi-level optimization problem inherent in various meta-learning approaches?
(3) Can we design meta-learning algorithms that can better incorporate task-specific or domain-specific expert knowledge, in a principled way, to learn more effective meta-parameters?
Addressing such questions could enhance the design and performance of meta-learning algorithms, and help us tackle increasingly complex and challenging learning problems.
§ CONCLUSION
In conclusion, the field of artificial intelligence (AI) has witnessed significant advancements in developing specialized systems for specific tasks. However, the pursuit of generality and adaptability in AI across multiple tasks remains a fundamental challenge.
Meta-learning emerges as a promising research area that seeks to bridge this gap by enabling algorithms to learn how to learn. Meta-learning algorithms offer the ability to learn from limited data, transfer knowledge across tasks and domains, and rapidly adapt to new environments. This review paper has explored various meta-learning approaches that have demonstrated promising results in applications with scarce data. Nonetheless, numerous challenges and unanswered questions persist, calling for further investigation.
A key area of focus lies in unifying various fields such as meta-learning, self-supervised learning, domain generalization, and continual learning. Integrating and collaborating across these domains can generate synergistic advancements and foster a more comprehensive approach to developing AI systems. By leveraging insights and techniques from these different areas, we can construct more versatile and adaptive algorithms capable of learning from multiple tasks, generalizing across domains, and continuously accumulating knowledge over time.
This review paper serves as a starting point for encouraging research in this direction. By examining the current state of meta-learning and illuminating the challenges and opportunities, we aim to inspire researchers to explore interdisciplinary connections and contribute to the progress of meta-learning while integrating it with other AI research fields. Through collective efforts and collaboration, we can surmount existing challenges and unlock the full potential of meta-learning to address a broad spectrum of complex problems faced by intelligent systems.
|
http://arxiv.org/abs/2307.05344v1 | 20230711153337 | Classical sampling from noisy Boson Sampling and the negative probabilities | [
"Valery Shchesnovich"
] | quant-ph | [
"quant-ph",
"math-ph",
"math.MP"
] |
Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, Santo André, SP, 09210-170 Brazil
It is known that, by accounting for the multiboson interferences up to a finite order, the output distribution of noisy Boson Sampling, with distinguishability of bosons serving as noise, can be approximately sampled from in a time polynomial in the total number of bosons. The drawback of this approach is that the joint probabilities of completely distinguishable bosons, i.e., those that do not interfere at all, have to be computed as well. In trying to restore the ability to sample from the distinguishable bosons with computation of only the single-boson probabilities, one faces the following issue: the quantum probability factors in a convex-sum expression, if truncated to a finite order of multiboson interference, have, on average, a finite amount of negativity in a random interferometer. The truncated distribution does become a proper one, while allowing for sampling from it in polynomial time, only in a vanishing domain close to the completely distinguishable bosons. Nevertheless, the conclusion that the negativity issue is inherent to all efficient classical approximations to noisy Boson Sampling may be premature. I outline the direction for a whole new program, which seems to point to a solution. However, its success depends on the asymptotic behavior of the symmetric group characters, which is not known.
Classical sampling from noisy Boson Sampling and the negative probabilities
V. S. Shchesnovich
August 12, 2023
================================================================================
§ INTRODUCTION
The Boson Sampling model <cit.> is one of the proposals for near-term quantum advantage with intermediate-size quantum systems <cit.>. Its appeal is that it does not involve interactions between the quantum subsystems (individual bosons): the promised advantage over classical simulations comes solely from Bose-Einstein statistics. On the experimental side, photons are a suitable source of non-interacting bosons, and Boson Sampling with N=20 photons has been demonstrated experimentally <cit.>, still short of the believed threshold of N≈ 50 bosons <cit.> for an advantage over digital computers. The focus has since shifted to the so-called Gaussian Boson Sampling <cit.> with squeezed states of light at the input, which scales much better in experiments <cit.>. On the other hand, one has to keep in mind that realistic sources and other parts of the setup feature some amount of noise. Realistic photon sources, for instance, produce only partially indistinguishable photons, due to imperfect matching of their internal states or optical path mismatch during propagation in a realistic interferometer. This and other sources of noise may severely affect the possible quantum advantage by allowing an efficient approximate classical sampling. In this respect the single-photon Boson Sampling model allows for an analytical analysis of how the quantum advantage is affected by the inevitable experimental noise due to uncontrolled partial distinguishability of photons, with the completely distinguishable photons being the classical limit. In Ref. <cit.> it was shown that, by imposing a cut-off on the higher orders of multi-photon interference and classically simulating the resulting approximate model, one can efficiently sample classically, to a small N-independent error, from such a noisy Boson Sampling. A similar approach can be employed for other noise sources, such as noise in the interferometer <cit.>, due to the equivalence links between different noise models <cit.>. The approach of Ref. <cit.> requires efficient computation of the joint transition probabilities of completely distinguishable bosons by employing the JSV algorithm <cit.>. However, this is a strange feature of a sampling algorithm, since completely distinguishable bosons behave as classical particles: they can be sampled from in linear time, e.g., by sending the particles one by one through the interferometer.
The main objective of the present work is to investigate whether one can do better than the algorithm of Ref. <cit.>, especially in dealing with the classical particles. A better algorithm would be one which is polynomial in the total number of bosons for any finite value of the distinguishability parameter and which does not rely on computing the probabilities of joint transitions of the completely distinguishable bosons (classical particles). An algorithm satisfying the second condition was presented in Ref. <cit.>; however, it cannot be polynomial in the total number of bosons for a finite distinguishability parameter.
The text is organized as follows. In the next section, section <ref>, I summarize the appropriate description of partially distinguishable bosons <cit.> and formulate the condition on the partial distinguishability function that guarantees a proper probability distribution. In section <ref> I recall the approach of Ref. <cit.> of simulating Boson Sampling with multi-boson interferences up to a fixed order R and the reasons why it works. In section <ref>, the quantum probability factor truncated to lower-order interference terms is shown to have some finite negativity, thus preventing direct sampling from the convex-sum expansion for the probability. In section <ref>, it is shown that the approach of Ref. <cit.> produces a proper probability distribution if the distinguishability parameter satisfies x≤1/(N-K) for a free parameter K, at the heavy price of a sampling complexity scaling as O(K^2 4^K). Finally, in section <ref> I outline the direction for a new program, which might permit one to find proper approximating distributions for the noisy Boson Sampling distribution. Section <ref> contains a short summary of the results.
§ MODELS OF DISTINGUISHABILITY WITH PROPER PROBABILITY DISTRIBUTIONS
The Boson Sampling model applies a unitary transformation U of size M× M to the Fock state of N indistinguishable bosons occupying different input ports, say k=1,…, N, producing an output Fock state; we work in the so-called no-collision regime, when all bosons end up in different output ports l_1,…, l_N (the probability of this in a random multiport is estimated to be 1-O(N^2/M) <cit.>).
p(l_1,…,l_N) = |∑_σ∏_k=1^N U_σ(k),l_k|^2
= | U[1,…,N|l_1,…,l_N]|^2,
where the amplitude is given by the matrix permanent <cit.>, i.e., the summation is over all permutations σ of N objects (a.k.a. the symmetric group S_N).
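For small N, the probability above can be evaluated directly from its definition. The brute-force sketch below is only an illustration of the notation (rows of U are the occupied input ports, columns are the output ports l_1,…,l_N, in 0-based indexing); its O(N!·N) cost is why the Ryser/Glynn algorithms discussed later are used in practice.

```python
import itertools
import numpy as np

def permanent(A):
    """Permanent from its definition, as a sum over all permutations; small n only."""
    n = A.shape[0]
    return sum(np.prod([A[sigma[k], k] for k in range(n)])
               for sigma in itertools.permutations(range(n)))

def ideal_probability(U, outputs):
    """|perm U[1..N | l_1..l_N]|^2 for bosons entering input ports 0..N-1."""
    N = len(outputs)
    sub = U[np.ix_(range(N), outputs)]   # rows: occupied inputs, columns: output ports
    return abs(permanent(sub)) ** 2
```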
Realistic description of a physical setup (e.g., with photons) must account for distinguishability of bosons. This can be done by introducing a single function J(σ) on the symmetric group S_N, called the distinguishability function, which weights the path-dependent interferences of N bosons in a quantum amplitude and its conjugate for different permutations σ, i.e., the expression in Eq. (<ref>) generalizes <cit.> as follows
p^(J)(l_1,…,l_N) = ∑_σ_1,σ_2 J(σ_1σ_2^-1)∏_k=1^N U^*_σ_1(k),l_kU_σ_2(k),l_k.
The distinguishability function J(σ) reflects the internal states of bosons, described by the density matrix ρ̂^(N)∈ H^⊗ N, in the tensor product of Hilbert spaces H of each boson, and is given by the trace-product with the unitary representation P̂_σ of σ in H^⊗ N:
J(σ) = tr( P̂_σρ̂^(N)).
When the internal state is completely symmetric, i.e. P̂_σρ̂^(N) = ρ̂^(N), the bosons in the ρ̂ are completely indistinguishable <cit.>, and we get the probability as in Eq. (<ref>); the completely distinguishable bosons correspond to J(σ) = δ_σ,I, where I is the trivial permutation, with the probability in Eq. (<ref>) being equal to the matrix permanent of a positive matrix with elements |U_kl|^2.
The J-function of Eq. (<ref>) is positive definite, i.e., for an arbitrary function Z_σ on S_N we have
∑_σ_1,σ_2 J(σ_1σ_2^-1)Z_σ_1Z^*_σ_2≥ 0.
Observe that Eq. (<ref>) presupposes that J^*(σ) = J(σ^-1), which is readily satisfied by J(σ) of Eq. (<ref>) due to the unitarity of the symmetric group representation, P^†_σ = P_σ^-1. The group property P̂_σ_1σ_2^-1 = P̂_σ_1P̂^†_σ_2 guarantees the positive definiteness of J in Eq. (<ref>):
tr( ∑_σ_1Z_σ_1P̂_σ_1ρ̂^(N)∑_σ_2Z^*_σ_2P̂^†_σ_2) ≥ 0,
since the density matrix of the internal state of bosons ρ̂ is a positive definite operator in H^⊗ N. Another important property is the normalization J(I) =1 (i.e., the sum of probabilities must be equal to 1). These two properties guarantee that the probabilities in Eq. (<ref>) constitute a proper distribution. It has been shown in Ref. <cit.> that all normalized positive definite functions on the symmetric group S_N can be represented in the form of Eq. (<ref>) with some (possibly entangled) mixed state ρ̂∈ H^⊗ N of N single bosons. Thus, in dealing with partially distinguishable bosons, one can choose to work either with the internal state ρ̂ or with the distinguishability function description.
On the other hand performing an arbitrary approximation in the expression for the proper probability distribution may result in a non-proper one.
The model of Ref. <cit.> is obtained by considering that bosons are in some pure internal states having a uniform cross-state overlap x, or, alternatively, considering that the boson at input port k is in the following mixed state
ρ̂_k = x|ϕ_0⟩⟨ϕ_0| +(1-x) |ϕ_k⟩⟨ϕ_k|, ⟨ϕ_i|ϕ_j⟩ = δ_ij,
in this case ρ̂^(N) = ρ̂_1⊗…⊗ρ̂_N (see also Ref. <cit.>). A non-proper approximation to the proper distribution is obtained in Ref. <cit.> by imposing a cut-off on the two sums in Eq. (<ref>), such that the minimum number of fixed points in the relative permutation (i.e., the number of 1-cycles C_1) is bounded from below: C_1(σ_1σ^-1_2) ≥ N-R for some fixed (N-independent) R. The distinguishability function J_N(σ) = x^N-C_1(σ) of the model in Eq. (<ref>) is replaced accordingly by the following function
J_R(σ) = ∑_m=N-R^N x^N-mδ_C_1(σ),m,
which does not satisfy the positivity property of Eq. (<ref>). To see this, and the underlying reason for the lost positivity, let us formulate a more general positivity condition valid for all models of a similar type, i.e.,
F(σ) = ∑_m=0^N a_m δ_C_1(σ),m,
where a_m are real parameters satisfying the normalization condition F(I)=1, i.e., a_N=1. To this end we expand δ_C_1(σ),m over the partitions of the set {1,…, N} into a variable set of m fixed points ^(m) = (k_1,…, k_m) and its complement (i.e., the set of derangements). Then
δ_C_1(σ),m = ∑_^(m)( ∏_k∈^(m)δ_σ(k),k)∏_k∉^(m) (1-δ_σ(k),k)
= ∑_n=m^N \binom{n}{m} (-1)^n-m∑_^(n)∏_k∈^(n)δ_σ(k),k,
where we have expanded the second factor and combined the fixed points into a bigger set ^(n) of size n. Now we substitute the expansion of Eq. (<ref>) into the model Eq. (<ref>) and exchange the order of summation
F(σ) = ∑_m=0^N a_m δ_C_1(σ),m
= ∑_n=0^N ∑_m=n^N nm (-1)^n-ma_m ∏_k∈^(n)δ_σ(k),k.
For the function in Eq. (<ref>) to be positive definite it is sufficient to require the coefficients at the functions j_^(n)(σ) = ∏_k∈^(n)δ_σ(k),k to be positive, since j_^(n)(σ) is a positive definite function, e.g., for ^(n) = (1,…,n)≡ [n] we have the following representation
j_[n](σ) = Tr{P̂_σ (⊗_k=1^n |ϕ_k⟩⟨ϕ_k|) ⊗(|ϕ_0⟩⟨ϕ_0|)^⊗ (N-n)},
with ⟨ϕ_i|ϕ_j⟩=δ_ij, which is obviously positive definite. Setting the coefficients at the positive definite functions j_^(n)(σ) in Eq. (<ref>) to be
b_n ≡∑_m=0^n \binom{n}{m} (-1)^n-m a_m
and inverting the summation in their definition we get the conditions for the positivity of the function F(σ) in Eq. (<ref>):
a_m = ∑_n=0^m \binom{m}{n} b_n, b_n≥ 0.
Therefore, among the functions F(σ) in the form given by Eq. (<ref>) the functions
F^(+)(σ) = ∑_n=0^N b_n ∑_m=n^N \binom{m}{n} δ_C_1(σ),m, b_n≥ 0
are positive definite functions on S_N.
The partial distinguishability model of Ref. <cit.>, i.e., J_N(σ)=x^N-C_1(σ), can also be cast in the form of Eq. (<ref>) with b_n = x^N-n(1-x)^n (observe that 0≤ x≤ 1, see Eq. (<ref>)), a fact that can be verified directly by exchanging the order of summation and using the binomial theorem.
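This binomial-theorem argument can also be checked numerically. The snippet below verifies, for small N, that the coefficients a_m = x^{N-m} of J_N(σ)=x^{N-C_1(σ)} are reproduced by b_n = x^{N-n}(1-x)^n through the relation a_m = Σ_n \binom{m}{n} b_n; it is a small self-contained verification, not part of the original derivation.

```python
from math import comb

def check_decomposition(x, N, tol=1e-12):
    """Verify a_m = x**(N-m) equals sum_{n<=m} C(m,n) * b_n with b_n = x**(N-n)*(1-x)**n."""
    for m in range(N + 1):
        lhs = x ** (N - m)
        rhs = sum(comb(m, n) * x ** (N - n) * (1 - x) ** n for n in range(m + 1))
        assert abs(lhs - rhs) < tol
    return True

print(check_decomposition(x=0.3, N=10))   # True
```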
The physical interpretation of the condition in Eq. (<ref>) follows from application of the expansion of Eq. (<ref>) for each term with positive coefficient in Eq. (<ref>). We obtain the relation
∑_m=n^N \binom{m}{n} δ_C_1(σ),m = ∑_^(n)∏_k∈^(n)δ_σ(k),k = ∑_^(n) j_^(n)(σ).
When substituted into the output probability distribution of Eq. (<ref>), the basic distinguishability function j_^(n)(σ) imposes the same permutation of bosons in the quantum amplitude and its conjugate, σ_1(k) = σ_2(k) for k∈^(n), i.e., the bosons from the inputs k∈^(n) behave as completely distinguishable bosons (i.e., as classical particles), and the output probability factorizes into a product of those for N-n indistinguishable and n distinguishable bosons.
Similarly, in the case of J-function of Ref. <cit.>, i.e., J_N(σ)=x^N-C_1(σ) the probability of Eq. (<ref>) expands as follows (exchanging n and N-n, for future convenience)
p^(J_N)(l_1… l_N) = ∑_n=0^N x^n(1-x)^N-n
×∑_^(n)∑_^(n) p^(J_N,x=1)(^(n)|^(n))
|U|^2[^(N-n)|^(N-n) ],
where (^(n),^(N-n)) and (^(n),^(N-n)) are some permutations of [N], the subset ^(n) = (l_α_1,…, l_α_n) is the set of output ports of n indistinguishable bosons (J_N=1 for x=1), and |U|^2[^(N-n)|^(N-n) ] is the classical probability (the matrix permanent of the matrix |U_kl|^2), which can be cast in the form of Eq. (<ref>) with J_0(σ) = δ_σ,I (see Ref. <cit.> for more details).
The cut-off model with J_R of Eq. (<ref>) does not satisfy the positive definiteness condition formulated in Eq. (<ref>), which implies that the approximation with such a distinguishability function may allow some probabilities to become negative (note that the normalization condition J_R(I)=1 is satisfied, thus the probabilities must still sum to 1). From Eqs. (<ref>) and (<ref>), by comparing with Eq. (<ref>), one can expand the probability of the cut-off model as follows
p^(J_R)(l_1… l_N) = ∑_n=0^N x^n(1-x)^N-n
×∑_^(n)∑_^(n) p^(J_R,x=1)(^(n)|^(n))
|U|^2[^(N-n)|^(N-n) ],
where p^(J_R,x=1 )(^(n)|^(n)) is given by Eq. (<ref>) with the J_R of Eq. (<ref>) for x=1 and may become negative.
By the expression in Eq. (<ref>), it involves some mutually dependent diagonals of U (here a "diagonal" is a product of elements of U on distinct rows and distinct columns), such as Z_σ = ∏_k=1^n U_σ(k),l_k (with σ∈ S_n), which have only up to n^2 free parameters instead of the n! of an arbitrary Z_σ. We therefore need to find out the amount of negativity in the subspace spanned by the above diagonals, and not in the entire n!-dimensional vector space of Z_σ.
As N→∞, for a finite x=O(1), one can estimate the number of computations for direct sampling from the output distribution in the form of Eq. (<ref>) by observing that the binomial distribution \binom{N}{n} x^n(1-x)^N-n becomes sharply concentrated in a small interval of size O(√(x(1-x)N)) centered at n̅ = xN. Thus computations of matrix permanents of average size xN are required for sampling at an error vanishing as O(N^-1/2). Therefore, by similar arguments as in Ref. <cit.>, asymptotically in N, the sampling complexity can be estimated to be the same as that of the ideal Boson Sampling with only xN bosons (a similar observation was used in Ref. <cit.>). This observation sets the baseline for the number of computations in an approximate model, such as Eq. (<ref>).
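The structure of the convex-sum expansion suggests the following sampling skeleton (a schematic sketch only): draw the number n of interfering bosons from the binomial distribution, pick the corresponding input subset at random, and delegate the two factors to separate samplers. Here `sample_ideal` stands for any exact sampler of n indistinguishable bosons (the computationally hard part) and is assumed as a black box; possible collisions between the two groups of output ports, rare for M≫N^2, are ignored in this sketch.

```python
import numpy as np

def sample_noisy_boson_sampling(U, N, x, sample_ideal, rng=None):
    """Draw one output configuration from the convex-sum form: a binomial number n of
    bosons interfere (ideal Boson Sampling on a random input subset), the remaining
    N-n behave classically and are routed independently through the multiport."""
    rng = rng or np.random.default_rng()
    n = rng.binomial(N, x)
    inputs = rng.permutation(N)
    quantum_in, classical_in = inputs[:n], inputs[n:]
    out_quantum = sample_ideal(U, quantum_in)        # assumed black-box exact sampler
    probs = np.abs(U[classical_in, :]) ** 2          # classical particles: independent routing
    out_classical = [rng.choice(U.shape[1], p=row / row.sum()) for row in probs]
    return sorted(list(out_quantum) + out_classical)
```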
§ APPROXIMATE CLASSICAL SIMULATION OF PARTIALLY DISTINGUISHABLE BOSONS
Let us now recall the main ideas of Ref. <cit.> for the efficient approximate classical simulation of Boson Sampling with partially distinguishable bosons. In the exposition of some of the essential steps we will follow also Ref. <cit.>. They are as follows.
(I).– One considers the total variation distance 𝒟=1/2∑ |p^(J_N)-p^(J_R)| between the distributions in Eqs. (<ref>) and (<ref>) averaged over the random multiports U chosen according to the Haar measure. For M≫ N^2, up to a small error, one can use the Gaussian approximation with independent Gaussian distribution for each U_kl<cit.> instead. In the latter case it can be shown that (see derivation in Ref. <cit.>)
⟨𝒟⟩_U < 1/2√(1 + e/(R+2)!)x^R+1/√(1- x^2).
Having shown that the two probability distributions, one proper and one improper, are close on average, one can then use the Markov inequality to bound the total variation distance in probability, at the cost of some non-zero probability of failure <cit.>.
(II).– One shows that the total amount of computations necessary for obtaining a single probability in the cut-off model scales as O(R2^R N^2R) and that a small fixed error requires only R=O(1). To this end one can use the expression in Eq. (<ref>), substituting the distinguishability function of Eq. (<ref>) expressed as a sum over the derangements, J_R(σ) = ∑_n=0^R x^n ℐ(D^(N)_n), where ℐ(D^(N)_n) is the indicator on the subset D^(N)_n of permutations in S_N with N-n fixed points, i.e., the derangements of n elements. The described expansion reads (we suppress the output port indices; see also Ref. <cit.>)
p^(J_R) = ∑_n=0^R x^n𝒰(D^(N)_n),
𝒰(D^(N)_n) ≡∑_σ∈ D^(N)_n∑_τ∏_k=1^N U^*_σ[τ(k)],l_kU_τ(k),l_k.
By splitting the input N indices into the n derangements, ^(n), and N-n fixed points, ^(N-n), we can expand the expression in Eq. (<ref>) as follows
𝒰(D^(N)_n) = ∑_^(n)∑_^(n)𝒰(D^(n)_n) |U|^2(^(N-n)|^(N-n)),
where the summation over the partition (^(n),^(N-n)) of the output ports appears as the result of factoring the second permutation in the probability formula in Eq. (<ref>) as τ = (τ_1⊗τ_2)μ with μ∈ S_N/S_n⊗ S_N-n, τ_1∈ S_n and τ_2∈ S_N-n acting on ^(n) and ^(N-n), respectively. Now, the first factor 𝒰(D^(n)_n) in Eq. (<ref>) contains only the derangements D^(n)_n⊂ S_n, in the subset of indices ^(n), and the second factor only the fixed points (a probability of N-n classical particles). The derangements do not represent any probability at all (they appear in complex conjugate pairs, since the inverse permutation to a derangement in D_n is also a derangement in the same subset D_n). They can be computed using either Ryser's or Glynn's algorithm (see also section <ref> below, where such computations are discussed in detail for another purpose: checking the negativity), whereas the classical probabilities are estimated to a small relative error ϵ by the probabilistic JSV algorithm <cit.> in time polynomial in (N-n,1/ϵ).
(III).– One can sample from the approximate probability distribution of Eq. (<ref>) by using an algorithm similar to that of Ref. <cit.>, provided one can compute the probability with an acceptable relative error, i.e., on the order of the bound on the total variation distance. However, though the cut-off model does not satisfy the positivity constraint of Eq. (<ref>), the negativity is automatically bounded as desired. Indeed, when two distributions, one proper and one improper, are at a total variation distance ϵ, the amount of negativity (the sum of the negative probabilities) is bounded by 2ϵ, by the simple fact that the total variation distance is the maximum of the difference in probability. It still remains to find out the effect of the relative error introduced by the probabilistic JSV algorithm. In Ref. <cit.> the average difference in probability ⟨ |p^(J=1) - p^(J_R)|⟩ for a given output is bounded. Since the average probability ⟨ p^(J=1)⟩ = ⟨ p^(J)⟩ in a random U is uniform, the maximum relative error in a probability, including that introduced by the JSV algorithm, translates into an absolute error on the total variation distance (similarly to the ideal case of Boson Sampling <cit.>; this feature for partially distinguishable bosons has been discussed before in Ref. <cit.>).
For the discussion below, let us recall the key points on the averaging of the individual terms in the sum over the derangements, 𝒰(D^(N)_n), in Eq. (<ref>)
T(σ) ≡∑_τ∏_k=1^N U^*_σ[τ(k)],l_kU_τ(k),l_k,
over the Haar-random multiport (in the Gaussian approximation for M≫ N^2; below we follow the derivation in Ref. <cit.>). We have
⟨ T(σ)⟩_U = δ_σ,IN!/M^N,
⟨ T(σ)T^*(π) ⟩_U = δ_σ,πN!/M^2Nχ(C_1(σ)),
where χ(n) = n!∑_k=0^n 1/k!. The first line in Eq. (<ref>) gives the non-zero average of a product of independent Gaussian random variables from a diagonal of U and that of U^* coinciding only for σ=I. The second line in Eq. (<ref>) follows from a result in Appendix A of Ref. <cit.> on the averaging of a product of four diagonals, two of U and two of U^*:
⟨∏_k=1^N U^*_σ[τ(k)],l_kU_τ(k),l_k U^*_σ^'[τ^'(k)],l_kU_τ^'(k),l_k⟩_U
= 2^C_1(π)/M^2Nδ_σ^',σ^-1δ_τ^',(π⊗ I)στ,
where in the second delta-symbol permutation π acts on the fixed points and I on the derangements of σ. The factor N! in Eq. (<ref>) accounts for the sum over τ∈ S_N in Eq. (<ref>), whereas χ(n) is the sum over π∈ S_C_1(σ) as follows
χ(n) = ∑_π∈ S_n 2^C_1(π) = ∫_1^∞dt e^1-tt^n.
We have uncorrelated terms T(σ) in the summation in Eq. (<ref>), where only the classical term T(I) has a non-zero average. The same applies to the similar terms in 𝒰(D^(n)_n) in Eq. (<ref>) (with N substituted by n), whose average is always zero. The bound in Eq. (<ref>) follows from bounding the probability difference by its variance, ⟨ X⟩≤√(⟨ X^2⟩), where the second moment of the difference equals its variance because the two distributions have the same average probability. Moreover, the square root of the variance is bounded by the inverse of the total number of probabilities in the output distribution, N!/M^N (in the no-collision regime). These observations allow for the derivation of the bound in Eq. (<ref>).
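The combinatorial identity for χ(n) used above can be checked directly by brute force for small n (a quick verification only):

```python
import itertools
from math import factorial

def chi_bruteforce(n):
    """chi(n) = sum over all permutations in S_n of 2**(number of fixed points)."""
    return sum(2 ** sum(p[i] == i for i in range(n))
               for p in itertools.permutations(range(n)))

def chi_closed(n):
    """chi(n) = n! * sum_{k=0}^{n} 1/k!"""
    return factorial(n) * sum(1 / factorial(k) for k in range(n + 1))

for n in range(1, 8):
    assert abs(chi_bruteforce(n) - chi_closed(n)) < 1e-6
```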
The approach of Ref. <cit.> leads to an approximate sampling algorithm for partially distinguishable bosons, at least in the model considered in Eq. (<ref>).
Our main goal below is to investigate whether one can do any better. For instance, can one find another expansion of the same probability, alternative to that of Eq. (<ref>), in order to sample from the classical probabilities in linear time instead of employing the numerically intricate probabilistic JSV algorithm?
Such an alternative expansion will be discussed in the following section.
The model of Eq. (<ref>) is also applicable to experimental Boson Sampling with photons (the most important reason to keep this model for further discussion). It can serve as a description of a realistic optical setup, since one has an infinite number of free parameters in a photon state (its spectral shape), and the experimental photons are described by mixed states. If we assume that the latter are close, with probability x, to a pure state |ϕ_0⟩ and have a long tail over the orthogonal complement, then in Eq. (<ref>) the orthogonal complementary states |ϕ_i⟩, one for each photon, describe N states selected at random from the infinite-dimensional subspace of states orthogonal to |ϕ_0⟩.
§ ESTIMATING THE NEGATIVITY
Below, numerical evidence of negativity in the quantum factors in the convex-sum expansion of the output probability distribution is presented. There are two types of negativity: the negativity in the approximate probability distribution, reported in Ref. <cit.>, and the negativity in a factor of the convex-sum expression in Eq. (<ref>). The latter negativity will be numerically estimated below; it prevents direct sampling from the convex-sum expression of the approximate probability and forces one to resort to the JSV algorithm for computation of the classical probabilities.
The probability factor p^(J_R,x=1 )(^(n)|^(n)) in Eq. (<ref>) has J_R of Eq. (<ref>) now for a variable number of bosons n satisfying n≥ R (for n<R the quantum probability is positive) and x=1. As discussed in section <ref>, the lack of the positive definiteness of the distinguishability function causes some of p^(J_R,x=1 )(^(n)|^(n)) to become negative. But how much negativity is there? Since this depends on the multiport matrix U projection on the negative subspace of the non-positive definite distinguishability function J_R(σ_1σ^-1_2), considered as the matrix element indexed by permutations σ_1,σ_2 as in Eqs. (<ref>) and (<ref>), one can resort to numerical simulations to estimate negativity on average in a randomly chosen multiport U.
We only have to calculate the probability for ^(n)=^(n)= (1,…,n) (since we consider a randomly chosen multiport U)
p^(J_R,x=1 ) = ∑_s=0^R 𝒰(D^(n)_s),
𝒰(D^(n)_s) = ∑_σ∈ D^(n)_s∑_τ∈ S_n∏_k=1^n U^*_σ[τ(k)],kU_τ(k),k.
Before proceeding to discuss the numerical data, let us see what one can expect from an analytical estimate. To this end we can apply the averaging over the Haar-random multiport U, employing the Gaussian approximation given by Eqs. (<ref>)-(<ref>), in order to estimate the variance.
First of all, let us apply this approach to the full quantum probability p^(J=1) (using n temporarily as the total number of bosons), obtained by setting R=n in Eq. (<ref>), i.e., to a positive probability. We get the following results (derived before in Ref. <cit.>):
⟨ p^(J=1)⟩_U = ⟨∑_s=0^n 𝒰(D^(n)_s) ⟩_U = ∑_s=0^n δ_s,0 n!/M^n = n!/M^n
and
⟨p^(J=1 )^2⟩_U = ⟨∑_s,t=0^n 𝒰(D^(n)_s) 𝒰(D^(n)_t)⟩_U
= n!/M^2n∑_σχ(C_1(σ))
= n!/M^2n∫_1^∞ dt e^1-t∑_σ t^C_1(σ)
=(n+1)(n!/M^n)^2,
where the cycle sum over the symmetric group has been used <cit.>
∑_σ∈ S_n t^C_1(σ) = (d/ dz)^n|_z=0 e^(t-1)z/(1-z)
to perform the integration over t. Moreover, the terms with different derangements 𝒰(D^(n)_0),…, 𝒰(D^(n)_n) are mutually uncorrelated by Eq. (<ref>) (since the respective subsets of permutations are non-overlapping), each contributing to the variance the square of the average of the classical term (with σ = I), ⟨𝒰(D^(n)_0)⟩ = n!/M^n. The above calculation illustrates the following point: in the Gaussian approximation of the Haar-random multiport U, the probability, positive by definition, is a sum of n+1 uncorrelated random variables, with only the classical term having a non-zero average (equal to the average probability) and the rest having zero average. The probability of partially distinguishable bosons with J_R of Eq. (<ref>) is composed of the same random variables, with, however, one important difference: they are multiplied by the respective powers of x<1, and thus the variances are weighted accordingly. This observation allows one to estimate the average total amount of negativity in the cut-off model, reported before in Ref. <cit.>: it is bounded by the square root of the variance of the terms subtracted from the full quantum probability, i.e., the terms with s=R+1,…, n, multiplied by the respective powers of x, giving the same bound as in Eq. (<ref>).
Now we can return to the probability factor given in Eq. (<ref>). In this case x=1 and, therefore, the terms subject to the cut-off are not weighted by powers of a small parameter. It is then tempting to conclude that there is a finite amount of negativity. On the other hand, if we applied the same arguments to the full quantum probability, we would get the same conclusion. The error in such a conclusion lies in the assumption that uncorrelated terms contribute independently; random variables can be uncorrelated and still dependent on one another (e.g., in a nonlinear way: take x and x^2 for a random x with the symmetry x→ -x). Therefore, though the above arguments allow one to explain a small amount of negativity in the cut-off model, due to the higher powers of a small parameter x, they do not allow us to draw conclusions about the negativity of the probability in Eq. (<ref>). One must therefore resort to numerical simulations.
To numerically compute the sums involving averaging over the derangements σ in Eq. (<ref>) we adopt the method employed for computation of the matrix permanents <cit.> (i.e., for averaging over the whole symmetric group). One way is to use the inclusion-exclusion principle when averaging over the symmetric group as in Ryser's algorithm <cit.>. Denoting by ^(t) the excluded subset of size t from [n]={1,…, n}, we have
∑_τ∈ S_n∏_k=1^n U^*_σ[τ(k)],kU_τ(k),k
= ∑_t=0^n-1 (-1)^t∑_^(t)∏_k=1^n ( ∑_l∉^(t)U^*_σ(k),l U_kl)
(for t=n the product of diagonals becomes empty). Now we need to perform the second summation over the derangements σ∈ D^(n)_s in Eq. (<ref>). To this goal we introduce a tensor function W(ξ) of a dummy variable ξ as follows:
U^*_j,l U_kl → W_j,k,l(ξ) = \begin{cases} U^*_j,l U_kl, & j ≠ k, \\ ξ |U_kl|^2, & j = k. \end{cases}
With this definition, the average over the derangements in Eq. (<ref>) becomes a Taylor expansion term in ξ of the respective average over the symmetric group,
∑_σ∈ D^(n)_s∏_k=1^n U^*_σ(k),kU_k,k
= 1/s!(d/dξ)^s_ξ=0∑_σ∈ S_n∏_k=1^n W_σ(k),k,l(ξ),
to which we can apply the same inclusion-exclusion method as in Eq. (<ref>). Finally, combining the two independent summations in Eqs. (<ref>) and (<ref>) and evaluating the derivative of the polynomial function in ξ by an appropriate averaging over the discrete phase ξ = e^iθ(q), where θ(q) = 2π/n+1q and q=0,…, n, we obtain the final formula for the double sum as follows
𝒰(D^(n)_s) = 1/n+1∑_q=0^n e^-iθ(q)s∑_t=0^n-1 (-1)^t∑_^(t)∑_f=0^n-1 (-1)^f ∑_^(f)
∏_k=1^n ( ∑_l∉^(t)∑_j∉^(f) W_j,k,l(e^iθ(q))).
To reduce the number of computations one can perform summation in Eq. (<ref>) of the exponent over s from the required set before summations over the inclusion-exclusion sets.
The same approach can be used to adapt Glynn's algorithm <cit.>, with some advantage from using recursive computations as in Ref. <cit.>. The above algorithm requires O(n^2 4^n) computations, where the base 4 is due to the double summation over the inclusion-exclusion sets. It allows one to compute the probability distribution over a Haar-random multiport on a personal computer for a small number of bosons (n≤ 12).
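As a cross-check of such computations, the quantities 𝒰(D^(n)_s) and the truncated probability can also be evaluated by brute force for small n, enumerating the permutations σ with n-s fixed points explicitly. The reference implementation below has (n!)^2 cost and is meant only for validating the O(n^2 4^n) algorithm described above; U is assumed to be the n×n submatrix with rows the occupied input ports and columns the output ports 1,…,n, as in the text.

```python
import itertools
import numpy as np

def U_of_D(U, n, s):
    """Brute-force U(D^{(n)}_s): sum over sigma in S_n with n-s fixed points and over all tau
    of prod_k conj(U[sigma(tau(k)), k]) * U[tau(k), k].
    The result is real, since D_s is closed under inversion (conjugate pairs of terms)."""
    perms = list(itertools.permutations(range(n)))
    total = 0.0 + 0.0j
    for sigma in perms:
        if sum(sigma[i] == i for i in range(n)) != n - s:
            continue
        for tau in perms:
            term = 1.0 + 0.0j
            for k in range(n):
                term *= np.conj(U[sigma[tau[k]], k]) * U[tau[k], k]
            total += term
    return total.real

def truncated_probability(U, n, R):
    """p^{(J_R, x=1)}, truncated at interference order R; may come out negative for R < n."""
    return sum(U_of_D(U, n, s) for s in range(R + 1))
```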
The results of the numerical simulations are presented in Fig. <ref>, where we give the distribution of the probability factor in Eq. (<ref>) over complex-valued matrices with independent Gaussian-distributed elements (in total 5000 matrices were used) for n=7 and R=4. Similar distributions in a random multiport were observed for other values of n and various R<n. The odds of obtaining a negative probability in a random multiport remain bounded by ∼10% for all sets of n and R used in the simulations. Since the numerical simulations are limited to small n, one cannot make any definite conclusions about the negativity behavior for large numbers n∼ N≫ 1.
§ POSITIVITY OF THE CUT-OFF MODEL
In section <ref> we obtained two equivalent representations of the cut-off model of distinguishability with the function J_R of Eq. (<ref>): one is given in Eq. (<ref>) and the other in Eqs. (<ref>)-(<ref>). Let us find out whether it admits yet another form, where the derangements are incorporated into some full probabilities that, at least in the majority of cases (recall that the whole distribution is improper, allowing for negative content on the order of the cut-off error), can be positive; this would allow one to avoid using the probabilistic JSV algorithm for the classical probabilities. Obviously, one cannot break a derangement in such a rearrangement (the cycles in the independent cycle decomposition of a permutation are irreducible), thus the only hope is to break up the classical term. Indeed, the latter is a convex sum, since the classical particles pass through the multiport U independently. We will use the following identity for some R≤ K≤ N (recall that n≤ R)
|U|^2 (^(N-n)|^(N-n))
= \binom{N-n}{K-n}^{-1}∑_^'^(K-n)∑_^'^(K-n)|U|^2(^'^(K-n)|^'^(K-n))
×|U|^2(^'^(N-K)|^'^(N-K)),
(the primed indices give partitions of the corresponding unprimed ones) which is obtained by expanding the matrix permanent twice, on the rows and on the columns. The inverse binomial compensates for the double counting in choosing a subset of K-n primed indices from the total N-n unprimed ones twice, once for the rows and another time for the columns (the expansion of a matrix permanent either over the rows or over the columns involves only one such choice). Inserting the identity of Eq. (<ref>) into Eq. (<ref>) and the result into the probability of Eq. (<ref>), identifying a factor similar to that in Eq. (<ref>) but now for K bosons, one obtains the rearranged form for the probability as follows
p^(J_R) = ∑_^(K)∑_^(K) p^(J^(K)_R)(^(K)|^(K))
×|U|^2(^(N-K)|^(N-K)),
where (^(K),^(N-K)) is a partition of N input and (^(K),^(N-K)) of N output indices (omitted in p^(J_R)) and the first factor reads
p^(J^(K)_R)(^(K)|^(K)) = ∑_n=0^R \binom{N-n}{K-n}^{-1} x^n 𝒰(D^(K)_n)
Eq24
with 𝒰(D^(K)_n) still being defined by Eq. (<ref>) now for K bosons instead of N.
The inverse binomial accounts for the double counting in the two-stage choice of n out of N indices: first we choose K indices from N and then n from K, thus overcounting by the \binom{N-n}{K-n} choices of K-n indices from the remaining N-n indices, which are complementary to the chosen n indices:
\binom{N}{n} = \binom{N}{K}\binom{K}{n} \Big/ \binom{N-n}{K-n}.
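This double-counting identity is elementary; a quick numerical check (illustrative only):

```python
from math import comb

# C(N,K)*C(K,n) counts "choose K of N, then n of K"; C(N,n)*C(N-n,K-n) counts the same
# selections in the reverse order, hence the identity used in the text.
assert all(comb(N, K) * comb(K, n) == comb(N, n) * comb(N - n, K - n)
           for N in range(1, 13) for K in range(0, N + 1) for n in range(0, K + 1))
```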
The physical meaning of the rearrangement we have performed is as follows. For each 0≤ n≤ R we have split the set of N-n classical particles into two subsets of K-n and N-K particles; the former is used to obtain an expression which can formally be interpreted as a probability of K partially distinguishable bosons, Eq. (<ref>). The probability factor in Eq. (<ref>) is not generally positive due to the inverse binomial factor. Observe that the positive-definite physical model of partially distinguishable bosons, obtained by setting K=N in Eqs. (<ref>)-(<ref>),
p^(J_R) = ∑_n=0^N x^n 𝒰(D^(N)_n),
has no such factor. A condition on the parameter x is required to restore the positive-definiteness. This can be found by application of the condition in Eq. (<ref>) to the distinguishability function in Eq. (<ref>), in this case for the total of K bosons (which is a free parameter, apart from the condition K≥ R). The distinguishability function J^(K)_R of Eq. (<ref>) reads (setting m=K-n in Eq. (<ref>))
J^(K)_R(σ) = ∑_m=K-R^K x^K-m/\binom{N-K+m}{m} δ_C_1(σ),m,
a_m = \begin{cases} 0, & 0≤ m <K-R, \\ x^K-m/\binom{N-K+m}{m}, & K-R ≤ m≤ K, \end{cases}
where a_m are the coefficients in the respective F(σ) from Eq. (<ref>).
For positive definiteness of J^(K)_R we have to ensure non-negative coefficients b_K-R,…, b_K, where
b_n ≡∑_m=K-R^n \binom{n}{m} (-1)^{n-m} x^{K-m}/\binom{N-K+m}{m}
(observe that b_1=…= b_{K-R-1}=0 due to a_m=0 for m≤ K-R-1). Eq. (<ref>) imposes a very strict condition on the possible values of the distinguishability parameter x. Using the rising-factorial notation n^(m) = n(n+1)… (n+m-1) we have
b_n = n!  x^{K-n} ∑_m=K-R^n (-x)^{n-m}/(n-m)! · 1/(N-K+1)^{(m)}
= n!  x^{K-n} ∑_s=0^{n-K+R} (-x)^s/s! · 1/(N-K+1)^{(n-s)}.
For n ≪√(N-K) we can approximate the rising factorial as follows
(N-K+1)^{(n-s)} = (N-K)^{n-s} ∏_l=1^{n-s} ( 1 + l/(N-K) )
= (N-K)^{n-s} [1+ O((n-s)^2/(N-K))].
The condition can be always satisfied for R = O(1) and large N by demanding that R≪√(N-K) (recall that, as discussed in section <ref>, a finite R = O(1) is required for a small error in the total variation distance, whereas K≥ R is still free parameter). Substituting the approximation of Eq. (<ref>) into Eq. (<ref>) we obtain
b_n = (n! x^{K-n}/(N-K)^n) ∑_s=0^{n-K+R} (-[N-K]x)^s/s!
× [1+ O(R^2/(N-K))].
The exponential sum in Eq. (<ref>) has a variable upper limit n-K+R and must always be non-negative. Hence, the higher powers of (N-K)x must contribute only a small correction to the sum over the lower powers. Therefore the distinguishability function J^(K)_R(σ) in Eq. (<ref>) becomes positive definite for the distinguishability parameter
x ≤ 1/(N-K).
If one wants to avoid using the JSV algorithm, then at most K = O(ln N) can be used in Eq. (<ref>) for efficient computation of the probability in Eq. (<ref>), since the direct computation requires a number of operations exponential in the number of particles K (by the algorithm of section <ref> the number of computations becomes O(K^2 4^K)). But then Eq. (<ref>) becomes too restrictive on the distinguishability parameter x, as it excludes any fixed x as N scales up. Moreover, the asymptotic average total number of bosons n̅= xN, as discussed in section <ref>, remains bounded as N→∞ for x satisfying Eq. (<ref>). Therefore, this approach fails to produce any advantage.
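The positivity threshold can also be probed numerically. The following minimal Python sketch (an illustration, not part of the original analysis; the parameter values are arbitrary toy choices) evaluates the coefficients b_n defined above exactly and scans the distinguishability parameter x for the largest grid value at which all of b_{K-R},…,b_K remain non-negative, to be compared with the asymptotic bound x ≤ 1/(N-K):

from math import comb

def b_coeffs(N, K, R, x):
    """Coefficients b_n, n = K-R,...,K, of the truncated distinguishability function."""
    return [sum(comb(n, m) * (-1) ** (n - m) * x ** (K - m) / comb(N - K + m, m)
                for m in range(K - R, n + 1))
            for n in range(K - R, K + 1)]

def largest_nonnegative_x(N, K, R, grid=2000):
    """Largest x on a grid over (0, 2/(N-K)] with all b_n >= 0 (illustrative scan)."""
    best = 0.0
    for i in range(1, grid + 1):
        x = 2.0 / (N - K) * i / grid
        if all(b >= -1e-15 for b in b_coeffs(N, K, R, x)):
            best = x
    return best

N, K, R = 50, 10, 3   # toy values with R well below sqrt(N - K)
print(largest_nonnegative_x(N, K, R), "vs the asymptotic bound", 1.0 / (N - K))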
§ PARTIAL DISTINGUISHABILITY AND MATRIX IMMANANTS
The purpose of this section is to show that negativity can be avoided in approximations to the noisy Boson Sampling distribution. However, the new approach demands development of the asymptotic character theory for the symmetric group S_N as N→∞ (e.g., an explicit expression for the character table valid to a vanishing error as N→∞).
The distinguishability function J(σ) = x^{N-C_1(σ)}, besides being experimentally relevant, as discussed at the end of section <ref>, happens to be a class function, i.e., it satisfies the property
F(τστ^-1) = F(σ),
for all σ, τ∈ S_N, since the number of fixed points remains invariant under the similarity transform (the conjugation in S_N) C_1(τστ^-1) = C_1(σ).
This fact allows one to expand such J(σ) over the irreducible characters χ_j(σ) of S_N, which are themselves some positive-definite (though not normalized) functions on S_N (see, for instance, Ref. <cit.>). This approach is not equivalent to the standard application of the Schur-Weyl duality to the action of the group of unitary transformations on the tensor product of the single-boson Hilbert spaces, employed for multi-boson interference of partially distinguishable bosons in Refs. <cit.>, as here the character theory is applied to the distinguishability function as an element of the linear space of class functions on S_N.
We will use the following facts (see, e.g., Ref. <cit.>). First, a character is the trace of a representation of the group, a positive-definite function of the conjugacy class of σ, i.e., of all permutations of the form τστ^-1 for τ∈ S_N. Second, an arbitrary character of the group can be written as a convex sum (precisely, with some non-negative integer coefficients) over the orthogonal basis of the irreducible characters χ_j(σ), where there are as many irreducible characters as conjugacy classes in S_N.
Let us derive the expansion of the distinguishability function J(σ) = x^N-C_1(σ) as a (convex) sum of the irreducible characters of the symmetric group S_N. To this goal, recall that our J(σ) has the form of Eq. (<ref>), where ρ̂^(N)= ρ̂_1⊗…⊗ρ̂_N with ρ̂_k from Eq. (<ref>). Therefore, it can be also cast as follows
J(σ) = ∑_n=0^Nx^n(1-x)^N-nTr__n{P̂_σ} ,
where we have introduced N+1 orthogonal linear subspaces in H^{⊗ N}, where the subspace _n is the linear span of \binom{N}{n} orthogonal
states generated by the action of the permutations μ∈S_N/S_n⊗ S_N-n. In other words, μ is the unique factor in the decomposition σ = (σ_n⊗σ_N-n)μ where μ selects the first n elements of N and (σ_n,σ_N-n) permutes inside the two subsets of sizes n and N-n. Precisely, subspace _n is generated by the unitary operators P̂_μ acting on the base state |Ψ_n⟩≡ |0⟩^⊗ n⊗ |1⟩⊗…⊗ |N-n⟩ composed of some orthogonal states |i⟩∈ H: ⟨ i|j⟩ = δ_ij, i,j = 0,1,… N. We have
_n ≡Span{|μ,n⟩ ; ∀μ∈S_N/S_n⊗ S_N-n},
|μ,n⟩≡P̂_μ|0⟩^⊗ n⊗ |1⟩⊗…⊗ |N-n⟩.
Taking the trace over the subspace _n is equivalent to performing summation of the average values of the operators P̂_μ^-1σμ on the base state in _n, i.e., the trace is the projection on the permutations μ^-1σμ with at least N-n fixed points: n+1,…, N (otherwise we get zero). Denoting ^(N-n)=(μ(n+1),…, μ(N)) and the summation over μ by that over ^(N-n) we get by using the relation in Eq. (<ref>):
Tr__n{P̂_σ} = ∑_^(N-n)∏_k∈^(N-n)δ_σ(k),k
= ∑_m=N-n^N \binom{m}{N-n} δ_{C_1(σ),m}.
The above trace is of the matrix M_μ,μ^'(σ) =⟨Ψ_n| P̂^†_μP̂_σP̂_μ^'|Ψ_n⟩, i.e., the matrix of a linear representation of the symmetric group in _n, therefore, according to the general theory of group characters, the function in Eq. (<ref>) is the corresponding character of the symmetric group (generally reducible). By this fact, we must have
Tr__n{P̂_σ} = ∑_jℳ_n,jχ_j(σ), ℳ_n,j∈𝒩_0.
where the integer ℳ_n,j counts the number of irreducible representations with the character χ_j in the decomposition of the representation in the subspace _n. Two irreducible characters are well known: the trivial one χ_1(σ) =1 (the J-function for indistinguishable bosons) and the sign character χ_2(σ) = sgn(σ) (that for indistinguishable fermions), whereas all others correspond to the so-called matrix immanants <cit.>.
The above suggests the main idea: to consider the expansion resulting from Eqs. (<ref>) and (<ref>)
J(σ) = ∑_jχ_j(σ)∑_n=0^N ℳ_n,j x^n(1-x)^N-n
= ∑_j q_j(x) χ_j(σ)/χ_j(I),
i.e., a convex sum (0≤ q_j(x)≤ 1, ∑_j q_j(x) = 1) over the positive-definite normalized irreducible characters, satisfying all the properties of a distinguishability function, as discussed in section <ref>, generalizing the concept of quantum particles beyond bosons and fermions. The behavior of the coefficients q_j(x) as functions of the distinguishability parameter x could then suggest an approximation which would result in a small total variation distance error between the distributions and, at the same time, retain the positive-definiteness property of the proper distinguishability function.
The outcome of the above idea is far from clear from the outset, since even the distinguishability function of classical particles, J_class(σ) = δ_σ,I, can be expanded as in Eq. (<ref>) over all the irreducible characters, including that of completely indistinguishable bosons (χ_1(σ)), even though classical particles can be sampled in time linear in their total number. On the other hand, asymptotically, as N→∞, it may happen that the contribution from each individual character becomes exponentially small, due to the large number of conjugacy classes in the symmetric group S_N, which coincides with the partition function P(N)∼ e^{π√(2N/3)}.
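To get a feel for the number of conjugacy classes involved, the short sketch below (an illustration only, not taken from the text) counts partitions of N by the standard dynamic-programming recurrence and prints them next to the leading exponential e^{π√(2N/3)}; the full Hardy-Ramanujan asymptotic carries an additional known prefactor 1/(4N√3), which is why the exponential alone overestimates:

from math import exp, pi, sqrt

def partition_count(N):
    """Number of partitions of N, i.e. the number of conjugacy classes of S_N."""
    p = [1] + [0] * N
    for part in range(1, N + 1):      # allow parts of size `part`
        for n in range(part, N + 1):
            p[n] += p[n - part]
    return p[N]

for N in (10, 20, 40, 80):
    print(N, partition_count(N), f"{exp(pi * sqrt(2 * N / 3)):.3e}")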
The above program necessitates knowledge of the asymptotic values of the irreducible characters of the symmetric group S_N as N→∞, i.e., a workable formula, rather than an algorithm, for the asymptotic character tables (their values on the conjugacy classes). Moreover, it necessitates knowing the asymptotic complexity of all the matrix immanants, a subject under intensive investigation (for a recent review, see Ref. <cit.>). Were these two mathematical problems solved, one could use the results in the search for a better classical algorithm for noisy Boson Sampling, where uncontrollable partial distinguishability of bosons serves as noise. There are also equivalence relations between different models of noise in Boson Sampling <cit.>, which could allow general conclusions on the effect of noise.
§ CONCLUSION
The main goal of the present work has been to find a better algorithm for sampling from a noisy Boson Sampling distribution, with the partial distinguishability of bosons serving as noise, which does not require computing the probabilities of multiple transitions of completely distinguishable bosons. All attempts to find such a better algorithm seem to be faced with negativity in the approximating distribution, if the latter is obtained by imposing a cut-off on the order of multi-boson interferences. The negativity in the approximating distribution stems from the lost positive-definiteness of the respective partial distinguishability function. Numerical evidence points to a finite amount of negativity, at least for the small numbers of bosons accessible to numerical simulations. Rewriting the approximate (i.e., truncated) probability distribution for N bosons in total in another, physically clearer form, namely as a convex-sum expansion into terms each corresponding to a smaller total number of bosons K (including as a subset the bosons participating in the lower-order interferences accounted for by the approximation), does not help: in this new form each term with K bosons is guaranteed to be a proper, non-negative probability only for a distinguishability parameter satisfying x≤ 1/(N-K), whereas its computational complexity scales as O(K^2 4^K) by either the modified Ryser or Glynn algorithms. This results in a too narrow region of the distinguishability parameter x, vanishing as (N- O(ln N))^{-1}, if we aim at an algorithm for approximate sampling that is asymptotically polynomial in N.
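For readers unfamiliar with the cost of evaluating permanent-like quantities, here is a minimal sketch of the plain Ryser inclusion-exclusion formula for the permanent of a square matrix (an illustration only; it is not the modified algorithm for partially distinguishable bosons referred to above, and it omits the Gray-code optimization of the original Ryser method):

from itertools import combinations
import numpy as np

def permanent_ryser(A):
    """Permanent of a square matrix via Ryser's inclusion-exclusion formula."""
    A = np.asarray(A)
    n = A.shape[0]
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            row_sums = A[:, cols].sum(axis=1)   # sum over the chosen columns, per row
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

U = np.array([[1, 2], [3, 4]])
print(permanent_ryser(U))   # 1*4 + 2*3 = 10

The number of subsets grows as 2^n, which is the exponential cost alluded to in the discussion above.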
The judgement that there must always be negativity in approximate efficient classical sampling from a noisy Boson Sampling distribution is, nevertheless, premature. A better sampling algorithm, devoid of negative probabilities and, hence, of computations of the probabilities of joint transitions of completely distinguishable bosons, might still be possible. I have outlined the direction for a new program which would supply only proper approximate distributions close to that of noisy Boson Sampling. To pursue this program, however, the asymptotic character theory of the symmetric group has to be developed first, which sounds like a project in its own right. Moreover, the full picture of the asymptotic computational complexity of the so-called matrix immanants is required for estimating the computational complexity of the approximating distributions.
§ ACKNOWLEDGEMENTS
This work was supported by the National Council for Scientific and Technological Development (CNPq) of Brazil, Grant 307813/2019-3.
99AA S. Aaronson and A. Arkhipov, Theory of Computing 9, 143 (2013).
QSup A. W. Harrow and A. Montanaro, Nature 549, 203 (2017).
20Ph60M H. Wang, J. Qin, X. Ding, M.-C. Chen, S. Chen, X. You, Y.-M. He, X. Jiang, L. You, Z. Wang, C. Schneider, J. J. Renema, S. Höfling, C.-Y. Lu, J.-W. Pan, Phys. Rev. Lett. 123, 250503 (2019).
QSBS A. Neville, C. Sparrow, R. Clifford, E. Johnston, P. M. Birchall, A. Montanaro, A. Laing,
Nature Physics 13, 1153 (2017).
Cliffords P. Clifford, and R. Clifford, arXiv:1706.01260 (2017).
GBS1 A. P. Lund, A. Laing, S. Rahimi-Keshari, T. Rudolph, J. L. O'Brien, and T. C. Ralph, Phys. Rev. Lett. 113, 100502 (2014).
GBS2 C. S. Hamilton, R. Kruse, L. Sansoni, S. Barkhofen, C. Silberhorn, and I. Jex, Phys. Rev. Lett. 119, 170501 (2017).
ExpGBS2 H.-S. Zhong et al, Quantum computational advantage using photons, Science 370, 1460 (2020).
R1 J. J. Renema, A. Menssen, W. R. Clements, G. Triginer, W. S. Kolthammer, and I. A. Walmsley,
Phys. Rev. Lett. 120, 220502 (2018).
LP A. Leverrier and R. García-Patrón, Quant. Inf. & Computation 15, 0489 (2015).
KK G. Kalai and G. Kindler, arXiv:1409.3093 [quant-ph].
Arkh A. Arkhipov, Phys. Rev. A 92, 062326 (2015).
JSV M. Jerrum, A. Sinclair, and E. Vigoda, Journal of the ACM 51, 671 (2004).
Moy A. E. Moylett, R. García-Patrón, J. J. Renema, and P. S. Turner. Quantum Sci. Technol., 5, 015001 (2020).
VS2014 V. S. Shchesnovich, Phys. Rev. A 89, 022333 (2014).
PartDist V. S. Shchesnovich, Phys. Rev. A 91, 013844 (2015).
Minc H. Minc, Permanents, Encyclopedia of Mathematics and Its Applications, Vol. 6 (Addison-Wesley Publ. Co., Reading, Mass., 1978).
PRL2016 V. S. Shchesnovich, Phys. Rev. Lett. 116, 123601 (2016).
VS2019 V. S. Shchesnovich, Phys. Rev. A 100, 012340 (2019).
Ryser H. Ryser, Combinatorial Mathematics, Carus Mathematical Monograph No. 14. (Wiley, 1963).
Glynn D. G. Glynn, Eur. J. of Combinatorics 31, 1887 (2010).
VS2015 V. S. Shchesnovich, Phys. Rev. A 91, 063842 (2015).
Brod S. Aaronson and D. J. Brod, Phys. Rev. A 93, 012335 (2016).
Stanley R. P. Stanley, Enumerative Combinatorics, 2nd ed., Vol. 1 (Cambridge University Press, 2011).
Weyl H. Weyl, The Classical Groups: Their Invariants and Representations (Princeton University Press; 2nd ed. 1997).
imm M. Tillmann, S.-H. Tan, S. E. Stoeckl, B. C. Sanders, H. de Guise, R. Heilmann, S. Nolte, A. Szameit, and P. Walther,
Phys. Rev. X 5, 041015 (2015).
CircMod A. E. Moylett and P. S. Turner, Phys. Rev. A 97, 062329 (2018).
MLA R. Merris, Multilinear algebra (CRC Press, 1997).
immCo1 R. Curticapean, arXiv:2102.04340 [cs.CC].
One can think of the characters as some equivalents of density matrices in the linear space of all class functions on S_N, since characters are positive-definite functions in the sense of the definition in Eq. (<ref>) and we know from Ref. <cit.> that any such function can be cast in the form of Eq. (<ref>). To formally build this equivalent view, we introduce the orthogonal basis |σ⟩, σ∈ S_N. Then an irreducible character becomes a pure state with the expansion χ_j(σ) = ⟨σ|j⟩. The scalar product in the linear space of the class functions, written for two such functions f(σ) and g(σ), becomes
⟨ f|g⟩ = 1/N!∑_σf^*(σ)g(σ).
(when χ_j(σ_1σ^-1_2) is used as the distinguishability function in Eq. (<ref>), the expression for the corresponding probability becomes the absolute value of the matrix immanant averaged over the permutations; the latter averaging restores the symmetry of the probability for identical bosons)
|
http://arxiv.org/abs/2307.03947v1 | 20230708101048 | Hyperelliptic Gorenstein curves and logarithmic differentials | [
"Luca Battistella",
"Sebastian Bozlee"
] | math.AG | [
"math.AG",
"math.GT",
"14H20 (Primary) 14H10 (Secondary)"
] |
|
http://arxiv.org/abs/2307.03985v1 | 20230708142437 | Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT | [
"Somsawat Rattanasoon",
"Eugene Semenko",
"David Mkrtichian",
"Saran Poshyachinda"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.IM"
] |
National Astronomical Research Institute of Thailand (Public Organization)
260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand
Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT
National Astronomical Research Institute of Thailand (NARIT) operates a diverse network of small telescopes installed worldwide. These telescopes serve educational and research purposes and are equipped mainly with CCD detectors for direct imaging and photometry. To extend the possible field of applications, several telescopes were fitted with commercially available medium-resolution spectrographs eShel from Shelyak. With these devices, researchers at NARIT obtained a versatile tool for stellar spectroscopy. Here we describe the current status of the available equipment and possible ways of upgrading it, and briefly introduce results achieved in the asteroseismological study of fast-rotating stars.
§ MOTIVATION
The eShel, a fibre-fed medium-resolution echelle spectrograph, has been designed and distributed for small telescopes by Shelyak Instruments (France) since 2008 <cit.>. A typical device consists of a stationary spectrograph block linked by a fibre with a 50 μm core to the Fibre Injection and Guiding Unit (FIGU) installed at the telescope side. The FIGU is also connected through a 200 μm fibre channel to the Calibration Unit comprising halogen, LED, and ThAr lamps. The spectrograph and its components are commercially available on the company's website <https://www.shelyak.com/>.
Earlier models of eShel recorded spectra within the wavelength range 430–700 nm with a resolution R > 10,000. In 2018, after an upgrade that affected many components of eShel, the working range was significantly extended.
NARIT has a distributed network of small telescopes with apertures up to 1 m. For the spectroscopy of relatively bright stars, these telescopes can optionally be equipped with eShel. At the moment, NARIT has three devices, with serial numbers 6H-115 (2010), 6H-128 (2016), and 6H-171 (2018). All spectrographs were acquired in their original configuration and thus have limited capabilities. To enable observations of fainter objects and to increase sensitivity in the blue part of the spectrum, we initiated a substantial upgrade of the device with SN 6H-171.
§ MODIFICATION AND TESTS
The improved device received a new high-OH fibre with enhanced throughput in the blue part of the spectrum, a new doublet collimator (Shelyak provided both components), a new imaging lens, and a professional-grade CCD. All components, except the fibre, are shown in Fig. <ref>.
As a detector, we use a water-cooled Andor iKon-L system based on a 2048×2048-pixel CCD array with a 13.5 μm pixel pitch. To match the plate scale to the increased pixel size, among several lenses with comparable focal lengths available on the market we chose the commercial Sony FE 135 mm F1.8 GM lens, primarily because of its outstanding optical quality. Subsequent testing of the whole assembly also showed excellent transmission of the selected lens within the required wavelength range. The imaging lens is attached to the CCD camera through a specially designed adapter with an enclosed shutter.
Technical parameters of the original and upgraded versions of eShel are summarized in Table <ref>.
The upgraded variant of the spectrograph was installed for tests in the spectrograph room of the Thai National Observatory (TNO) at Doi Inthanon (Chiang Mai, Thailand), in a temperature-controlled environment. The FIGU was mounted on the left Nasmyth port of the 1-m telescope of TNO. Tests were performed in December 2022 and January 2023 under acceptable weather conditions and were aimed at verifying the optical performance of the assembly. The observational data include a standard set of calibrations (bias, flat, ThAr) and spectra of selected stars and of the daytime sky. Two-dimensional raw FITS images were reduced using the pipeline PyYAP (<http://github.com/ich-heisse-eugene/PyYAP>), specially adapted to the new device.
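As a rough illustration of what such a standard reduction involves (a generic sketch, not the actual PyYAP code; all file names are hypothetical), the usual first steps combine the calibration frames and apply them to a science exposure:

import numpy as np
from astropy.io import fits

def median_stack(paths):
    """Median-combine a list of FITS images into one master calibration frame."""
    return np.median([fits.getdata(p).astype(float) for p in paths], axis=0)

master_bias = median_stack(["bias_01.fits", "bias_02.fits", "bias_03.fits"])
flat = median_stack(["flat_01.fits", "flat_02.fits"]) - master_bias
master_flat = flat / np.median(flat)          # normalized flat field

science = fits.getdata("star_0001.fits").astype(float)
calibrated = (science - master_bias) / master_flat
fits.writeto("star_0001_cal.fits", calibrated, overwrite=True)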
§ RESULTS
Test images taken with the upgraded device showed noticeable aberrations arising from misaligned optical elements of the spectrograph. As this problem appeared in the direction perpendicular to the dispersion, it influenced the overall throughput and the level of scattered light. Still, it did not affect the spectral resolution and transmission of the device. We therefore leave the evaluation of the total throughput and stability for future work and concentrate here primarily on these unaffected characteristics.
§.§ Transmission
Analysis of the observational data revealed significantly improved spectrum quality due to better control of aberrations and the enhanced transmission of the Sony lens. In the images, the point spread function remains nearly stable across the field of view in the 380-850 nm wavelength range. As a result, the short-wavelength limit of the working spectral range has been extended by 70 nm, from 450 nm to 380 nm. In the infrared, the working range of the current setup is limited to 900 nm. In Fig. <ref>, we show four samples of the observed spectrum of the daytime sky.
§.§ Resolving power
The resolving power of the modified eShel was evaluated by fitting the Gaussian function to the emission lines of the ThAr spectrum. This procedure is implemented as a standard step of processing in PyYAP.
Inspection of the ThAr spectra showed that the focus of the imaging camera remained stable during all observing nights. Within the spectrograph's working wavelength range, the resolving power R = λ/Δλ varied from 10,000 to 12,500, with a median R = 11,700 evaluated from 355 lines in a single image. The resolving power does not vary significantly between nights: the full width at half maximum (FWHM) of the mean ThAr line equals 3.7 pixels, close to optimal sampling.
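The measurement described above can be sketched as follows (a minimal illustration on synthetic data, not the PyYAP implementation; the line parameters are invented so that R comes out near the quoted median):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

# synthetic ThAr-like emission line near 550 nm with FWHM ~ 0.047 nm (R ~ 11700)
wave = np.linspace(549.8, 550.2, 200)
flux = gaussian(wave, 1000.0, 550.0, 0.047 / 2.3548, 50.0)
flux += np.random.default_rng(1).normal(0.0, 5.0, wave.size)

popt, _ = curve_fit(gaussian, wave, flux, p0=[900.0, 550.01, 0.03, 0.0])
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])   # FWHM from the fitted sigma
print(f"R = lambda / FWHM = {550.0 / fwhm:.0f}")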
§ SCIENTIFIC APPLICATION
A medium-resolution fibre-fed spectrograph, in combination with a 1-meter-class telescope, can be a powerful instrument for the spectroscopy of relatively bright sources. The literature has many examples of the use of eShel in stellar physics and in the physics of Solar system objects. Due to its compact design and high positional stability, this spectrograph appears even in observations of extrasolar planets. Indeed, the accuracy of the radial velocity measurements reported in <cit.>, <cit.>, and <cit.> was better than 100 m s^-1 for stars brighter than 11 mag with exposure times under one hour. Such characteristics enable the detection and observation of hot Jupiters around the brightest stars. <cit.> also gave an example of how to use eShel for the observation of pulsating stars (Cepheids).
The proposed upgrade opens new prospects for the family of small telescopes in NARIT, as we have several spectrographs which, after the improvement, can be installed at any of our telescopes. In this way, it becomes possible to move part of the scientific programme aimed at studying exoplanets, active solar-like stars, and binary and multiple stars from the main 2.4-m Thai National Telescope to smaller instruments without losing efficiency or observing time. However, the main stimulus that led us to this technical work was the possibility of using this device for asteroseismology of the brightest fast-rotating pulsating stars.
To demonstrate the efficiency of eShel in asteroseismological observations, we show in Fig. <ref> an example of non-radial pulsations discovered in a 4-magnitude fast-rotating star. A typical pattern of waves propagating across the averaged spectral profile is shown in the left panel of Fig. <ref>. The right panel shows the 2D periodogram used for the identification of the pulsation frequencies. In this example, the star was observed continuously with short exposures for more than five hours with the original version of eShel on the 1-m telescope of NARIT. The upgraded version of the spectrograph will allow us to increase the signal-to-noise ratio (SNR) of the observational data and thus expand the number of potential targets, or to increase the temporal resolution of the data with shorter exposure times while preserving the same SNR.
§.§.§ ORCID identifiers of the authors
Eugene Semenko: 0000-0002-1912-1342
David Mkrtichian: 0000-0001-5094-3910
§.§.§ Author contributions
SR, ES, and DM are responsible for formulating the project, its technical implementation, and carrying out the observations. ES and DM are responsible for data reduction and analysis. SP contributed to the project administration. All authors equally contributed to the text of the article.
§.§.§ Conflicts of interest
The authors declare no conflict of interest.
|
http://arxiv.org/abs/2307.04398v1 | 20230710075950 | The tt-geometry of permutation modules. Part II: Twisted cohomology | [
"Paul Balmer",
"Martin Gallauer"
] | math.RT | [
"math.RT",
"math.CT"
] |
The tt-geometry of permutation modules. Part II: Twisted cohomology
We continue our analysis of the derived category of permutation modules over a finite group.
In Part I (https://arxiv.org/abs/2210.08311), we identified the spectrum of this tensor-triangulated category as a set.
Here we describe its topology, by reducing the problem to elementary abelian groups and then using a twisted form of cohomology to realize the spectrum as a Dirac scheme.
2020 Mathematics Subject Classification: 20C20; 14C15, 18F99, 18G90
First-named author supported by NSF grant DMS-2153758. Second-named author partially supported by the Max-Planck Institute for Mathematics in Bonn.
The authors thank the Hausdorff Institute for Mathematics in Bonn for its hospitality during the 2022 Trimester on “Spectral Methods in Algebra, Geometry, and Topology”.
§ INTRODUCTION
Unless mentioned otherwise, G stands for a finite group and the coefficient field has positive characteristic p, with p typically dividing the order of G.
§.§ Executive summary
We study the homotopy category of bounded complexes of permutation -modules, idempotent-completed:
(G):=((G;))^♮ = ((G;)^♮).
This (G) is a tensor-triangulated category (`tt-category' for short).
We determined all the points of its tt-spectrum in Part I of this series; see <cit.>. The present paper aims at elucidating its topology.
This knowledge will give us, among other things, the classification of thick ⊗-ideals in (G).
Our main results are the following.
On the one hand, we show that the space is a colimit of the spaces ((E)) over a suitable category of elementary abelian p-groups E that appear as subquotients of G.
On the other hand, when E is elementary abelian, we describe the spectrum as a `Dirac scheme' in the sense of Hesselholt-Pstrągowski <cit.>.
Combining these results yields a description of the topological space for all G.
Let us now explain these ideas.
§.§ The colimit theorem
To discuss the tt-geometry of (G), it is instructive to keep in mind the bounded derived category of finitely generated -modules, (), which is a localization of our (G) by <cit.>.
A theorem of Serre <cit.>, famously expanded by Quillen <cit.>, implies that (()) is the colimit of the (( E)), for E running through the elementary abelian p-subgroups of G; see <cit.>. The indexing category for this colimit is an orbit category: Its morphisms keep track of conjugations and inclusions of subgroups.
In Part I, we proved that is set-theoretically partitioned into spectra of derived categories ( ()) for certain subquotients of G, namely the Weyl groups =(N_G K)/K of p-subgroups K≤ G.
It is then natural to expect a more intricate analogue of Quillen's result for the tt-category (G), in which subgroups are replaced by subquotients.
This is precisely what we prove.
The orbit category has to be replaced by a category whose objects are elementary abelian p-sections E=H/K, for p-subgroups K H≤ G.
The morphisms in keep track of conjugations, inclusions and quotients.
See <Ref>.
This allows us to formulate our reduction to elementary abelian groups:
[<Ref>]
There is a canonical homeomorphism
_E∈ ((E)).
The category has been considered before, in Bouc-Thévenaz <cit.>.
Every morphism in is the composite of three special morphisms (<Ref>)
E! E' → E”≃ E”'
where E=E'/N is a quotient of E' (sic!), where E'≤ E” is a subgroup of E” and where E”' is a G-conjugate of E”.
The tt-category (E) is contravariant in E∈ and the tt-functors corresponding to (<ref>)
(E”') \xrightarrow{≃} (E”) \xrightarrow{res} (E') \xrightarrow{Ψ^N} (E)
yield the standard conjugation isomorphism, the standard restriction functor, and the less standard modular N-fixed-points functor Ψ^N introduced in Part I.
The latter is a type of Brauer quotient that makes sense on the homotopy category of permutation modules.
Such functors Ψ^N do not exist on derived or stable categories and they distinguish our results and their proofs from the classical theory.
It is an open question whether they will also play a role in the generality of <cit.>.
§.§ Twisted cohomology
The above discussion reduces the analysis of to the case of elementary abelian p-groups E.
As often in modular representation theory, this case is far from trivial and can be viewed as the heart of the matter.
So let E be an elementary abelian p-group.
Our methods will rely on ⊗-invertible objects u_N in (E) indexed by the set (E)=N E[E:N]=p of maximal subgroups.
These objects are of the form u_N=(0→(E/N)→(E/N)→→ 0) for p odd and u_N=(0→(E/N)→→ 0) for p=2. See <Ref>.
We use these ⊗-invertibles u_N to construct a multi-graded ring
(E) = ⊕_s∈ ⊕_q∈^(E)_(E)(,(q)[s]),
where (q) is the ⊗-invertible ⊗_N∈(E) u_Nq(N) for every tuple , that we refer to as a `twist'.
Without these twists we would obtain the standard -graded endomorphism ring ^():=⊕_s∈(,[s]) of which, for ( E), is identified with the cohomology ^(E,k), but for (E) is reduced to the field and therefore rather uninteresting.
We call (E) the (permutation) twisted cohomology of E.
Some readers may appreciate the analogy with cohomology twisted by line bundles in algebraic geometry, or with Tate twists in motivic cohomology.
We can employ this multi-graded ring (E) to describe :
[<Ref>]
The space ((E)) identifies with an open subspace of the homogeneous spectrum of (E) via a canonical `comparison map'.
The comparison map in question generalizes the one of <cit.>, which landed in the homogeneous spectrum of ^() without twist.
We also describe in <Ref> the open image of this map by explicit equations in (E).
§.§ Dirac geometry
If the reader is puzzled by the multi-graded ring (E), here is another approach based on a special open cover {H}_H≤ E of indexed by the subgroups of E and introduced in <Ref>. Its key property is that over each open H all the ⊗-invertible objects u_N are trivial: (u_N)H≃[s] for some shift s∈ depending on H and N. For the trivial subgroup , the open 1 is the `cohomological open' of Part I, that corresponds to the image under (-) of the localization (E)( E). See <Ref>.
At the other end, for H=E, we show in <Ref> that the open E is the `geometric open' that corresponds to the localization of (E) given by the geometric fixed-points functor. Compare <cit.>. For E cyclic, these two opens 1 and E are all there is to consider. But as the p-rank of E grows, there is an exponentially larger collection {H}_H≤ E of open subsets interpolating between 1 and E.
This cover {H}_H≤ E allows us to use the classical comparison map of <cit.> locally. It yields a homeomorphism between each H and the homogeneous spectrum of the -graded endomorphism ring ^_H() in the localization (E)H. In compact form, this can be rephrased as follows (a Dirac scheme is to a usual scheme what a -graded ring is to a non-graded one):
[<Ref>]
The space ((E)), together with the sheaf of -graded rings obtained locally from endomorphisms of the unit, is a Dirac scheme.
§.§ Elementary abelian take-home
Let us ponder the -graded endomorphism ring of the unit ^() for a moment.
Recall from <cit.> that the spectrum of the derived category, (( E)), is homeomorphic to the homogeneous spectrum of
the cohomology ring ^(E,)≅^_( E)(),
see <cit.>.
Such a result cannot hold for (E) since the ring ^_(E)()= is too small to provide geometric information.
So we have developed two substitutes.
Our first approach is to replace the usual -graded ring ^() by a richer multi-graded ring involving twists.
This leads us to twisted cohomology (E) and to <Ref>.
The second approach is to hope that the endomorphism ring ^(), although useless globally, becomes rich enough to control the topology locally on , without leaving the world of -graded rings. This is what we achieve in <Ref> thanks to the open cover {H}_H≤ E.
As can be expected, the two proofs are intertwined.
§.§ Touching ground
Combining <Ref> ultimately describes the topological space for all G, in terms of homogeneous spectra of graded rings.
In <Ref> we improve and apply these results as follows.
In <Ref>, we explain how to go from the `local' rings ^_H() over the open H, for each subgroup H≤ E, to the `global' topology of .
In <Ref>, we give a finite presentation by generators and relations of the reduced -algebra (^_H())_, generalizing the usual one for cohomology.
In <Ref>, we express for a general finite group G as the quotient of a disjoint union of for the maximal elementary abelian p-sections E of G by maximal relations.
In <Ref>, we prove that the irreducible components of correspond to the maximal elementary abelian p-sections of G up to conjugation. It follows that the Krull dimension of is the sectional p-rank of G, the maximal rank of elementary abelian p-sections.
(For comparison, recall that for the derived category these irreducible components correspond to maximal elementary abelian p-subgroups, not sections, and the Krull dimension is the usual p-rank.)
And of course, we discuss examples. Using our techniques, we compute for some notable groups G, in particular Klein-four (<Ref>) and the dihedral group (<Ref>).
The latter will lead us to the following picture, whose precise meaning will be explained in <Ref>.
Hopefully, its beauty will entice the reader to proceed beyond the present introduction.
§.§ The toolbox
The outline of the paper should now be clear from the above discussion and the #TOCtable of content presented upfront.
It would be an oversell to pretend that Part II is a stand-alone paper.
We import several technical results from Part I, as black boxes.
The ones invoked most often are gathered in <Ref>.
Here is some standard notation used throughout the text.
We write for the tt-spectrum of a tt-category .
For an object x∈, we write (x)=∈x∈ to denote the open complement of (x).
We write p(G) for the set of p-subgroups of G and p(G)/_G for its G-orbits under G-conjugation.
For each subgroup H≤ G, its Weyl group is =(N_G H)/H where N_GH=g∈ GH^g=H.
Let us remind the reader of the essentials of Part I <cit.>.
The canonical localization Υ_G(G)() gives us an open piece :=(())≅(^(G,)) of the spectrum, that we call the `cohomological open'. We write υ_G=(Υ_G)G for the inclusion.
For every H∈p(G) we denote by Ψ^H(G)→() the modular H-fixed-points tt-functor constructed in <cit.>. It is characterized by Ψ^H((X))≃(X^H) on permutation modules and by the same formula degreewise on complexes.
We write Ψ̌^H=Υ_∘Ψ^H for the composite (G)→()(()) all the way down to the derived category of .
For every H∈p(G), the tt-prime (H)=(Ψ̌^H) is a closed point of .
It is also (H)=(^H) where ^H=^_1∘Ψ^H(G)→()→().
All closed points of are of this form by <cit.>.
We write ψ^H=(Ψ^H)(())→ for the continuous map induced by Ψ^H and ψ̌^H=(Ψ̌^H)υ(())ψ^H for its restriction to the cohomological open of .
If we need to specify the ambient group we write ψ^H G for ψ^H, etc.
We saw in <cit.> that ψ^H is a closed map, and a closed immersion if H G is normal.
Every prime ∈ is of the form =_G(H,):=ψ̌^H() for a p-subgroup H≤ G and a point ∈ in the cohomological open of the Weyl group of H, in a unique way up to G-conjugation; see <cit.>.
Hence the pieces G(H):=ψ̌() yield a partition =⊔_H∈p(G)/_GG(H) into relatively open strata G(H), homeomorphic to .
The crux of the problem is to understand how these strata G(H)≃ attach together topologically, to build the space .
The authors thank Ivo Dell'Ambrogio and Beren Sanders for comments and suggestions.
§ THE COLIMIT THEOREM
To reduce the determination of to the elementary abelian case, we invoke the category of elementary abelian p-sections of a finite group G.
Recall that a section of G is a pair (H,K) of subgroups with K normal in H.
We denote by the category whose objects are pairs (H,K) where K H are p-subgroups of G such that H/K is elementary abelian.
Morphisms (H,K)→ (H',K') are defined to be elements g∈ G such that
H^g∩ K'≤ K^g≤ H^g≤ H'.
Composition of morphisms is defined by multiplication in G.
Let us highlight three types of morphisms in .
*
We have an isomorphism g (H,K) (H^g,K^g) in for every g∈ G.
Intuitively, we can think of this as the group isomorphism c_g H/K H^g/K^g.
*
For every object (H',K') in and every subgroup H≤ H', we have a well-defined object (H,H∩ K') and the morphism 1 (H,H∩ K')→ (H',K').
Intuitively, we think of it as the inclusion H/(H∩ K') H'/K' of a subgroup.
*
For (H,K) in and a subgroup L̅=L/K of H/K, for K≤ L≤ H, there is another morphism in associated to , namely 1(H,L)→ (H,K).
This one does not correspond to an intuitive group homomorphism H/L H/K, as K is smaller than L.
Instead, H/L is the quotient of H/K by L̅ H/K.
This last morphism will be responsible for the modular L̅-fixed-points functor.
Every morphism g (H,K)→ (H',K') in is a composition of three morphisms of the above types <ref>, <ref> and <ref> in the following canonical way:
(H,K) ⟶ (H, H∩^gK') ⟶ (^gH', ^gK') ⟶ (H',K')
where the first two are given by 1∈ G and the last is given by g.
In particular, the rank of the elementary abelian group H/K increases or stays the same along any morphism (H,K)→ (H',K') in this category, as this is true with <ref>, <ref> and <ref>.
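To make the definition above concrete, here is a small self-contained sketch (an illustration only, not taken from the text; the choice of the dihedral group of order 8 and all function names are ad hoc). It enumerates, by brute force, the objects (H,K) of this category for a 2-group, using the fact that H/K is elementary abelian for p=2 exactly when all squares and commutators of H lie in K:

from itertools import combinations

def compose(a, b):                  # (a*b)(x) = a(b(x)) for permutations as tuples
    return tuple(a[b[x]] for x in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

identity = (0, 1, 2, 3)
r, s = (1, 2, 3, 0), (3, 2, 1, 0)   # a rotation and a reflection generate D_8

G = {identity, r, s}
while True:                          # close under multiplication
    bigger = {compose(a, b) for a in G for b in G} | G
    if bigger == G:
        break
    G = bigger

def is_subgroup(S):
    return all(compose(a, inverse(b)) in S for a in S for b in S)

elements = sorted(G)
candidates = {frozenset(c) | {identity}
              for n in range(len(elements) + 1)
              for c in combinations(elements, n)}
subgroups = sorted((S for S in candidates if is_subgroup(S)), key=len)

def is_normal_in(K, H):
    return all(compose(compose(h, k), inverse(h)) in K for h in H for k in K)

def quotient_is_elementary_abelian(H, K):
    squares = all(compose(h, h) in K for h in H)
    commutators = all(compose(compose(a, b), compose(inverse(a), inverse(b))) in K
                      for a in H for b in H)
    return squares and commutators

sections = [(H, K) for H in subgroups for K in subgroups
            if K <= H and is_normal_in(K, H) and quotient_is_elementary_abelian(H, K)]
print(len(subgroups), "subgroups,", len(sections), "elementary abelian 2-sections")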
To every object (H,K) in , we associate the tt-category (H/K)=((H/K;)).
For every morphism g (H,K)→ (H',K') in , we set K̅=K/(H∩^gK') and we define a functor of tt-categories:
(g) : (H'/K') \xrightarrow{c_g^*} (^gH'/^gK') \xrightarrow{res} (H/(H∩^gK')) \xrightarrow{Ψ^K̅} (H/K)
using that H/(H∩^gK') is a subgroup of ^gH'/^gK' for the restriction, and using that (H/(H∩^gK'))/K̅=H/K for the modular fixed-points functor Ψ^K̅.
It follows from <cit.> that (-) is a contravariant (pseudo) functor on with values in tt-categories.
We can compose this with (-), which incidentally makes the coherence of the 2-isomorphisms accompagnying (<ref>) irrelevant, and obtain a covariant functor from to topological spaces.
Let us compare this diagram of spaces (and its colimit) with the space .
For each (H,K)∈, we have a tt-functor
(G)^G_H(H)Ψ^K(H/K)
which yields a natural transformation from the constant functor (H,K)↦(G) to the functor → of (<ref>).
The above Ψ^K is Ψ^K H.
Since H≤ N_G K, the tt-functor (<ref>) is also ^_H/K∘Ψ^K G(G)→()→(H/K).
Applying (-) to this observation, we obtain a commutative square of spaces: the top map is ψ^K H: ((H/K))→((H)), the right vertical map is ρ_H: ((H))→((G)), the left vertical map is ρ_H/K: ((H/K))→((N_G K/K)), and the bottom map is ψ^K G: ((N_G K/K))→((G)); we baptize its diagonal φ_(H,K): ((H/K))→((G)).
In summary, we obtain a continuous map
φ_(H,K)∈((H/K))→
whose component φ_(H,K) at (H,K) is the diagonal map in (<ref>).
* Each of the maps ((g))((H/K))→((H'/K')) in the colimit diagram (<ref>) is a closed immersion.
* Each of the components φ_(H,K)((H/K))→ of (<ref>) is closed and preserves the dimension of points (the Krull dimension of their closure).
These statements follow from two facts, see <Ref>:
When N G is normal the map ψ^N((G/N))((G)) is a closed immersion.
When H≤ G is any subgroup, the map ρ_H((H))→((G)) is closed, hence lifts specializations, and it moreover satisfies `Incomparability' by <cit.>.
We are now ready to prove <Ref>:
For any finite group G, the map φ in (<ref>) is a homeomorphism.
Each component φ_(H,K) is a closed map and thus φ is a closed map.
For surjectivity, by <Ref>, we know that is covered by the subsets ψ^K(), over all p-subgroups K≤ G. Hence it suffices to know that the (ρ_E) cover =(()) as E≤ runs through all elementary p-subgroups. (Such an E must be of the form H/K for an object (H,K)∈.) This holds by a classical result of Quillen <cit.>; see <cit.>.
The key point is injectivity.
Take ∈((H/K)) and '∈((H'/K')) with same image in .
Write =_H/K(L/K,) for suitable arguments (K≤ L≤ H, ∈H/L) and note that the map induced by 1(H,L)→ (H,K) in sends _H/L(1,)∈((H/L)) to . So we may assume L=K.
By <cit.>, the image of =_H/K(1,) in is _G(K,ρ̅()) where ρ̅H/K→ is induced by restriction.
Similarly, we may assume '=_H'/K'(1,') for '∈H'/K' and we have _G(K,ρ̅())=_G(K',ρ̅'(')) in and need to show that and ' are identified in the colimit (<ref>).
By <cit.>, the relation _G(K,ρ̅())=_G(K',ρ̅'(')) can only hold because of G-conjugation, meaning that there exists g∈ G such that K'=K^g and ρ̅'(')=ρ̅()^g in GK'.
Using the map g(H,K)→ (H^g,K^g) in we may replace H,K, by H^g,K^g,^g and reduce to the case K=K'. In other words, we have two points =_H/K(1,)∈((H/K)) and '=_H'/K(1,')∈((H'/K)) corresponding to two p-subgroups H,H'≤ G containing the same normal subgroup K and two cohomological primes ∈H/K and '∈H'/K such that ρ̅()=ρ̅'(') in under the maps ρ̅ and ρ̅' induced by restriction along H/K≤ and H'/K≤ respectively.
If we let G̅==(N_G K)/K, we have two elementary abelian p-subgroups H̅=H/K and H̅'=H'/K of G̅, each with a point in their cohomological open, ∈H̅ and '∈H̅', and those two points have the same image in the cohomological open G̅ of the `ambient' group G̅. By Quillen <cit.> (or <cit.>) again, we know that this coalescence must happen because of an element g̅∈G̅, that is, a g∈ N_G K, and a prime ∈H̅∩^gH̅' that maps to and to ' under the maps H̅∩^gH̅'̅→H̅ and H̅∩^gH̅'̅→H̅'̅ respectively.
But our category contains all such conjugation-inclusion morphisms coming from the orbit category of G. Specifically, we have two morphisms
1(H∩^gH',K)→ (H,K) and g(H∩^gH',K)→ (H',K) in , under which the point _(H∩^gH')/K(1,) maps to _H/K(1,)= and _H'/K(1,')=' respectively. This shows that =' in the domain of (<ref>) as required.
By <cit.>, the space is noetherian. Hence the topology is entirely characterized by the inclusion of primes.
Now, suppose that is the image under φ_(H,K)→ of some '∈ for an elementary abelian subquotient E=H/K corresponding to a section (H,K)∈.
Then the only way for another prime ∈ to belong to the closure of is to be itself the image of some point ' of in the closure of '.
This follows from <Ref>.
In other words, the question of inclusion of primes can also be reduced to the elementary abelian case.
§ INVERTIBLE OBJECTS AND TWISTED COHOMOLOGY
In this section we introduce a graded ring whose homogeneous spectrum helps us understand the topology on ((G)), at least for G elementary abelian.
This graded ring, called the twisted cohomology ring (<Ref>), consists of morphisms between and certain invertible objects.
It all starts in the cyclic case.
Let C_p=σ|σ^p=1 be the cyclic group of prime order p, with a chosen generator.
We write C_p=[σ]/(σ^p-1) as [τ]/τ^p for τ=σ-1.
Then the coaugmentation and augmentation maps become:
η:1↦τ^p-1 C_pand: C_pτ↦ 0.
For p odd, we denote the first terms of the `standard' minimal resolution of by
u_p=(0→ C_pτ C_p→ 0).
We view this in (C_p) with in homological degree zero.
One can verify directly that u_p is ⊗-invertible, with u_p^{-1}=u_p^∨≅(0→η C_pτ C_p→ 0).
Alternatively, one can use the conservative pair of functors ^H(C_p)→() for H∈{C_p , 1}, corresponding to the only closed points (C_p) and (1) of ((C_p)). Those functors map u_p to the ⊗-invertibles and [2] in (), respectively.
For p=2, we have a similar but shorter ⊗-invertible object in (C_2)
u_2=(0→ C_2→ 0)
again with in degree zero.
To avoid constantly distinguishing cases, we abbreviate
2':={[ 2 if p>2; 1 if p=2. ].
For any finite group G and any index-p normal subgroup N, we can inflate the ⊗-invertible u_p of <Ref> along π G G/N≃ C_p to a ⊗-invertible in (G).
Let N G be a normal subgroup of index p. We define
u_N:={[ ⋯→ 0→ (G/N)τ (G/N)→ 0 →⋯ if p is odd; ⋯→ 0 → 0 → (G/N) → 0 →⋯ if p=2 ].-1em
with in degree zero. We also define two morphisms
a_N→ u_N
and
b_N→ u_N[-2']
as follows. The morphism a_N is the identity in degree zero, independently of p:
@C=1em@R=1em[d]_-a_N@[r]|-= ⋯[r] 0 [r] [d] [r] [d]^-1 0 [r] ⋯
u_N @[r]|-= ⋯[r] (G/N) [r]_- [r] 0 [r] ⋯
The morphism b_N is given by η→(G/N) in degree zero, as follows:
@C=1em@R=1em[d]_-b_N@[r]|-= [r] [d]^-η 0 [d]
u_N[-1] @[r]|-= (G/N) [r]_- and@C=1em@R=1em[d]_-b_N@[r]|-= [r] [d]^-η 0 [d] [r] 0 [d]
u_N[-2] @[r]|-= (G/N) [r]_-τ (G/N) [r]_-
where the target u_N is shifted once to the right for p=2 (as in the left-hand diagram above) and shifted twice for p>2 (as in the right-hand diagram).
When p is odd there is furthermore a third morphism c_N→ u_N[-1], that is defined to be η→(G/N) in degree zero.
This c_N will play a lesser role.
In statements made for all primes p, simply ignore c_N in the case p=2 (or think c_N=0).
Here is an example of such a statement, whose meaning should now be clear: The morphisms a_N and b_N, and c_N (for p odd), are inflated from G/N.
Technically, u_N depends not only on an index-p subgroup N G but also on the choice of a generator of G/N, to identify G/N with C_p. If one needs to make this distinction, one can write u_π for a chosen epimorphism π G C_p. This does not change the isomorphism type of u_N, namely (π)=(π') implies u_π≅ u_π'.
(We expand on this topic in <Ref>.)
Let N G be a normal subgroup of index p and let q≥ 1. Then there is a canonical isomorphism in (G)
u_Nq≅(⋯ 0→(G/N) τ(G/N) τ^p-1⋯τ(G/N) → 0⋯ )
where the first (G/N) sits in homological degree 2'· q and sits in degree 0.
It is an exercise over the cyclic group C_p.
Then inflate along G G/N.
The morphism b_N→ u_N[-2'] of <Ref> is a quasi-isomorphism and the fraction
ζ_N:=(b_N[2'])∘ a_N→ u_N [2']
is a well-known morphism ζ_N∈_()(,[2'])=^2'(G,) in the derived category (). For G elementary abelian, these ζ_N generate the cohomology -algebra ^(G,), on the nose for p=2 and modulo nilpotents for p odd.
We sometimes write ζ^+_N=a_N/b_N for ζ_N in order to distinguish it from the inverse fraction ζ^-_N:=b_N/a_N that exists wherever a_N is inverted.
Of course, when both a_N and b_N are inverted, we have ζ^-_N=(ζ^+_N)^{-1}=ζ_N^{-1}.
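For orientation, we recall the standard structure of this cohomology ring for an elementary abelian group E of rank r (a classical background fact quoted here for convenience, writing k for the base field and x_i, y_i for the usual generators):

H^\bullet(E,k) \;\cong\;
\begin{cases}
  k[x_1,\dots,x_r] & (p=2,\ \deg x_i=1),\\
  \Lambda(x_1,\dots,x_r)\otimes k[y_1,\dots,y_r] & (p \text{ odd},\ \deg x_i=1,\ \deg y_i=2).
\end{cases}

The classes ζ_N live in degree 2' and, as N ranges over the maximal subgroups, they span the degree-2' part coming from the polynomial generators; this is the sense in which they generate the whole ring for p=2 and generate it modulo the nilpotent exterior classes for p odd.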
The switch of factors (12) u_N⊗ u_N≅ u_N⊗ u_N can be computed directly to be the identity (over C_p, then inflate).
Alternatively, it must be multiplication by a square-one element of ()=^×, hence ±1.
One can then apply the tensor-functor Ψ^G(G)→(), under which u_N goes to , to rule out -1.
It follows that for p odd, u_N[-1] has switch -1, and consequently every morphism → u_N[-1] must square to zero. In particular c_N⊗ c_N=0.
This nilpotence explains why c_N will play no significant role in the topology.
We can describe the image under modular fixed-points functors of the ⊗-invertible objects u_N and of the morphisms a_N and b_N. (We leave c_N as an exercise.)
Let H G be a normal p-subgroup. Then for every index-p normal subgroup N G, we have in (G/H)
Ψ^H(u_N)≅{[ u_N/H if H≤ N; if H≰N ].
and under this identification
Ψ^H(a_N)={[ a_N/H if H≤ N; 1_ if H≰N ].
and
Ψ^H(b_N)={[ b_N/H if H≤ N; 0 if H≰N. ].
Direct from <Ref> and Ψ^H((X))≅(X^H) for X=G/N.
For restriction, there is an analogous pattern but with the cases `swapped'.
Let H≤ G be a subgroup. Then for every index-p normal subgroup N G, we have in (H)
^G_H(u_N)≅{[ [2'] if H≤ N; u_N∩ H if H≰N ].
and under this identification
^G_H(a_N)={[ 0 if H≤ N; a_N∩ H if H≰N ].
and
^G_H(b_N)={[ 1_ if H≤ N; b_N∩ H if H≰N. ].
Direct from <Ref> and the Mackey formula for ^G_H((G/N)).
We can combine the above two propositions and handle Ψ^H for non-normal H, since by definition Ψ^H G=Ψ^H N_G H∘^G_N_G H.
Here is an application of this.
Let H≤ G be a p-subgroup and N G of index p. Recall the `residue' tt-functor ^H=_1∘Ψ^H(G)→() at the closed point (H).
* If H≰N then ^H(a_N) is an isomorphism.
* If H≤ N then ^H(b_N) is an isomorphism.
We apply <Ref> for N_G H≤ G and <Ref> for H N_G H.
For (a), H≰N forces N_G H≰N and H≰N∩ N_G H. Hence Ψ^H(a_N)=Ψ^H N_G H_N_G H(a_N)=Ψ^H N_G H(a_N∩ N_G H)=1_ is an isomorphism.
Similarly for (b), if N_G H≤ N then Ψ^H(b_N) is an isomorphism and if N_G H≰N it is the quasi-isomorphism b_(N∩ N_G H)/H.
Thus ^H(b_N) is an isomorphism in ().
Let us now prove that the morphisms a_N and b_N, and c_N (for p odd), generate all morphisms from the unit to tensor products of u_N's.
This is a critical fact.
Let N_1,…,N_ℓ be index-p normal subgroups of G and abbreviate u_i:=u_N_i for i=1,…,ℓ and similarly a_i:=a_N_i and b_i:=b_N_i and c_i:=c_N_i (see <Ref>).
Let q_1,…,q_ℓ∈ be non-negative integers and s∈. Then every morphism f→ u_1q_1⊗⋯⊗ u_ℓq_ℓ[s] in (G) is a -linear combination of tensor products of (a `polynomial' in) the morphisms a_i and b_i, and c_i (for p odd).
We proceed by induction on ℓ. The case ℓ=0 is just _(G)^()=. Suppose ℓ≥ 1 and the result known for ℓ-1.
Up to reducing to ℓ-1, we can assume that the N_1,…,N_ℓ are all distinct.
Set for readability
v:=u_1q_1⊗⋯⊗ u_ℓ-1q_ℓ-1[s],
N:=N_ℓ,
u:=u_ℓ=u_Nand
q:=q_ℓ
so that f is a morphism of the form
f→ v⊗ uq .
We then proceed by induction on q≥ 0. We assume the result known for q-1 (the case q=0 holds by induction hypothesis on ℓ).
The proof will now depend on p.
Suppose first that p=2. Consider the exact triangle in (G)
@C=.9em@R=1.5em
u(q-1)@[r]|-=[d]_-a_N ⋯ 0 [r]
0 [r] [d]
(G/N) [r]^-τ@=[d]
⋯[r]^-τ (G/N) [r]^-@=[d]
[r] @=[d]
0 ⋯
uq@[r]|-=[d]_- ⋯ 0 [r]
(G/N) [r]^-τ@=[d]
(G/N) [r]^-τ[d]
⋯[r]^-τ (G/N) [r]^-[d]
[r] [d]
0 ⋯
(G/N)[q] @[r]|-= ⋯ 0 [r]
(G/N) [r]
0 [r]
⋯[r]
0 [r]
0 [r]
0 ⋯-1em
where is in degree zero. (See <Ref>.)
Tensoring the above triangle with v and applying _G(,-):=_(G)(,-) we get an exact sequence
@C=1.5em@R=.4em_G(,v⊗ u(q-1))[r]^-· a_N _G(,v⊗ uq)[r] _G(,v⊗(G/N)[q])@=[d]
f @[r]|-⟼@[u]|-90∈ -1em f'∈_N(,^G_N(v)[q])
-.8em
Our morphism f belongs to the middle group. By adjunction, the right-hand term is _N(,^G_N(v)[q]). Now since all N_1,…,N_ℓ=N are distinct, we can apply <Ref> to compute ^G_N(v) and we know by induction hypothesis (on ℓ) that the image f' of our f in this group _N(,^G_N(v)[q]) is a -linear combination of tensor products of a_N_i∩ N and b_N_j∩ N for 1≤ i,j≤ℓ-1, performed over the group N. We can perform the `same' -linear combination of tensor products of a_i's and b_j's over the group G, thus defining a morphism f”∈_G(,v[q]). We can now multiply f” with b_Nq→ u_Nq[-q] to obtain a morphism f” b_N^q in the same group _G(,v⊗ uq) that contains f. Direct computation shows that the image of this f”b_N^q in _N(,_N(v)[s]) is also equal to f'. The key point is that b_Nq is simply η→(G/N) in degree q and this η is also the unit of the ^G_N_N^G adjunction.
In other words, the difference f-f” b_N^q comes from the left-hand group _G(,v⊗ u(q-1)) in the exact sequence (<ref>), reading
f=f”b_N^q+f”' a_N
for some f”'∈_G(,v⊗ u(q-1)). By induction hypothesis (on q), f”' is a polynomial in a_i's and b_j's. Since f” also was such a polynomial, so is f.
The proof for p odd follows a similar pattern of induction on q, with one complication. The cone of the canonical map a_N u_N(q-1)→ u_Nq is not simply (G/N) in a single degree as in (<ref>) but rather the complex
C:=(⋯→ 0→(G/N) τ (G/N)→ 0→⋯)
with (G/N) in two consecutive degrees 2q and 2q-1. So the exact sequence
@C=1.5em@R=.4em_G(,v⊗ u(q-1))[r]^-· a_N _G(,v⊗ uq)[r] _G(,v⊗ C)
has a more complicated third term than the one of (<ref>). That third term _G(,v⊗ C) itself fits in its own exact sequence associated to the exact triangle (G/N)[2q-1]τ(G/N)[2q-1]→ C →(G/N)[2q]. Each of the terms _G(,v⊗(G/N)[∗])≅_N(,^G_N(v)[∗]) can be computed as before, by adjunction.
The image of f in _N(,^G_N(v)[2q]) can again be lifted to a polynomial f'b^q_N:→ v⊗ uq so that the image of the difference f-f'b^q_N in _G(,v⊗ C) comes from some element in _N(,^G_N(v)[2q-1]).
That element may be lifted to a polynomial f”b^q-1_Nc_N:→ v⊗ uq, and we obtain
f=f'b_N^q+f”b_N^q-1c_N+f”'a_N
for some f”'∈_G(,v⊗ u(q-1)) similarly as before.
We can now assemble all the hom groups of <Ref> into a big graded ring.
We denote the set of all index-p normal subgroups of G by
=(G):=N G [G:N]=p.
Let ^=^(G)={q(G)→} be the monoid of twists, tuples of non-negative integers indexed by this finite set.
Consider the (×^)-graded ring
(G)=(G;) := ⊕_s∈ ⊕_q∈^_(G)( , ⊗_N∈u_Nq(N)[s]).
Its multiplication is induced by the tensor product in (G).
We call (G) the (permutation) twisted cohomology ring of G.
It is convenient to simply write
(q)=⊗_N∈(u_N)q(N)
for every twist q∈^(G) and thus abbreviate ^s,q(G)=(,(q)[s]).
The graded ring (G) is graded-commutative by using only the parity of the shift, not the twist; see <Ref>.
In other words, we have
h_1· h_2= (-1)^s_1· s_2 h_2· h_1whenh_i∈^s_i,q_i(G).
For instance, for p odd, when dealing with the morphisms a_N and b_N, which land in even shifts of the object u_N, we do not have to worry too much about the order.
This explains the `unordered' notation ζ_N=a_N/b_N used in <Ref>.
The critical <Ref> gives the main property of this construction:
The twisted cohomology ring (G) of <Ref> is a -algebra generated by the finitely many elements a_N and b_N, and c_N (for p odd), of <Ref>, over all N G of index p. In particular (G) is noetherian.
The reader can verify by hand that (C_2)=[a_N,b_N], without relations, and that (C_p)=[a_N,b_N,c_N]/c_N^2 for p odd, where in both cases N=1 is the only N∈(C_p).
This example is deceptive, for the {a_N,b_N,c_N}_N∈ usually satisfy some relations, as the reader can already check for G=C_2× C_2 for instance.
We systematically discuss these relations in <Ref>.
We conclude this section with some commentary.
The name `cohomology' in <Ref> is used in the loose sense of a graded endomorphism ring of the unit in a tensor-triangulated category.
However, since we are using the tt-category (G) and not (), the ring (G) is quite different from ^(G,) in general.
In fact, (G) could even be rather dull.
For instance, if G is a non-cyclic simple group then (G)=∅ and (G)=.
We will make serious use of (G) in <Ref> to describe for G elementary abelian.
In that case, ^(G,) is a localization of (G).
See <Ref>.
By <Ref>, there is no `collision' in the twists: If there is an isomorphism (q)[s]≃(q')[s'] in (G) then we must have q=q' in ^ and s=s' in .
The latter is clear from ^G(u_N)≅ in (), independently of N. We then conclude from ^N((q))≃[2'q(N)] in (), for each N∈.
We only use positive twists q(N) in (<ref>).
The reader can verify that already for G=C_p cyclic, the ^2-graded ring ⊕_(s,q)∈^2(,u_pq[s]) is not noetherian.
See for instance <cit.> for p=2.
However, negatively twisted elements tend to be nilpotent.
So the ×^-graded version of (G) may yield the same topological information as our ×^-graded one.
We have not pushed this investigation of negative twists, as it brought no benefit to our analysis.
§ AN OPEN COVER OF THE SPECTRUM
In this section, we extract some topological information about from the twisted cohomology ring (G) of <Ref> and the maps a_N and b_N of <Ref>, associated to every index-p normal subgroup N in =(G).
Recall from <cit.> that we can use tensor-induction to associate to every subgroup H≤ G a Koszul object [G]H=_H^G(0→1→ 0). It generates in (G) the tt-ideal (^G_H), see <cit.>:
[G]H_(G)=(^G_H(G)→(H)).
Let N G be a normal subgroup of index p. Then we have:
*
In (G), the object (a_N) generates the same thick subcategory as (G/N). In particular, ((a_N))=((G/N)).
*
In (G), the object (b_N) generates the same thick tensor-ideal as [G]N. In particular, ((b_N))=([G]N)=((^G_N)).
For p=2, we have (a_N)=(G/N)[1] so the first case is clear. For p odd, we have (a_N)[-1]≃(0→(G/N)τ(G/N)→ 0)=(τ(G/N)). Hence (a_N)∈((G/N)). Conversely, since τ^p=0, the octahedron axiom inductively shows that (G/N)∈((τ(G/N))). This settles <ref>.
For <ref>, the complex s:=(b_N)[2'] becomes split exact when restricted to N since it is inflated from an exact complex on G/N. In degree one we have s_1=(G/N), whereas s_0=. Hence <cit.> tells us that the complex s generates the tt-ideal (^G_N(G)→(N)). We conclude by (<ref>).
Let N G be of index p. Then (a_N)⊗(b_N)=0.
By <Ref> it suffices to show (G/N)⊗[G]N=0. By Frobenius, this follows from ^G_N([G]N)=0, which holds by (<ref>).
We now relate the spectrum of (G) to the homogeneous spectrum of (G), in the spirit of <cit.>. The comparison map of <cit.> is denoted by ρ^ but we prefer a more descriptive notation (and here, the letter ρ is reserved for ()).
There is a continuous `comparison' map
_G((G))
mapping a tt-prime to the ideal generated by those homogeneous f∈(G) whose cone does not belong to .
It is characterized by the fact that for all f
_G(Z(f))=((f))=f is not invertible in (G)/
where Z(f)=f∈ is the closed subset of ((G)) defined by f.
The fact that the homogeneous ideal _G() is prime comes from <cit.>.
Equation (<ref>) is essentially a reformulation of the definition.
The usual notation for Z(f) would be V(f), and D(f) for its open complement. Here, we already use V for G and for G(H), and the letter D is certainly overworked in our trade.
So we stick to Z(f) and Z(f)^c.
In view of <Ref>, for any f, the open subset of
(f):=((f))=f is invertible in (G)/
is the preimage by _G→((G)) of the principal open Z(f)^c=f∉.
It is the open locus of where f is invertible.
In particular, our distinguished elements a_N and b_N (see <Ref>) give us the following open subsets of , for every N∈(G):
(a_N)=_G(Z(a_N)^c), the open where a_N is invertible, and
(b_N)=_G(Z(b_N)^c), the open where b_N is invertible.
Since (c_N)^2=0 by <Ref>, we do not have much use for (c_N)=∅.
With notation as above, we have for every N G of index p
(a_N)∪(b_N)=.
We compute (a_N)∪(b_N)=((a_N))∪((b_N))=((a_N)⊗(b_N))=(0_(E))=, using <Ref>.
Every object u_N is not only ⊗-invertible in (G) but actually locally trivial over , which is a stronger property in general tt-geometry.
Indeed, <Ref> tells us that around each point of , either u_N becomes isomorphic to via a_N, or u_N becomes isomorphic to [2'] via b_N.
This holds for one invertible u_N.
We now construct a fine enough open cover of such that every u_N is trivialized on each open.
Let H≤ G be a p-subgroup. Define an open of by
H=[G]H:=⋂_N∈H≰N(a_N) ∩⋂_N∈H≤ N(b_N).
Then the closed point (H)∈ belongs to this open H. Consequently {H}_H∈p(G) is an open cover of .
The point (H)=(^H) belongs to H by <Ref>.
It follows by general tt-geometry that {H}_H is a cover:
Let ∈; there exists a closed point in , that is, some (H) that admits as a generalization; but then (H)∈H forces ∈H since open subsets are generalization-closed.
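To illustrate the definition, let us unwind it in the smallest non-cyclic case: for the Klein-four group E=C_2× C_2 with its three index-2 subgroups N_0, N_1, N_∞ (in the notation of the examples further below), the cover consists of the four opens
[E]1 = (b_N_0)∩(b_N_1)∩(b_N_∞),
[E]N_i = (b_N_i)∩(a_N_j)∩(a_N_k)   for {i,j,k}={0,1,∞},
[E]E = (a_N_0)∩(a_N_1)∩(a_N_∞).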
For a p-group, we now discuss H at the two extremes H=1 and H=G.
Let G be a p-group and F=F(G)=∩_N∈(G)N be its Frattini subgroup. So F G and G/F is the largest elementary abelian quotient of G.
Let G be a p-group with Frattini subgroup F.
The closed complement of the open [G]1 is the support of [G]F, the closed support of the tt-ideal (^G_F) of (G).
In particular, if G is elementary abelian then [G]1 is equal to the cohomological open G=(( G))≅(^(G,)).
By definition, 1=∩_N∈(b_N). By <Ref>, its closed complement is ∪_N∈([G]N).
By <cit.>, for every K≤ G
([G]K)=(H,)H≰_G K
(taking all possible ∈). It follows that our closed complement of 1 is
∪_N∈(G)([G]N) (<ref>)(H,)∃ N∈(G) such that H≰_G N
=(H,)H≰∩_N∈(G)N
=(H,)H≰F(<ref>)([G]F).
The statement with (^G_F) then follows from (<ref>). Finally, if G is elementary abelian then F=1 and (^G_1)=(G) is the tt-ideal of acyclic complexes. The complement of its support is ((G)/(G))=(())=G.
In the above proof, we showed that ∪_N∈(N)=(F) thanks to the fact that ∩_N∈N=F. So the very same argument gives us:
Let G be a p-group and let N_1,…,N_r∈(G) be some index-p subgroups such that N_1∩⋯∩ N_r is the Frattini subgroup F.
(This can be realized with r equal to the p-rank of G/F.)
Then [G]1=∩_i=1^r(b_N_i) already.
Hence if ∈(b_N_i) for all i=1,…,r then ∈(b_N) for all N∈(G).
Let us turn to the open [G]H for the p-subgroup at the other end: H=G.
Let G be a p-group. Then the complement of the open [G]G is the union of the images of the spectra ((H)) under the maps ρ_H=(_H), over all the proper subgroups H≨ G.
By <Ref>, the closed complement of G=∩_N∈(a_N) equals ∪_N∈((G/N)).
For every H≤ G, we have ((G/H))=(ρ_H); see <cit.> if necessary.
This gives the result because restriction to any proper subgroup factors via some index-p subgroup, since G is a p-group.
Let G be a p-group.
This open complement G of ∪_H≨ G(ρ_H) could be called the `geometric open'.
Indeed, the localization functor
Φ^G(G)(G)/(G/H)| H≨ G
corresponding to G is analogous to the way the geometric fixed-points functor is constructed in topology. For more on this topic, see <cit.>.
For G not a p-group, the open G is not defined (we assume H∈p(G) in <Ref>) and the `geometric open' is void anyway as we have (ρ_P)= for any p-Sylow P≨ G.
The strategy to analyze non-p-groups is to first descend to the p-Sylow, using that _P is faithful.
We saw in <Ref> that the complement of G is covered by the images of the closed maps ρ_H=(_H) for H≨ G. We could wonder whether another closed map into covers G itself. The answer is the closed immersion ψ^F((G/F))((G)) induced by the modular fixed-points functor Ψ^F with respect to the Frattini subgroup F G.
This can be deduced from the results of <Ref> or verified directly, as we now outline.
Indeed, every prime =_G(K,) for K≤ G and ∈ comes by Quillen from some elementary abelian subgroup E=H/K≤=(N_G K)/K.
One verifies that unless N_G K=G and H=G, the prime belongs to the image of ρ_G' for a proper subgroup G' of G. Thus if belongs to G, we must have E=H/K=G/K for K G. Such a K must contain the Frattini and the result follows.
§ TWISTED COHOMOLOGY UNDER TT-FUNCTORS
Still for a general finite group G, we gather some properties of the twisted cohomology ring (G) introduced in <Ref>.
We describe its behavior under specific tt-functors, namely restriction, modular fixed-points and localization onto the open subsets [G]H.
Recall that =(G) denotes the set of normal subgroups N of G of index p.
Twisted cohomology (G) is graded over a monoid of the form .
The ring homomorphisms induced by the above tt-functors will be homogeneous with respect to a certain homomorphism γ on the corresponding grading monoids, meaning of course that the image of a homogeneous element of degree (s,q) is homogeneous of degree γ(s,q).
The `shift' part (in ) is rather straightforward.
The `twist' part (in ^ℓ) will depend on the effect of said tt-functors on the u_N.
Let us start with modular fixed-points, as they are relatively easy.
Let H G be a normal subgroup.
By <Ref>, the tt-functor Ψ^H(G)→(G/H) maps every u_N for N≱H to , whereas it maps u_N for N≥ H to u_N/H.
This defines a homomorphism of grading monoids
γ=γ_Ψ^H×^(G)→×^(G/H)
given by γ(s,q)=(s,q̅) where q̅(N/H)=q(N) for every N/H∈(G/H). In other words, q↦q̅ is simply restriction ^(G)^(G/H) along the canonical inclusion (G/H)(G).
By <Ref>, for every twist q∈^(G), we have a canonical isomorphism Ψ^H((q))≅(q̅).
Therefore the modular fixed-points functor Ψ^H defines a ring homomorphism also denoted
Ψ^H: (G) → (G/H),   (f: →(q)[s]) ⟼ (Ψ^H(f): →Ψ^H((q)[s]) ≅(q̅)[s]),
which is homogeneous with respect to γ_Ψ^H in (<ref>).
Restriction is a little more subtle, as some twists pull-back to non-trivial shifts.
Let α G'→ G be a group homomorphism. Restriction along α defines a tt-functor α^*=^α_G'∘^G_α(G)→(α)→(G').
Combining <Ref> for ^G_α with the obvious behavior of the u_N under inflation (by construction), we see that α^*(u_N)≅[2'] if N≥α and α^*(u_N)≅ u_α(N) if N≱α (which is equivalent to α(N)∈(G')).
Hence for every (s,q)∈×^(G) we have a canonical isomorphism α^*((q)[s])≅(q')[s'] where s'=s+2'∑_N≥αq(N) and q'(G')→ is defined for every N'∈(G') as
q'(N')=∑_N∈(G) s.t. α(N)=N'q(N).
(In particular q'(N')=0 if N'≱(α).)
These formulas define a homomomorphism (s,q)↦ (s',q') of abelian monoids that we denote
γ=γ_α^*×^(G)→×^(G').
The restriction functor α^* defines a ring homomorphism
α^*: (G) → (G'),   (f: →(q)[s]) ⟼ (α^*(f): →α^*((q)[s]) ≅(q')[s']),
which is homogeneous with respect to γ_α^* in (<ref>).
For instance, α G G/H can be the quotient by a normal subgroup H G. In that case α^* is inflation, which is a section of modular fixed-points Ψ^H. It follows that the homomorphism Ψ^H in (<ref>) is split surjective. (This also means that the composed effect on gradings γ_Ψ^H∘γ_α^*=𝕀 is trivial.)
Without changing the group G, we can also localize the twisted cohomology ring (G) by restricting to an open H of , as defined in <Ref>.
Recall the elements a_N,b_N∈(G) from <Ref>.
Let H≤ G be a p-subgroup. Let S_H⊂(G) be the multiplicative subset of the graded ring (G) generated by all a_N such that H≰N and all b_N such that H≤ N, for all N∈(G). Recall that the a_N and b_N are central by <Ref>.
We define a -graded ring
:=((G)[S_H])
as the twist-zero part of the localization of (G) with respect to S_H.
Explicitly, the homogeneous elements of consist of fractions f/g where f,g∈(G) are such that g→(q)[t] is a product of the chosen a_N,b_N in S_H, meaning that (q)[t] is the ⊗-product of the corresponding u_N for a_N and u_N[-2'] for b_N, whereas f→(q)[s] is any morphism in (G) with the same -twist q as the denominator.
Thus is -graded by the shift only: The degree of f/g is the difference s-t between the shifts of f and g.
It follows from <Ref> (and <Ref>) that the -graded ring is generated as a -algebra by the elements
ζ^+_N, ξ^+_NH≤ N∪ζ^-_N, ξ^-_NH≰N
where ζ^+_N=a_N/b_N is of degree +2' and ζ^-_N=b_N/a_N of degree -2' as in <Ref>, and where (only for p odd) the additional elements ξ^±_N are ξ^+_N:=c_N/b_N of degree +1, and ξ^-_N:=c_N/a_N of degree -1. (For p=2, simply ignore the ξ^±_N.)
In general, all these elements satisfy some relations; see <Ref>.
Beware that here ξ^-_N is never the inverse of ξ^+_N. In fact, both are nilpotent.
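For instance, unwinding the above in the smallest case G=C_p and H=1: the only N∈ is N=1 and it contains H, so is generated by ζ^+_N=a_N/b_N of degree 2' together with, for p odd, the nilpotent element ξ^+_N=c_N/b_N of degree 1. This is consistent with the example of C_p further below, where the reduced ring C_p(1)_ is found to be k[ζ^+].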
In fact, we can perform the central localization of the whole category (G)
(H)=_G(H):=(G)[S_H]
with respect to the central multiplicative subset S_H of <Ref>.
The tt-category (H)=(G)[S_H] has the same objects as (G) and morphisms x→ y of the form f/g where g→ u belongs to S_H, for u a tensor-product of shifts of u_N's according to g (as in <Ref>) and where f x→ u⊗ y is any morphism in (G) with `same' twist u as the denominator g.
This category (G)[S_H] is also the Verdier quotient of (G) by the tt-ideal (g)g∈ S_H and the above fraction f/g corresponds to the Verdier fraction
x –f→ u⊗y ⟵g⊗1– y.
See <cit.> if necessary.
The -graded endomorphism ring ^_(H)() of the unit in (H)=(G)[S_H] is thus the -graded ring (S_H(G))= of <Ref>.
There is a general localization U of a tt-category over a quasi-compact open U⊆ with closed complement Z. It is defined as U=(/_Z)^♮.
If we apply this to U=H, we deduce from (<ref>) that U=∩_g∈ S_H(g) has closed complement Z=∪_g∈ S_H((g)) whose tt-ideal (G)_Z is the above (g)g∈ S_H.
In other words, the idempotent-completion of our _G(H)=(G)[S_H] is exactly (G)H.
As with any localization, we know that (_G(H)) is a subspace of , given here by U=∩_g∈ S_H(g)=H.
For G=E elementary abelian and the subgroup H=1, the category _E(1)=(E)1 in <Ref> is simply the derived category _E(1)=(E), by <Ref>.
In that case, E(1)≅^(E;) is the actual cohomology ring of E.
Since H=1≤ N for all N, we are inverting all the b_N and no a_N.
As noted in <Ref>, we obtain the same ring (the cohomology of E) as soon as we invert enough b_N_1,…,b_N_r, namely, as soon as N_1∩⋯∩ N_r=1.
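Concretely, for E=C_2× C_2 it suffices to invert b_N_0 and b_N_1 for two of the three index-2 subgroups, since already N_0∩ N_1=1: the resulting ring is the cohomology ^(E;), and further inverting b_N_∞ does not change it.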
We again obtain an induced homomorphism of multi-graded rings.
Let H≤ G be a p-subgroup and consider the above central localization (-)H(G)_G(H).
As explained in <Ref>, the morphisms a_N and b_N give us explicit isomorphisms (u_N)H≅ if N≱H and (u_N)H≅[2'] if N≥ H.
This yields a homomorphism on the grading
γ=γ_H×^(G)→
defined by γ(s,q)=s+2'∑_N≥ Hq(N) and we obtain a ring homomorphism
(-)H(G) ^__G(H)()=
which is homogeneous with respect to the homomorphism γ_H of (<ref>).
It is easy to verify that the continuous maps induced on homogeneous spectra by the ring homomorphisms constructed above are compatible with the comparison map of <Ref>.
In other words, if F(G)→(G') is a tt-functor and if the induced homomorphism F(G)→(G') is homogeneous with respect to γ=γ_F×^(G)→×^(G'), for instance F=Ψ^H or F=α^* as in <Ref>, then the following square commutes:
((G')) –(F)→ ((G))
    ↓ _G'              ↓ _G
((G')) –(F)→ ((G)).
This follows from F((f))≃(F(f)) in (G') for any f∈(G).
Similarly, for every H∈p(G) the following square commutes
[G]H=(_G(H)) ↪ ((G))
    ↓ _(H)                ↓ _G
()  ↪  ((G))
where the left-hand vertical map is the classical comparison map of <cit.> for the tt-category _G(H) and the ⊗-invertible [1].
The horizontal inclusions are the ones corresponding to the localizations with respect to S_H, as in <Ref>.
In fact, it is easy to verify that the square (<ref>) is cartesian, in view of [G]H=⋂_g∈ S_H(g)=⋂_g∈ S_H_G(Z(g)^c) by <Ref> and (<ref>).
We can combine the above functors. Here is a useful example.
Let H G be a normal subgroup such that G/H is elementary abelian.
Then we have a commutative square
G/H=(((G/H))) –ψ̌^H→ ((G))
    ↓ ≅ _((G/H))                  ↓ _G
(^(G/H,k))  ↪  ((G))
and in particular, its diagonal _G∘ψ̌^H is injective.
The functor Ψ̌^H(G)→((G/H)) is the modular fixed-points functor Ψ^H(G)→(G/H) composed with Υ_G/H(G/H)((G/H)), which is the central localization (-)1 over the cohomological open, by <Ref>; see <Ref>.
Thus we obtain two commutative squares (<ref>) and (<ref>):
(((G/H))) ↪υ_G/H ((G/H)) –ψ^H→ ((G))
    ↓ ≅ _((G/H))          ↓ _G/H           ↓ _G
(^(G/H,k))  ↪  ((G/H))  ↪  ((G))
the left-hand one for the central localization of (G/H) over the open [G/H]1=G/H, and the right-hand one for the tt-functor Ψ^H(G)→(G/H).
Note that the bottom-right map is injective because the ring homomorphism in question, Ψ^H(G)→(G/H) defined in (<ref>), is surjective by <Ref>.
§ THE ELEMENTARY ABELIAN CASE
In this central section, we apply the general constructions of <Ref> in the case of G=E elementary abelian.
We start with a key fact that is obviously wrong in general (for a non-cyclic simple group, the target space is just a point).
Let E be an elementary abelian group.
The comparison map
_E((E))→((E))
of <Ref> is injective.
Let H,N≤ E with [E:N]=p. Suppose first that H≰N.
We use the map ψ̌^H=(Ψ̌^H)E/H→ of <Ref>.
Then
(ψ̌^H)((b_N)) = (ψ̌^H)(((b_N)))   by definition, see (<ref>);
= ((Ψ̌^H(b_N)))   by general tt-geometry;
= ((0→))   by <Ref>;
= (⊕[1]) = ∅.
Thus (ψ̌^H) does not meet (b_N) when H≰N.
Suppose now that H≤ N. A similar computation as above shows that (ψ̌^H)((b_N))=(((E/H))) since in that case Ψ̌^H(b_N) is an isomorphism in ((E/H)). Therefore (ψ̌^H)⊆(b_N) when H≤ N. Combining both observations, we have
(ψ̌^H)∩(b_N)≠∅ ⟺ H≤ N.
Let now ,∈((E)) be such that _E()=_E() in ((E)).
Say =_E(H,) and =_E(K,) for H,K≤ E and ∈E/H and ∈E/K.
(See <Ref>.)
The assumption _E()=_E() implies that ∈(f) if and only if ∈(f), for every f∈(E). In particular applying this to f=b_N, we see that for every index-p subgroup N E we have ∈(b_N) if and only if ∈(b_N).
By (<ref>), this means that for every N∈(E) we have
H≤ N ⟺ K≤ N.
Since E is elementary abelian, this forces H=K. So we have two points ,∈E/H that go to the same image under E/Hψ̌^H((E))_E((E)) but we know that this map in injective by <Ref> for G=E.
In fact, we see that the open H of defined in <Ref> matches perfectly the open () of ((E)) in <Ref>.
Let E be an elementary abelian p-group. Let H≤ E be a subgroup. Then the comparison map of <Ref> restricts to a homeomorphism
_EH()
where is the -graded endomorphism ring of the unit in the localization _E(H) of (E) over the open H.
Recall the tt-category (H)=_E(H):=(E)[S_H] of <Ref>, where S_H⊂(E) is the multiplicative subset generated by the homogeneous elements a_NH≰N∪b_NH≤ N of <Ref>.
In view of <Ref>, it suffices to show that the map _(H)((H))→() is a homeomorphism.
We have injectivity by <Ref>. We also know that is noetherian by <Ref>. It follows from <cit.> that _(H) is surjective. Hence it is a continuous bijection and we only need to prove that it is a closed map.
We claim that (H) is generated by its ⊗-unit . Namely, let =_(H)() be the thick subcategory of (H) generated by and let us see that =(H).
Observe that is a sub-tt-category of (H).
Let N∈ be an index-p subgroup. We claim that (E/N) belongs to .
If N≱H, then a_N is inverted in (H), so (E/N)=0 in (H) by <Ref> <ref>.
If N≥ H, then b_N→ u_N[-2'] is inverted, so u_N∈ and we conclude again by <Ref> <ref> since a_N→ u_N is now a morphism in . For a general proper subgroup K<E, the module (E/K) is a tensor product of (E/N) for some N∈. (Here we use E elementary abelian again.) Hence (E/K) also belongs to as the latter is a sub-tt-category of (H).
In short contains all generators (E/H) for H≤ E. Therefore (H)= is indeed generated by its unit.
It follows from this and from noetherianity of ^_(H)()= that ^_(H)(x,y) is a finitely generated -module for every x,y∈(H).
We conclude from a general tt-geometric fact, observed by Lau <cit.>, that the map must then be closed.
Let E be an elementary abelian p-group.
Let E be the sheaf of -graded rings on obtained by sheafifying U↦^_(E)U().
Then (,E) is a Dirac scheme in the sense of <cit.>.
We identified an affine cover {H}_H≤ E in <Ref>.
This result further justifies the notation for the ring E(H) in <Ref>.
Indeed, this (H) is also the ring of sections E(H) of the -graded structure sheaf over the open H of <Ref>.
Let E be an elementary abelian p-group.
Then the comparison map of <Ref> is an open immersion. More precisely, it defines a homeomorphism between ((E)) and the following open subspace of ((E)):
{ ∈((E)) | for all N≤ E of index p, either a_N∉ or b_N∉ }.
By <Ref>, the (continuous) comparison map is injective.
Therefore, it being an open immersion can be checked locally on the domain.
By <Ref>, the open H form an open cover of ((E)). <Ref> tells us that each H is homeomorphic to the following open of ((E))
U'(H):=⋂_N≱HZ(a_N)^c∩⋂_N≥ HZ(b_N)^c
(recall that Z(f)^c=f∉ is our notation for a principal open).
So it suffices to verify that the union ∪_H≤ E U'(H), is the open subspace of the statement (<ref>).
Let ∈ U'(H) for some H≤ E and let N∈(E); then clearly either N≱H in which case a_N∉, or N≥ H in which case b_N∉.
Conversely let belong to the open (<ref>) and define H=∩_M∈ s.t. b_M∉M.
We claim that ∈ U'(H). Let N∈. If N≱H then b_N∈ by construction of H and therefore a_N∉.
So the last thing we need to prove is that N≥ H implies b_N∉. One should be slightly careful here, as H was defined as the intersection of the M∈ such that b_M∉, and certainly such M's will contain H, but we need to see why every N≥ H satisfies b_N∉.
This last fact follows from <Ref> applied to E/H.
Consider the spectrum of (C_p) for the cyclic group C_p of order p.
By <Ref>, the reduced ring C_p(1)_ is k[ζ^+] with ζ^+=a/b in degree 2' while C_p(C_p)_=k[ζ^-] with ζ^-=b/a.
(The former is also <Ref>.)
Each of these has homogeneous spectrum the Sierpiński space and we easily deduce that
((C_p)) is the V-shaped space consisting of two closed points (C_p) and (1) together with a unique generic point below them, specializing to both,
confirming the computation of ((C_p^n)) in <cit.> for n=1.
We can also view this as an instance of <Ref>.
Namely, still by <Ref>, the reduced ring (C_p)_ is [a,b] with a in degree 0 and b in degree -2'.
Its homogeneous spectrum has one more point at the top:
((C_p)) is the same V-shaped space with one more closed point on top: the generic point sits at the bottom, above it lie the two points singled out by the principal opens Z(a)^c and Z(b)^c, and on top of those the one additional closed point,
and this superfluous closed point ⟨ a,b⟩ lies outside of the open subspace (<ref>).
Let K≤ H≤ E.
The functor Ψ^K(E)→(E/K) passes, by <Ref>, to the localizations over [E]H and [E/K]H/K, respectively.
On the -graded endomorphism rings, we get a homomorphism Ψ^K→E/K(H/K) that on generators a_N,b_N is given by the formulas of <Ref>.
By <Ref> this homomorphism Ψ^K→E/K(H/K) is surjective.
For every elementary abelian group E, the spectrum admits a unique generic point η_E, namely the one of the cohomological open E.
We proceed by induction on the p-rank.
Let us write η_E=_E(1,√(0)) for the generic point of E, corresponding to the ideal √(0) of nilpotent elements in ^(E;).
Similarly, for every K≤ E, let us write η_E(K)=_E(K,η_E/K) for the generic point of the stratum E(K)≃E/K.
We need to prove that every point η_E(K) belongs to the closure of η_E=η_E(1) in .
It suffices to show this for every cyclic subgroup H<E, by an easy induction argument on the rank, using the fact that ψ^H((E/H)) is closed.
So let H≤ E be cyclic.
Note that inflation ^E/H_E(E/H)→(E) passes to the localization of the former with respect to all b_N/H for all N∈(E) containing H (which is just the derived category of E/H) and of the latter with respect to the corresponding b_N:
^E/H_E: ((E/H)) ⟶ (E)[b_N | N≥ H].
This being a central localization of a fully-faithful functor with respect to a multiplicative subset in the source, it remains fully-faithful. One can further localize both categories with respect to all non-nilpotent f∈^(E/H;) in the source, to obtain a fully-faithful
^E/H_E ((E/H))[ff∉√(0)]
where is obtained from (E) by first inverting all b_N for N≥ H as in (<ref>) and then inverting all ^E/H_E(f) for f∈^(E/H;)√(0).
At the level of spectra, () is a subspace of . By construction, it meets the closed subset (ψ^H)≅((E/H)) of only at the image of the generic point η_E(H). Indeed, inverting all b_N for N≥ H on (ψ^H) corresponds to inverting all b_N/H in (E/H), hence shows that ()∩(ψ^H) is in the image under ψ^H of the cohomological open E/H. Similarly, inverting all f∉√(0) removes all non-generic points of E/H. In particular, the generic point η_E(H) of E(H) is now a closed point of the subspace () of .
Using that (<ref>) is fully-faithful and that the endomorphism ring of the source is the cohomology of E/H localized at its generic point (in particular not a product of two rings), we see that is not a product of two tt-categories and therefore () is not disconnected.
Also η_E belongs to () and is distinct from η_E(H).
Hence the closed point η_E(H)∈() cannot be isolated.
Thus η_E(H) belongs to the closure of some other point in ().
Let then ∈ be a point in the subspace (), such that ≠η_E(H) and η_E(H)∈, which reads ⊊η_E(H).
We know by <cit.> that this can only occur for =(H',) with H'≤ H, that is, either H'=H or H'=1 since here H was taken cyclic.
The case H'=H is excluded, as in the subspace () the only prime of the form (H,) that remained was η_E(H) itself, and is different from η_E(H). Thus H'=1, which means that ∈E=η_E(1) and we therefore have η_E(H)∈⊆η_E(1) as claimed.
We can now determine the Krull dimension of the spectrum of (E).
Let E be an elementary abelian p-group. Then the Krull dimension of is the p-rank of E.
By <Ref>, the dimension of is the maximum of the dimensions of the open subsets H, for H≤ E.
Each of these spaces has the same generic point η_E (by <Ref>) and a unique closed point (H) by <Ref>
(and the fact that (K)∈H forces K and H to be contained in the same subgroups N∈(G) by <Ref>, which in turn forces K=H because E is elementary abelian).
Using <Ref> we translate the problem into one about the graded ring .
Let η_E=_0⊊_1⊊⋯⊊_n=(H) be a chain of homogeneous prime ideals in .
Note that _n-1 belongs to the open Z(f)^c of () for some f=ζ^+_N, H≤ N, or some f=ζ^-_N, H≰N.
Each of these has non-zero degree so the graded ring [f] is periodic.
We deduce that (()) is the maximum of 1+(R) where R ranges over the ungraded rings R=[f]_(0) for f as above.
The reduced ring R_ is a finitely generated -algebra with irreducible spectrum, hence a domain. Therefore (R)=(R_) is the transcendence degree of the residue field at the unique generic point.
As observed above, this generic point is the same for all H≤ E, namely the generic point of 1=(( E)).
We conclude that ()=((( E))) which is indeed the p-rank of E.
In fact, the proof shows that all closed points (H)∈ have the same codimension (height), namely the p-rank of E.
Thus for E elementary abelian, the Krull dimension of is the same as the Krull dimension of the classical cohomological open (( E))≅(^(E,)).
In other words, the spectrum of (E) is not monstrously different from that of ( E), at least in terms of dimension, or `vertical complexity'.
There is however `horizontal complexity' in : each H has its own shape and form, and there are as many H as there are subgroups H≤ E.
We give a finite presentation of the corresponding -algebras in <Ref>.
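For instance, for the Klein-four group E=C_2× C_2 of p-rank 2, the spectrum has Krull dimension 2: anticipating the example of the next section, a maximal chain of specializations is given by the generic point η_E, any closed point of the projective line ^1_ inside the cohomological open, and the very closed point (1).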
§ CLOSURE IN THE ELEMENTARY ABELIAN CASE
In this section, E is still an elementary abelian p-group. Following up on <Ref>, we can now use <Ref> to analyze inclusion of tt-primes , in (E), which amounts to asking when belongs to in .
Using again that every ψ^H((E/H)) is a closed immersion, induction on the p-rank easily reduces the above type of questions to the case where the `lower' point belongs to [E]1=E.
More generally, given a closed piece Z of the cohomological open E, we consider its closure Z̅ in =⊔_H≤ EE(H) and we want to describe the part Z̅∩E(H) in each stratum E(H)≅E/H for H≤ E.
Let H≤ E be a subgroup of our elementary abelian group E. Consider the open subsets [E]H of <Ref>, the cohomological open [E]1=E and their intersection [E]H∩E. Consider also the stratum E(H)=ψ̌^H(E/H), that is a closed subset of [E]H homeomorphic to E/H via ψ̌^H:
E/H ↪ψ̌^H [E]H ⊇ [E]H∩E ⊆ E
On graded endomorphism rings of the unit (<Ref>) this corresponds to
the epimorphism Ψ^H: ↠ ^(E/H;) together with the two localization maps Q: → ([E]H∩E) and Q': ^(E;) → ([E]H∩E),
where Q is the localization of with respect to ζ^-_N=b_N/a_N for all N≱H, where Q' is the localization of E(1)=^(E,) with respect to ζ^+_N=a_N/b_N for all N≱H and where Ψ^H is the epimorphism of <Ref> for K=H.
With above notation, let I⊆^(E,) be a homogeneous ideal of the cohomology of E.
Define the homogeneous ideal J=Ψ^H(Q(Q'(I))) in the cohomology ^(E/H;) of E/H by `carrying around' the ideal I along (<ref>):
I ⟼ Q'(I) ⟼ Q(Q'(I)) ⟼ J:=Ψ^H(Q(Q'(I))).
Let Z be the closed subset of E defined by the ideal I.
Then the closed subset of E/H defined by J is exactly the intersection Z̅∩E(H) of the closure Z̅ of Z in ((E)) with the subspace E/H, embedded via ψ̌^H.
Once translated by <Ref>, it is a general result about the multi-graded ring A=(E). We have two open subsets, H=∩_s∈ S_HZ(s)^c and E=1=∩_s∈ S_1Z(s)^c for the multiplicative subsets S_H and S_1 of <Ref>.
These open subsets are `Dirac-affine', meaning they correspond to the homogeneous spectra of the -graded localizations S_H(A)= and S_1 (A)=E(1)=^(E;), where (-) refers to `zero-twist', as before. The intersection of those two affine opens corresponds to inverting both S_H and S_1, that is, inverting {b_N/a_N | N≱H} from and {a_N/b_N | N≱H} from ^(E;).
This explains the two localizations Q and Q' and why their targets coincide.
The intersection H∩Z̅ coincides with the closure in H of H∩ Z.
The latter is a closed subset of H∩E defined by the ideal Q'(I). The preimage ideal Q(Q'(I)) then defines that closure H∩Z̅ in H.
Finally, to further intersect this closed subset of H with the closed subset E/H=((Ψ^H)), it suffices to project the defining ideal along the corresponding epimorphism Ψ^H^(E/H;).
Before illustrating this method, we need a technical detour via polynomials.
Let I be a homogeneous ideal of the cohomology ^(E,) and let 1≠ H≨ E be a fixed non-trivial subgroup.
Suppose that the only homogeneous ideal containing I and all the elements ζ_N for N≥ H (<Ref>) is the maximal ideal ^+(E,).
Then there exists in I a homogeneous ([ The grading is the usual -grading in which all the ζ_N have the same degree 2'. In particular, the first term ∏_M≱Hζ_M^d in f has degree 2'· d· |M∈M≱H|.]) polynomial f of the form
f = ∏_M≱Hζ_M^d + ∑_mλ_m ·∏_N∈ζ_N^m(N)
for some integer d≥ 1 and scalars λ_m∈ and finitely many exponents m∈^ that satisfy the following properties:
m(N)≥ 1 for at least one N≥ H
and
m(N')<d for all N'≱H.
For simplicity, we work in the subring ^∗⊆^(E,) generated by the ζ_N. (For p=2, this is the whole cohomology anyway and for p odd we only miss nilpotent elements, which are mostly irrelevant for the problem, as we can always raise everything in sight to a large p-th power.)
Let us denote the maximal ideal by =ζ_N| N∈.
It is also convenient to work in the quotient -graded ring
A^∗:=^∗(E,)/I
which is generated, as a -algebra, by the classes ζ̅_N of all ζ_N modulo I.
The assumption about Z(I+ζ_NN≥ H)={} implies that has some power contained in I+ζ_NN≥ H.
In other words when N'≱H we have
(ζ̅_N')^d∈ζ̅_N| N≥ H_A^∗
for d≫1 that we take large enough to work for all the (finitely many) N'≱H.
Consider this ideal J= ζ̅_N| N≥ H of A^∗ more carefully. It is a -linear subspace generated by the classes θ̅_m of the following products in ^*
θ_m:=∏_N∈(ζ_N)^m(N)
with m∈^ such that m(N)≥ 1 for at least one N≥ H.
We claim that J is in fact generated over by the subset of the θ̅_m for the special m∈^ satisfying (<ref>).
Indeed, let J'⊆ J be the -subspace generated by the θ̅_m for the special m.
Then we can prove that the class θ̅_m of each product (<ref>) belongs to J', by using (<ref>) and (descending) induction on the number ∑_N≥ Hm(N).
We conclude that J=J'.
By (<ref>), the monomial ∏_M≱H(ζ̅_M)^d belongs to J and therefore to J': It is a -linear combination of monomials of the form θ̅_m for m∈^ satisfying (<ref>). Returning from A^∗=^∗(E,)/I to ^∗(E,), the difference between ∏_M≱H(ζ_M)^d and the same -linear combination of the lifts θ_m in ^∗(E,) is an element of I, that we call f and that fulfills the statement.
Let Z⊂E be a non-empty closed subset of the cohomological open and let 1≠ H≨ E be a non-trivial subgroup.
Suppose that in E, the subset Z intersects the image of the cohomological open of H in the smallest possible way:
Z∩ρ_H(H) = {(1)}.
Consider the closure Z̅ of Z in the whole spectrum .
Then Z̅ does not intersect the stratum E(H)=ψ^H(E/H).
Hence (H) does not belong to Z̅.
Let I⊂^(E,) be the homogeneous ideal that defines Z. The closed image ρ_H(H) is given by the (partly redundant) equations ζ_N=0 for all N≥ H. It follows that the intersection Z∩ρ_H(H) is defined by the ideal I+ ζ_N| N≥ H.
So our hypothesis translates exactly in saying that I satisfies the hypothesis of <Ref>. Hence there exists a homogeneous element of I
f = ∏_M≱Hζ_M^d + ∑_mλ_m ∏_N∈ζ_N^m(N)
for scalars λ_m∈ and finitely many exponents m∈^ satisfying (<ref>).
We can now use <Ref> and follow Diagram (<ref>) with the ideal I and particularly with its element f.
The element Q'(f) is just f seen in ([E]H∩E). But it does not belong to the image of under Q because f contains some b_M with M≱H in denominators in the ζ_M's.
Still, we can multiply Q'(f) by ∏_M≱H(b_M/a_M)^d=∏_M≱H(ζ_M)^-d to get a degree-zero homogeneous element
f̃=1 + ∑_mλ_m ∏_N∈ζ_N^m'(N)
in the ideal Q'(f), where we set the exponent m'(N):=m(N)-d if N≱H and m'(N):=m(N) if N≥ H.
Note that by (<ref>) the exponent m'(N) is negative if N≱H and is non-negative if N≥ H and strictly positive for at least one N≥ H.
Both types of exponents of ζ_N are allowed in , namely, when N≱H, the element ζ^-_N=b_N/a_N exists in .
In other words, the element f̃∈Q'(f) satisfies
f̃ = Q(1 + g̃)
where g̃∈ belongs to the ideal ζ_N| N≥ H in and must be of degree zero by homogeneity.
Now, for N≥ H, we have Ψ^H(ζ_N)=ζ_N/H by <Ref>.
It follows that Ψ^H(g̃) belongs to the maximal ideal ζ_N̅|N̅∈(E/H)⊆^+(E/H,) of ^(E/H,) and still has degree zero. This forces Ψ^H(g̃)=0 and therefore Ψ^H(1+g̃)=1 in ^(E/H,).
In the notation of <Ref>, we have shown that J contains 1, which implies that Z̅∩E(H)=∅.
Let Z⊂E be a closed subset of the cohomological open, strictly larger than the unique closed point (1) of E. Suppose that in E, the subset Z intersects the images of all proper subgroups trivially, Z∩(⋃_H≨ Eρ_H(H)) = {(1)}.
Then the closure Z̅ of Z in the whole spectrum has only one more point, namely Z̅=Z∪{(E)}.
By <Ref>, we see that Z̅ does not meet any stratum E(H) for H≠ E. Thus the only point of outside of Z itself, hence outside of E, that remains candidate to belong to Z̅ must belong to ((E))∪_H≨ EE(H)=E(E)={(E)}. We know that (E)=(E/H)| H≨ E in (E), by <cit.>. Take ∈ Z different from (1). Since does not belong to any (ρ_H)=((E/H)) by assumption, it must contain (E/H). Consequently, (E)⊆, meaning that (E)∈⊆Z̅.
Let E be an elementary abelian group of rank r. Let be a point of height r-1 in the cohomological open E, that is, a closed point of the classical projective support variety 𝒱_E():=E{(1)}≅(^(E,))≅^r-1_.
Suppose that does not belong to the image ρ_H(H) of the support variety of any proper subgroups H≨ E.
Then the closure of in is exactly the following
{}={(E),,(1)}.
Apply <Ref> to Z={,(1)}, the closure of in E.
We can review the proof of <Ref> in the special case of <Ref>, to see how elements like f∈(1) and f̃∈ come into play.
We do it in the special case where is a -rational point (e.g. if is algebraically closed).
Let 1≠ H≨ E be a non-trivial subgroup (the case r=1 being trivial).
Choose N_0,N_1 E index-p subgroups with H≤ N_0 and H≰N_1.
They define coordinates ζ_0,ζ_1 in ^r-1 (where ζ_i=ζ_N_i as in <Ref>).
There exists a hyperplane of ^r-1
λ_0ζ_0+λ_1ζ_1=0, [λ_0:λ_1]∈^1(),
going through the rational point .
Note that λ_1≠ 0 as ∉ Z(ζ_0)=ρ_N_0(N_0), by assumption.
As in <Ref>, the following two localizations agree
E(H)[(ζ^-_N)| H≰N]=E(1)[(ζ^+_N)| H≰N]
where N E ranges over the index-p subgroups as usual.
We find a lift
f̃:=λ_0 ζ_0ζ^-_N_1+λ_1 ∈(H)
of the element f=λ_0ζ_0+λ_1ζ_1∈(1) of (<ref>), suitably multiplied by (ζ_1)^-1=ζ^-_N_1 in the localization (<ref>).
Then we have Ψ^H(ζ^-_N_1)=0 since H≰N_1, by <Ref>, so Ψ^H(f̃)=λ_1∈^× is an isomorphism.
We deduce that (f̃) belongs to (H), which shows that does not specialize to (H).
Let E=C_2× C_2 be the Klein-four group.
Let us justify the description of ((E)) announced in <cit.> in some detail:
[Picture of ((E)): on top, the five very closed points (E), (N_0), (N_1), (N_∞), (1); below them, the three brown points η_E(N_0), η_E(N_1), η_E(N_∞) and, inside the cohomological open, the three green _2-rational points 0, 1, ∞ together with the remaining points of ^1_ (in blue); at the bottom, the generic point η_E. The specializations are spelled out in the discussion below.]
By <Ref>, we have a partition of the spectrum as a set
=E(E) ⊔ E(N_0) ⊔ E(N_1) ⊔ E(N_∞) ⊔ E,
where we write N_0,N_1,N_∞ for the three cyclic subgroups C_2 and where E=E(1) is the cohomological open as usual. Let us review those five parts E(H)=ψ^H(E/H) separately, in growing order of complexity, from left to right in (<ref>).
For H=E, the stratum E(E)=ψ^E(E/E)={(E)} is just a closed point.
For each cyclic subgroup N_i<E, the quotient E/N_i≃ C_2 is cyclic, so (E/N_i) is the space of <Ref>.
Its image under ψ^N_i is {_E(E),η_E(N_i),_E(N_i)}, defining the (brown) point η_E(N_i):=ψ^N_i(η_E/N_i), as in the proof of <Ref>.
The stratum E(N_i) is the image of the cohomological open E/N_i only, that is, the Sierpiński space {η_E(N_i),(N_i)}, whose non-closed point η_E(N_i) is the generic point of the irreducible {_E(E),η_E(N_i),_E(N_i)} in .
Finally, for H=1, the cohomological open E=(( E))≅([ζ_0,ζ_1]) is a ^1_ with a closed point (1) on top.
We denote by η_E the generic point of as in <Ref> and by 0,1,∞ the three _2-rational points of ^1_ (in green).
The notation refers to all remaining points of ^1_.
The undulated lines indicate that all points of have the same behavior.
Namely, η_E specializes to all points of and every point of specializes to (1) and the (red) undulated line towards (E) indicates that all points of specialize to (E), as follows from <Ref>.
(Note that the latter was rather involved: Its proof occupies most of this section, and relies on technical <Ref>.)
We have described the closure of every point in , except for the _2-rational points 0,1,∞.
For this, we use the closed immersion ρ_N_i((N_i)) induced by restriction _N_i.
The point i is the image of the generic point η_N_i of the V-shaped space ((N_i)) of <Ref>. Hence its closure is (ρ_N_i)={(N_i),i,(1)}. So specializations are exactly those of (<ref>).
We revisit this picture in more geometric terms in <Ref>.
It is possible to extend <Ref> to a general finite group G by means of the Colimit <Ref>.
Let Z⊆ be a one-dimensional irreducible closed subset.
Write its generic point as =(K,) for (unique) K∈p(G)_/G and ∈.
By Quillen applied to G̅=, there exists a minimal elementary abelian subgroup E≤G̅ such that ∈(ρ_EE→G̅), also unique up to G̅-conjugation. This E≤G̅=(N_GK)/K is given by E=H/K for H≤ N_G K containing K. Then =φ_(H,K)() where ∈ is given by =_E(1,) for some ∈E.
By <Ref>, the map φ_(H,K)→ is closed and preserves the dimension of points.
It follows that is also the generic point of a one-dimensional irreducible in .
By minimality of E, the point ∈E does not belong to H' for any proper subgroup H'<E.
By <Ref>, we have ={_E(E),,_E(1)} in .
The map φ_(H,K) sends this subset to {_G(H), , _G(K)}.
In summary, every one-dimensional irreducible subset of is of the form Z={(H),,(K)}, where H and K are uniquely determined by the generic point via the above method.
§ PRESENTATION OF TWISTED COHOMOLOGY
We remain in the case of an elementary abelian group E.
In this section we want to better understand the local -graded rings that played such an important role in <Ref>.
Thankfully they are reasonable -algebras.
Recall that we write C_p=⟨σ | σ^p=1⟩ for the cyclic group of order p with a chosen generator σ.
For brevity we call an -linear surjection π E C_p a coordinate.
For two coordinates π,π' we write π∼π' if (π)=(π').
Finally, for a subgroup H, we often abbreviate H|π to mean H≤(π).
Recall from <Ref> and <Ref> that each coordinate π yields an invertible object u_π=π^*u_p in (E).
It comes with maps a_π,b_π,c_π→ u_π[∗].
If π∼π' then there exists a unique λ∈^× such that π'=π^λ.
Hence, if p=2 then necessarily π=π' and u_π=u_π'.
On the other hand, if p>2 is odd then we still have u_π≅ u_π' as already mentioned.
Explicitly, consider the automorphism λ C_p→ C_p that sends σ to σ^λ.
The isomorphism u_π=π^*u_pπ^*λ^*u_p=(π^λ)^*u_p=u_π' will be the pullback π^*Λ along π of an isomorphism of complexes Λ u_pλ^* u_p.
This isomorphism Λ can be given explicitly by the identity in degree 0 and by the C_p-linear maps C_p→λ^* C_p in degree 1 (resp. 2) determined by 1↦ 1 (resp. 1↦ 1+σ+⋯σ^λ-1).
One checks directly that Λ∘ a_p=a_p and Λ∘ b_p=λ· b_p. By applying π^* we obtain
(π^*Λ)∘ a_π=a_λπ and
(π^*Λ)∘ b_π=λ· b_λπ.
Given coordinates π_1≁π_2 set π_3=π_1π_2.
Write u_i, a_i and b_i for u_π_i, a_π_i and b_π_i in (E).
Then we have the relation
a_1b_2b_3+b_1a_2b_3+b_1b_2a_3 =0
as a map from to (u_1⊗ u_2⊗ u_3)[-2'· 2] in (E). (See <Ref> for 2'.)
Let N_i=(π_i) for i=1,2,3, which are all distinct.
Let N=N_1∩ N_2∩ N_3 be the common kernel, which is of index p^2 in E.
By inflation along E E/N, it suffices to prove the lemma for E=C_p× C_p and π_1 and π_2 the two projections on the factors.
We abbreviate u for the complex of permutation E-modules u:=u_1⊗ u_2⊗ u_3.
Consider the permutation module
M:=kC_p⊗ kC_p⊗ kC_p≅ k(E/N_1)⊗ k(E/N_2)⊗ k(E/N_3) which appears as a summand in various degrees of the complex u.
One element in M is of particular interest:
m :=∑_i_1,i_2=0^p-1σ^i_1⊗σ^i_2⊗σ^-i_1-i_2.
It is easy to check that m is E-invariant, thus defines an E-linear map m̃ k→ M, that can be used to define the required homotopies. This depends on p.
If p=2, the homotopy is given by m̃ when viewed from to the only M-entry of u[-2] in degree one.
If p>2, the homotopy is given by (m̃,m̃,m̃) as a map from to the three M-entries of u[-4] in degree one.
Verifications are left to the reader.
We construct a commutative -algebra E(H) by generators and relations. Its generators are indexed by coordinates π E C_p (<Ref>)
{ζ_π^+ | π such that H≤(π)} ∪ {ζ_π^- | π such that H≰(π)}.
These generators come equipped with a degree: if H≤(π) the generator ζ^+_π is set to have degree 2', whereas if H≰(π) the generator ζ^-_π is set to have degree -2'.
We impose the following four families of homogeneous relations. First for every coordinate π and every λ∈^× (for p odd), we have a rescaling relation
*
ζ_π^λ^+=λζ^+_π if H|π and ζ_π^λ^-=λ^-1ζ^-_π if H∤π
and whenever π_3=π_1π_2 and π_1≁π_2, writing ζ_i^±:=ζ_π_i^±, we impose one of the following relations, inspired by <Ref>:
*
ζ_1^++ζ_2^++ζ^+_3=0, if H|π_1 and H|π_2 (and therefore H|π_3)
*
ζ^-_1+ζ^-_2+ζ^-_1ζ^-_2ζ_3^+=0, if H∤π_1 and H∤π_2 but H|π_3
*
ζ^-_1ζ^-_2+ζ^-_2ζ^-_3+ζ^-_3ζ^-_1=0 if H∤π_i for all i=1,2,3.
Since these relations are homogeneous, the ring E(H) is a -graded ring.
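As a consistency check, anticipating the Klein-four computation in the applications below: for p=2 and E=C_2× C_2 there are exactly three coordinates, one for each index-2 subgroup N_0, N_1, N_∞, and the product of any two distinct ones is the third. For H=1 all of them satisfy H|π, the rescaling relations are vacuous and the only remaining relation is ζ^+_N_0+ζ^+_N_1+ζ^+_N_∞=0; for H=E one gets instead the three generators ζ^-_N_i subject to the single relation ζ^-_N_0ζ^-_N_1+ζ^-_N_1ζ^-_N_∞+ζ^-_N_∞ζ^-_N_0=0.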
We could also define a multi-graded commutative -algebra E generated by all a_π,b_π subject to the relations in (<ref>) and <Ref>.
This algebra E would be ×^-graded with a_π in degree (0,1_(π)) and b_π in degree (-2',1_(π)).
Then E(H) is simply the `zero-twist' part of the localization of E with respect to the a_π,b_π that become invertible in U(H), that is, those a_π such that H∤π and those b_π such that H|π, following the pattern of <Ref>.
By (<ref>) and <Ref>, there exists a canonical homomorphism
E(H)→
mapping ζ^+_π to a_π/b_π and ζ^-_π to b_π/a_π.
Let H=1. Recall from <Ref> that E(1) is the cohomology ring. Then the homomorphism (<ref>) is the standard one E(1)→^(E;), that maps ζ^+_π to the usual generator ζ_π=π^*(ζ_C_p). Note that here H=1|π for all π, so there is no ζ^-_π.
For E elementary abelian, it is well-known that this homomorphism E(1)→^(E;) is an isomorphism modulo nilpotents.
See for instance <cit.>.
For two subgroups H,K≤ E, the open subsets [E]H and [E]K can intersect in . Similarly, we can discuss what happens with the rings E(H).
Let H,K≤ E be two subgroups. Define S=S(H,K)⊂E(H) to be the multiplicative subset generated by the finite set
{ζ^+_π | π with H|π and K∤π} ∪ {ζ^-_π | π with H∤π and K|π}
and similarly, swapping H and K, let T=S(K,H)⊂E(K) be the multiplicative subset generated by {ζ^+_π | H∤π and K|π} ∪ {ζ^-_π | H|π and K∤π}. Then we have a canonical isomorphism of (periodic) -graded rings
SE(H)≅ TE(K)
and in particular of their degree-zero parts. Thus the open of (E(H)) defined by S is canonically homeomorphic to the open of (E(K)) defined by T.
The left-hand side SE(H) is the (`zero-twist' part of the) localization of the multi-graded ring E of <Ref> with respect to
{a_π | H∤π} ∪ {b_π | H|π} ∪ {a_π | H|π, K∤π} ∪ {b_π | H∤π, K|π}
= {a_π | H∤π or K∤π} ∪ {b_π | H|π or K|π}
which is symmetric in H and K.
This completes the proof.
The above isomorphism is compatible with the homomorphism (<ref>), namely the obvious diagram commutes when we perform the corresponding localizations on E(H) and E(K).
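For example, unwinding the definitions for the two extreme subgroups H=1 and K=E: the set S consists of all the ζ^+_π and T consists of all the ζ^-_π, and the common localization should be thought of as a ring of sections over the intersection [E]1∩[E]E; for the Klein-four group this is exactly the gluing that appears in the applications below.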
Let K≤ H≤ E.
There is a canonical split epimorphism `Ψ^KE(H)E/K(H/K) whose kernel is ζ^-_π|K∤π. It is compatible with the homomorphism Ψ^K of <Ref>, in that the following diagram commutes
E(H) –(<ref>)→ E(H)
    ↓ `Ψ^K              ↓ Ψ^K
E/K(H/K) –(<ref>)→ E/K(H/K).
Set H̅:=H/K≤E̅:=E/K. Similarly, for every coordinate π E C_p such that K|π, let us write π̅ E/K C_p for the induced coordinate.
The morphism `Ψ^K will come from a morphism “Ψ^KE→E/K, with respect to the homomorphism of gradings (<ref>).
As these algebras are constructed by generators and relations (<Ref>), we need to give the image of generators.
Inspired by <Ref> we define “Ψ^KE→E/K on generators by
a_π ↦ a_π̅ if K|π,   and   a_π ↦ 1 if K∤π;
b_π ↦ b_π̅ if K|π,   and   b_π ↦ 0 if K∤π.
It is easy to see that the relations in E are preserved; thus the map “Ψ^K is well-defined.
Let ϖ E E/K and for every π̅E̅ C_p consider the coordinate π=π̅∘ϖ E C_p. Then H̅|π̅ if and only if H|π.
It follows that the morphism passes to the localizations `Ψ^KE(H)E/K(H/K) as announced.
The statement about its kernel is easy and commutativity of the square follows from the fact (<Ref>) that Ψ^K treats the a_π and b_π according to the same formulas.
The section of `Ψ^K is inspired by inflation.
Namely, a_π̅↦ a_π and b_π̅↦ b_π defines a map of graded -algebras E̅→E that is already a section to “Ψ^K and passes to the localizations.
The canonical homomorphism (<ref>) induces an isomorphism
E(H)__
of reduced -graded -algebras.
It follows from <Ref> that the map is surjective.
We will now show that the closed immersion ()(E(H)) is surjective—this will complete the proof, by the usual commutative algebra argument, which can be found in <cit.> for the graded case.
By <Ref>, this is equivalent to showing the surjectivity of the composite with _E, that we baptize β^H
β^H: [E]H ≅→ () ↪ (E(H)), the first map being the homeomorphism _E.
We proceed by induction on the order of the subgroup H. If H=1 the result follows from <Ref>.
So suppose that H≠ 1 and pick a homogeneous prime ∈(E(H)). We distinguish two cases.
If for every coordinate π E C_p such that H∤π we have ζ^-_π∈ then belongs to V(ζ^-_πH∤π), which we identify with the image of (E/H(1)) by <Ref> applied to K=H.
Namely, we have a commutative square
[E]H ⟵ψ^H– [E/H]1
    ↓ β^H                ↓ β^1
(E(H)) ⟵(`Ψ^H)– (E/H(1))
and since the right-hand vertical arrow is surjective by the case already discussed, we conclude that belongs to the image of β^H in (<ref>) as well.
Otherwise, there exists a coordinate π_1 such that H∤π_1 and ζ^-_π_1∉. Let K:=H∩(π_1) and let S=S(H,K) be defined as in <Ref>:
S={ζ^-_π | π with H∤π and K|π}.
We claim that belongs to the open of (E(H)) defined by S.
Indeed, let ζ_π_2^-∈ S, that is for π_2 with H∤π_2 and K|π_2, and let us show that ζ_π_2^-∉.
If π_2∼π_1 this is clear from ζ^-_π_1∉ and the relation <ref> in E(H).
If π_2≁π_1, let h∈ H K (so that h generates the cyclic group H/K≅ C_p).
As π_1(h)≠ 1 and π_2(h)≠ 1 we may replace π_1 by an equivalent coordinate π̃_1:=π_1^λ such that π̃_1(h)=π_2(h)^-1 and therefore
H|π_3:=π̃_1π_2.
Then relation <ref> exhibits ζ_π̃_1^- as a multiple of ζ_π_2^-.
As the former does not belong to (by the previous case), neither does ζ_π_2^-.
At this point we may apply <Ref> for our subgroups H and K.
By <Ref>, we have a commutative triangle:
@C=2em@R=2em [E]H∩[E]K[dl]_β^H[dr]^β^K
(E(H)[S])
@<->[rr]^≈ (E(K)[T])
We just proved that belongs to the open subset in the bottom left corner.
As K is a proper subgroup of H, we know that β^K is surjective by induction hypothesis and we conclude that belongs to the image of β^H as well.
In <Ref> we have proved something slightly more precise, namely that the map
E(H)→/⟨ξ^±_π⟩
(where π ranges over all coordinates)
is surjective with nilpotent kernel.
We expect that E(H) is already reduced, which would imply that (<ref>) is in fact an isomorphism of graded rings.
In particular, for p=2 we expect that E(H)E(H).
§ APPLICATIONS AND EXAMPLES
In this final section, we push our techniques further and compute more examples.
For an elementary abelian group E, <Ref> allow us to think of the geometry of , beyond its mere topology, by viewing as a Dirac scheme.
Consider further the `periodic' locus of , which is the open complement of the closed points (H)H≤ E; see <Ref>.
This is analogous to considering the projective support variety (^(E,))≅^r-1_ by removing the `irrelevant ideal' (1)=^+(E,) from (^(E,)). To avoid confusion with the phrase `closed points', we now refer to the (H) as very closed points, allowing us to speak of closed points of ^r-1_ in the usual sense (as we did in <Ref>).
Removing those finitely many `irrelevant' points allows us to draw more geometric pictures by depicting the (usual) closed points of the periodic locus, as in classical algebraic geometry.
In fact, for any finite group G, we can speak of the periodic locus of to mean the open '((G)) obtained from by removing the `irrelevant' very closed points (H) for H∈p(G).
However, we do not endow these spectra with a scheme-theoretic structure beyond the elementary abelian case, since we do not have <Ref> in general.
We postpone a systematic treatment of the periodic locus to later work. For now we focus on examples.
Let us revisit Klein-four, with the notation of <Ref>.
From the picture in (<ref>) we see that the union of the open subsets [E]1 and [E]E only misses (three) very closed points hence covers the periodic locus.
We have
E(1) =[ζ^+_N_0,ζ^+_N_1,ζ^+_N_∞]/⟨ζ^+_N_0+ζ^+_N_1+ζ^+_N_∞⟩ (=^*(E;)),
E(E) =[ζ^-_N_0,ζ^-_N_1,ζ^-_N_∞]/⟨ζ^-_N_0ζ^-_N_1+ζ^-_N_1ζ^-_N_∞+ζ^-_N_∞ζ^-_N_0⟩
and their homogeneous spectra are both a projective line with a unique closed point added.
(For E(E), the coordinate transformation for i=0,1, ζ^-_N_i↦ζ̃^-_i:=ζ^-_N_i+ζ^-_N_∞, identifies the ring with [ζ̃^-_0,ζ̃^-_1,ζ^-_N_∞]/⟨ζ̃^-_0ζ̃^-_1+(ζ^-_N_∞)^2⟩, which corresponds to the image of a degree-two Veronese embedding of ℙ^1 in ℙ^2.)
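Indeed, a one-line check in characteristic 2: ζ̃^-_0ζ̃^-_1=(ζ^-_N_0+ζ^-_N_∞)(ζ^-_N_1+ζ^-_N_∞)=ζ^-_N_0ζ^-_N_1+ζ^-_N_1ζ^-_N_∞+ζ^-_N_∞ζ^-_N_0+(ζ^-_N_∞)^2, which equals (ζ^-_N_∞)^2 modulo the defining relation of E(E).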
Removing the very closed points (<Ref>), it is a straightforward exercise to check that the two lines are glued along the open complement of the _2-rational points, according to the rule (ζ^+_N_i)^-1=ζ_N_i^-.
In other words, '((E)) is a ^1_ with three doubled points, one doubled point sitting over each of the three _2-rational points 0, 1, ∞.
To translate between this picture and the one in (<ref>), think of the blue part as , the three green points as the _2-rational points i=0,1,∞ in [E]1=E and the brown points as the η_E(N_i) in [E]E.
In view of later applications let us consider the action induced on spectra by the involution on E=C_2× C_2 that interchanges the two C_2-factors.
Let us say that the two factors correspond to the subgroups N_0 and N_1.
On the generators ζ^±_N_i of E(1) and E(E) in (<ref>), the effect of the involution is
ζ^±_N_0↦ζ^±_N_1,   ζ^±_N_1↦ζ^±_N_0,   ζ^±_N_∞↦ζ^±_N_∞.
The subrings of invariants in E(1) and E(E) are, respectively,
[e_1^+, e_2^+,ζ^+_N_∞]/⟨e_1^++ζ^+_N_∞⟩≅[e_2^+,ζ^+_N_∞]
and [e_1^-,e_2^-,ζ^-_N_∞]/⟨e_1^-ζ^-_N_∞+e_2^-⟩≅[e_1^-,ζ^-_N_∞]
where e_1^±=ζ^±_N_0+ζ^±_N_1 and e_2^±=ζ^±_N_0ζ^±_N_1 are the first and second symmetric polynomials in ζ^±_N_0 and ζ^±_N_1.
Thus e_i^± has degree ± i.
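As a sanity check of these presentations: substituting e_1^-=ζ^-_N_0+ζ^-_N_1 and e_2^-=ζ^-_N_0ζ^-_N_1 gives e_1^-ζ^-_N_∞+e_2^-=ζ^-_N_0ζ^-_N_1+ζ^-_N_1ζ^-_N_∞+ζ^-_N_∞ζ^-_N_0, which is precisely the defining relation of E(E); similarly e_1^++ζ^+_N_∞ is precisely the defining relation of E(1).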
The homogeneous spectra of these rings (with unique very closed point removed) are again two projective lines ([ More precisely, as already in <Ref>, we are dealing with weighted projective spaces which happen to be isomorphic to projective lines.]) and they are glued together along the complement of two points.
In other words, the quotient of '((E)) by the involution is a ℙ^1_ with two doubled points, as pictured in <Ref>.
Alternatively, the topological space underlying this quotient may be obtained more directly at the level of <Ref>.
Indeed, this involution fixes the two colored points corresponding to ∞, fixes no other points, and swaps the points corresponding to 0 with the points corresponding to 1, respecting the color.
So, again, the quotient can be pictured as a ℙ^1_ with only two doubled points as in <Ref>.
Let us return to general finite groups. We want to optimize the Colimit <Ref> by revisiting the category of elementary abelian p-sections .
In <Ref>, we gave a `raw' version of the morphisms in the indexing category , which could be fine-tuned without changing the colimit (<ref>).
As with any colimit, we can quotient-out the indexing category by identifying any two morphisms that induce the same map by the functor under consideration, here ((-)). We then still have
_(H,K)∈((H/K)).
The same holds for any intermediate quotient pG. For instance if Z(G) denotes the center of G, we can consider the category pG obtained from by modding out the obvious right action of the group Z(G)· H' on each hom set _((H,K),(H',K')).
Let us illustrate how such reductions can be used in practice.
Let G=C_p^n be the cyclic group of order p^n.
As with any abelian group, using Z(G)=G, the reduced category discussed in <Ref> just becomes a poset.
Here, if we denote by 1=H_n<H_n-1<⋯<H_1<H_0=G the tower of subgroups of G then the poset looks as follows:
(H_0,H_0) ⟶ (H_1,H_0) ⟵ (H_1,H_1) ⟶ (H_2,H_1) ⟵ (H_2,H_2) ⟶ ⋯ ⟵ (H_n-1,H_n-1) ⟶ (H_n,H_n-1) ⟵ (H_n,H_n)
From <Ref> we deduce that is the colimit of the diagram
∗ ⟶ V ⟵ ∗ ⟶ V ⟵ ∗ ⟶ ⋯ ⟶ V ⟵ ∗   (with n copies of V and n+1 copies of ∗)
with ∗=((1)) and V=((C_p)) the V-shaped space in (<ref>). In the above diagram, the arrow to the right (resp. left) captures the left-most (resp. right-most) point of V.
We conclude that the spectrum of (C_p^n) is equal to
the zigzag space obtained by gluing n copies of the V-shaped space along consecutive closed end-points: it has 2n+1 points in total, namely n+1 closed points and n generic points, each generic point specializing exactly to its two neighbouring closed points.
This example reproves <cit.>. It will provide the starting point for our upcoming work on the tt-geometry of Artin motives over finite fields.
The category of elementary abelian p-sections is a finite EI-category, meaning that all endomorphisms are invertible.
The same is true of its reduced versions pG and in <Ref>.
<Ref> then implies formally that is the quotient of the spectra for the maximal elementary abelian p-sections by the maximal relations.
Let us spell this out.
Let I be a finite EI-category.
The (isomorphism classes of) objects in I inherit a poset structure with x≤ y if _I(x,y)≠∅.
Maximal objects (I)⊆ I are by definition the maximal ones in this poset.
Now, let x_1,x_2 be two objects in I, possibly equal.
The category (x_1,x_2) of spans x_1← y→ x_2 (or `relations') between x_1 and x_2, with obvious morphisms (on the y part, compatible with the spans), is also a finite EI-category and we may consider its maximal objects.
We denote by (G) the set of maximal objects in .
A word of warning: In general, there can be more maximal elementary abelian p-sections than just the elementary abelian p-sections of maximal rank.
Let G be a finite group.
The components φ_(H,K) of (<ref>) induce a homeomorphism between the following coequalizer in topological spaces
coeq( ∐_E_1 ⟵g_1– L –g_2⟶ E_2 ((L)) ⇉ ∐_E∈(G) )  (the two parallel maps being ((g_1)) and ((g_2)))
and , for `maximal relations' in or any variant of <Ref>.
Applying <Ref> we obtain
≃ coeq( ∐_E_1 ⟵g_1– L –g_2⟶ E_2 ((L)) ⇉ ∐_E∈ )
where E ranges over all elementary abelian p-sections and (g_1,g_2) over all relations.
There is a canonical map from the coequalizer in the statement to this one and it is straightforward to produce an inverse, as with any finite EI-category.
We can apply <Ref> to find the irreducible components of .
The set of irreducible components of is in bijection with the set (G) of maximal elementary abelian p-sections of G up to conjugation, via the following bijection with generic points:
(G)_/G ∼⟷^0
(H,K) ⟼φ_(H,K)(η_H/K).
In particular, ()=p-rank_(G) is the sectional p-rank of G.
We use coequalizer (<ref>).
Recall from <Ref> that for an elementary abelian p-group E is always irreducible.
We get immediately that the map (G)_/G^0 is a surjection.
Assume now that φ_E(η_E)=φ_E'(η_E') for E,E'∈(G) and let us show that E and E' are conjugate p-sections.
By <Ref>, there exists a finite sequence of maximal relations responsible for the identity φ_E(η_E)=φ_E'(η_E') and we will treat one relation at a time.
More precisely, assuming that the generic point in ((E_1)) is in the image of (the map on spectra induced by) some relation E_1 ⟵g_1– L –g_2⟶ E_2, with E_1,E_2∈(G), we will show below that both g_i are conjugation isomorphisms (type <ref> in <Ref>).
In particular, E_1,E_2 are conjugate.
And as conjugation identifies the unique generic points in the spectra for E_1 and E_2 one can apply induction on the number of relations to conclude.
As the map induced by g_1 is a closed immersion (<Ref>) it must be a homeomorphism once its image contains the generic point.
From this, we deduce that g_1 itself must be an isomorphism.
(Indeed, the map induced by restriction to a proper subgroup of E_1 is not surjective, already on the cohomological open.
And similarly, the map induced by modular fixed-points with respect to a non-trivial subgroup of E_1 does not even meet the cohomological open.)
Hence L≃ E_1 is maximal too and therefore g_2 is also an isomorphism.
The only isomorphisms in are conjugations (<Ref>) and we conclude.
The second statement follows from this together with <Ref>.
For G not elementary abelian, we already saw with the example of G=Q_8 in <cit.> that can have larger Krull dimension than (()).
And indeed, Q_8 has sectional p-rank two and p-rank one.
For every maximal (H,K)∈(G), since φ_(H,K) is a closed map, it yields a surjection φ_(H,K)φ_H,K(η_H/K) from the spectrum of the elementary abelian E=H/K onto the corresponding irreducible component of . We illustrate this with G=D_8 in <Ref> below, where said surjection coincides with the folding of <Ref>.
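For example, anticipating the dihedral example below: the group D_8 has sectional 2-rank two, realized by the three pairwise non-conjugate maximal sections (K,1), (K',1) and (D_8,C_2), so ((D_8)) has Krull dimension two and exactly three irreducible components.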
We will now explain the meaning of <Ref> and in effect compute ((D_8)) for G=D_8=⟨ r,s| r^4=s^2=1, rs=sr^-1⟩ the dihedral group of order 8.
We label its subgroups as follows ([ The two Klein-four subgroups are called K and K'. The names L_0 and L_1 for the cyclic subgroups of K (resp. L'_0 and L'_1 in K') are chosen to evoke N_0 and N_1 in <Ref>. The third cyclic subgroup, N_∞, corresponds to C_2=Z(D_8) and is common to K and K'.]):
D_8 has three maximal subgroups, namely K=⟨ r^2,s⟩, C_4=⟨ r⟩ and K'=⟨ r^2,r^3s⟩. Below them sit the five subgroups of order 2: L_0=⟨ s⟩ and L_1=⟨ r^2s⟩ inside K, the center C_2=⟨ r^2⟩ inside all of K, C_4 and K', and L'_0=⟨ rs⟩ and L'_1=⟨ r^3s⟩ inside K'. At the bottom sits the trivial subgroup 1.
Since L_0 and L_1 (resp. L'_0 and L'_1) are G-conjugate, by the element r, we have exactly eight very closed points (H) for H∈p(G)_/G. We shall focus on the open complement of these very closed points, the periodic locus '((D_8)) of <Ref>, which is of Krull dimension one.
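Explicitly, the eight conjugacy classes of subgroups are represented by 1, C_2, L_0 (conjugate to L_1), L'_0 (conjugate to L'_1), C_4, K, K' and D_8.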
Since all maps in the coequalizer diagram (<ref>) preserve the dimension of points (<Ref>) we may first remove these very closed points and then compute the coequalizer.
Let us describe (D_8) and the maximal relations.
In addition to the maximal elementary abelian subgroups K and K' there is one maximal elementary abelian subquotient D_8/C_2. So we have three maximal sections: (D_8)={(K,1),(K',1),(D_8,C_2)}.
We compute the relations in the category 2D_8 which is obtained from 2D_8 by quotienting each hom-set ((H,M),(H',M')) by the action of H', as in <Ref>.
One then easily finds by inspection five non-degenerate maximal relations up to isomorphism (that is, relations not of the form x ⟵^𝕀 x ⟶^𝕀 x, which would not affect the coequalizer (<ref>) anyway), which can be described as follows:
Schematically, the five relations form a triangle with vertices (K,1), (K',1) and (D_8,C_2): a loop labeled r at (K,1) and another at (K',1); an edge through (C_2,1) joining (K,1) and (K',1) along the bottom, with both legs drawn in green; and edges through (K,C_2) and (K',C_2) joining (K,1), resp. (K',1), to (D_8,C_2), each with a brown leg towards (K,1), resp. (K',1), and a green leg towards (D_8,C_2).
Here, the loops labeled r represent the relations (K,1) ⟵^1 (K,1) ⟶^r (K,1), and similarly for K'.
All unlabeled arrows are given by 1∈ D_8, as in <Ref> <ref>-<ref>.
We explain below the brown/green color-coding in the other three relations.
Hence the space '((D_8)) is a quotient of three copies of the space '((E)) for E the Klein-four group, equal to ^1_ with three doubled points as in <Ref>.
Let us discuss the relations. We start with the self-relation corresponding to the loop r on (K_,1).
As the conjugation by r on K simply swaps the subgroups L_0 and L_1, we deduce from <Ref> that the quotient of '((K)) by this relation is a ℙ^1 with two doubled points, as in <Ref>.
The same is true for K'.
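For concreteness, this conjugacy can be checked directly from the presentation fixed above, using only r^4=s^2=1 and rs=sr^-1:
    r s r^-1 = (r s) r^-1 = (s r^-1) r^-1 = s r^-2 = s r^2 = r^2 s ,
the last step because C_2=⟨ r^2⟩=Z(D_8) is central; hence conjugation by r carries L_0=⟨ s⟩ to L_1=⟨ r^2s⟩. Similarly
    r (r s) r^-1 = r (r s r^-1) = r (r^2 s) = r^3 s ,
so it carries L'_0=⟨ rs⟩ to L'_1=⟨ r^3s⟩, while the remaining subgroups C_2, C_4, K, K' and D_8 itself are normal and therefore fixed.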
At this stage we have identified the three irreducible components (see <Ref>) and the three remaining relations will tell us how to glue these components.
The three sides of the `triangle' (<ref>) display maximal relations that identify a single point of one irreducible component with a single point of another.
Indeed, each of the middle sections K/C_2, K'/C_2 and C_2/1 is a C_2, whose periodic locus is a single point η_C_2 (<Ref>).
Each edge in (<ref>) identifies the image of that single point η_C_2 in the two corresponding irreducible components in <Ref>.
The color in (<ref>) records the color of that image: A brown point or a green point in the ^1_ with doubled points.
Let us do all three.
First, the relation between the two Klein-fours, K and K', at the bottom of (<ref>), identifies the two green points corresponding to C_2, as we are used to with projective support varieties.
Then, the last two relations in (<ref>), on the sides, identify a brown point in the K- or K'-component with the green point in the D_8/C_2-component corresponding to K_/C_2 and K'_/C_2, respectively.
This is a direct verification, for instance using that (ψ^C_2)((ρ^D_8_K))=(ψ^C_2)(_D_8((D_8/K)))=_D_8/C_2(Ψ^C_2((D_8/K)))=_D_8/C_2((D_8/K))=(ρ^D_8/C_2_K/C_2) in ((D_8/C_2)).
Thus we obtain '((D_8)) from these three identifications on the space of <Ref>.
The result was depicted in <Ref>.
|
http://arxiv.org/abs/2307.04104v1 | 20230709055200 | lcs4Foam -- An OpenFOAM Function Object to Compute Lagrangian Coherent Structures | [
"Constantin Habes",
"Alexandra von Kameke",
"Mohammed Elwardi Fadeli",
"Holger Marschall"
] | physics.flu-dyn | [
"physics.flu-dyn",
"physics.app-ph",
"76-04, 76-10",
"I.6.3; I.6.6; J.2"
] |
lcs4Foam – An OpenFOAM Function Object to Compute Lagrangian Coherent Structures
^1Mathematical Modeling and Analysis, Technical University of Darmstadt, 64287 Darmstadt, Germany
[email protected], [email protected], [email protected]
^2Department of Mechanical Engineering and Production Management, Hamburg University of Applied Sciences, 20099 Hamburg, Germany
[email protected]
To facilitate the understanding and to quantitatively assess the material transport in fluids, a modern characterisation method rooted in dynamical systems theory has emerged in fluid dynamics in the last decades. It allows one to examine the most influential material lines, which are called Lagrangian Coherent Structures (LCS) and which organise the material transport into dynamically distinct regions at large scales that resist diffusion or mixing. LCS reveal the robust skeleton of material surfaces and are essential to assess material transport in time-dependent flows quantitatively. Candidates of LCS can be estimated and visualised from finite-time stretching and folding fields by calculating the Finite-Time Lyapunov Exponents (FTLE).
In this contribution, we provide an OpenFOAM function object to compute FTLE during CFD simulation. This enables the OpenFOAM community to assess the geometry of the material transport in any flow quantitatively on-the-fly using principally any OpenFOAM flow solver.
C. Habes^1, A. von Kameke^2, M. E. Fadeli^1, H. Marschall^1

August 12, 2023
§ INTRODUCTION
Material transport and mixing in fluids is enhanced by advection. This advection is usually described mathematically in an Eulerian view by a time-dependent velocity field 𝐮(𝐱, t). With this Eulerian description, numerous important fluid mechanical characteristics can be derived and assessed. For instance, a higher Reynolds number (higher velocities) will typically go along with better overall mixing. However, such intuition might be misleading, as has been shown for example in <cit.> for a rising bubble. Here, a coherent structure has been found to arise for intermediate Reynolds numbers, which causes material to move together and locally hinders mixing and increases residence times in the vicinity of the bubble rear. The example shows: a closer look at the coherent structures is necessary to evaluate the details of the material transport in the specific flow situation. Lagrangian Coherent Structures (LCS) are often observable in fluid flows due to the shape that passive tracers take on, e.g. plankton in the ocean <cit.> or dissolved oxygen in the wake behind a rising bubble.
That the classical Eulerian view on advection is not optimal for addressing these issues was first noted in oceanography and atmospheric science <cit.>. The transport analysis was therefore started from its roots, the Lagrangian view, where the observer travels on the fluid parcels rather than watching them move by (Eulerian frame). The Lagrangian analysis thus considers the trajectories of individual fluid parcels and allows to draw conclusions on the transport from their evaluation. Nowadays, computational and theoretical advances allow for the calculation and analysis of the time-dependent dynamical system that governs material transport.
The underlying ideas for Lagrangian analysis stem from dynamical systems theory. In time-independent incompressible velocity fields, the dynamical system is the velocity field itself and the streamlines of the velocity field coincide with the trajectories of the fluid parcels. As such, trivially, structures in the velocity fields represent governing structures for the material that is transported (as long as molecular diffusion is comparably low and negligible) <cit.>. In this setting, unstable and stable manifolds divide the flow into different subdomains that move coherently (together) <cit.>.
For time-dependent flows however, the instantaneous streamlines and the trajectories of the fluid parcels do not coincide. It is thus a misleading habit to draw any conclusion about the material transport from the streamlines or any other material lines of the mean velocity field of a fluid flow. The resulting transport structures might have no relevance for the real dynamical system at all.
To obtain the lines that govern material transport in time-dependent flows the Lagrangian Coherent Structures are calculated from the trajectories of particles evaluated in the time-dependent velocity field 𝐮 = 𝐮(𝐱, t). LCS are those material lines and surfaces that separate regions of particles with very different fates or history for the time interval under consideration. Several different approaches to evaluate LCS have been developed during the last years <cit.>.
With this contribution we introduce an OpenFOAM function object that calculates the three dimensional Finite Time Lyapunov Exponents (FTLE) on-the-fly based on the general purpose numerical library libcfd2lcs <cit.> with the main computational details explained in <cit.>. The ridges in the FTLE-field are then candidates for LCS and can be assumed to coincide with LCS if some further conditions are met <cit.>. However, as also pointed out in <cit.>, these additional conditions are hard to evaluate in 3D and thus the FTLE-field will be viewed as an approximate representation of the 3D LCS. The details about the calculation of the FTLE-field and the underlying mathematical foundation are set out in Section <ref>.
§ THEORETICAL BACKGROUND OF LCS CALCULATIONS
From time-resolved CFD simulations, the time-dependent velocity field 𝐮(𝐱, t) is known in space and time. From this information the fluid parcel or passive particle trajectories
𝐱(𝐱_0, t) = 𝐱_0 + ∫_t_0^t𝐮(𝐱(τ), τ) d τ
can be calculated, where 𝐱_0 is the starting point of a trajectory in 3D space at a starting time t_0. Note, that each trajectory is now labelled by its start location in space and time. If a set of initially close passive particles is released at the same time the distances between them change over time due to the fluid motion. Passive particles initially forming a tiny sphere will undergo a linear deformation towards an ellipse for short times as would occur in a solid body under stress before it breaks. Certainly, in a fluid, the deformation will progress, and non-linear higher-order terms will play a role in causing stretching and folding which is crucial for mixing. However, as a first approximation and for short times these higher-order terms are neglected for the analysis of the deformation. If we consider infinitesimal spheres of initially close particles around all mesh cell centres of our simulation starting at the same initial time t_0, we obtain a set of different ellipsoids. All these ellipsoids have differently stretched and contracted principal axes which point in different directions at a slightly later time t_1. The principal axes of each ellipse denominate the final directions of maximal stretching (major axis) and maximal contraction (minor axis) of the initially spherical particle blob. The stretching factor S is the length of the major axis of the final ellipse divided by the initial radius of the sphere. If this stretching factor at each initial grid point is plotted, a 3D stretching field results revealing the regions at which stretching and thus particle separation for the time interval of interest [t_0,t_1] is largest due to the local flow conditions. Normally, the scaled logarithm of this stretching factor, defined by
σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log (S) ,
is plotted. This scaled logarithmic stretching factor is called the Finite-Time Lyapunov Exponent <cit.>. Connected areas or lines of large FTLE values characterise the fluid transport as these denote the areas or lines along which deformation and thus particle separation is largest. All these geometrical considerations have their mathematical counterparts. The stretching factor as described is the square root of the maximal eigenvalue of the right Cauchy-Green deformation tensor 𝐂_t_0^t_1. This tensor can be calculated for every mesh cell as envisioned above for the ellipsoid. As its name reveals it includes all the information about the deformation of the fluid masses at this point for the short time interval t_1 -t_0, and notably it is an objective tensor such that high stretching values and candidates for LCS derived from it will persist regardless of the motion of the observer (invariant to a time-dependent translation and rotation of the coordinate system of the observer) <cit.>.
The governing ordinary differential equation (ODE) for the evolution of a fluid parcel or a passive particle reads
𝐱̇=𝐮(𝐱(t), t) .
Therefore, the infinitesimal separation δ𝐱 = 𝐱-𝐱^* between the passive particle, imagined at the centre of an infinitesimal sphere, and a particle on the surface of the sphere is governed by the ODE
δ𝐱̇ = ∇𝐮 δ𝐱 .
The solution of this ODE is an exponential function, which explains why the FTLE is defined as the logarithm of the stretching factor.
To analyse the stretching during short but finite time intervals, particles distributed on a mesh are advected with the flow from an initial time t_0 over the time interval T=|t_1-t_0| to t_1. From the integral version of the governing ODE (Eq. <ref>) we obtain the definition of the flow map, Φ_t_0^t_1, which maps all the particles from their initial positions onto their final positions at time t_1, viz.
Φ_t_0^t_1: ℝ^n→ℝ^n ; 𝐱_0↦𝐱_0 + ∫_t_0^t_1𝐮(𝐱(τ), τ) d τ .
To obtain the separation of two initially close particles after this time interval, a Taylor series
δ𝐱(t_1) = Φ_t_0^t_1(𝐱_0 + δ𝐱(t_0)) - Φ_t_0^t_1(𝐱_0) = 𝐃Φ_t_0^t_1(𝐱_0, t_0) δ𝐱(t_0) + 𝒪(‖δ𝐱(t_0)‖^2)
around the initial position can be employed, where 𝐃Φ_t_0^t_1(𝐱_0, t_0) is the gradient (Jacobian) of the flow map with respect to the initial position and is also the normalised fundamental matrix solution of the equation of variations above (Eq. <ref>) <cit.>. Therefore, the magnitude of the particle separation at time t_1 can be written as
‖δ𝐱(t_1)‖ = √(⟨δ𝐱(t_0),[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)] δ𝐱(t_0)⟩) .
The right Cauchy-Green deformation tensor is then defined as
𝐂_t_0^t_1(𝐱_0, t_0)=[𝐃Φ_t_0^t_1(𝐱_0, t_0)]^*[𝐃Φ_t_0^t_1(𝐱_0, t_0)] .
In this way the Finite-Time Lyapunov Exponent σ_t_0^t_1 for the time interval t_0 to t_1 can now be defined on the basis of this tensor in a more thorough, mathematical way. Therefore, it is now defined by
σ_t_0^t_1(𝐱_0, t_0)=1/|t_1-t_0|log√(λ_max(𝐂_t_0^t_1(𝐱_0, t_0))) .
Here λ_max is the maximum eigenvalue of the right Cauchy-Green deformation tensor and can be calculated using standard solvers. In the picture of the small ellipsoid, the square root of this eigenvalue is just the stretching factor S introduced above.
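In practice, this eigenvalue computation is applied to a discrete flow map stored on the tracer grid. As a minimal, self-contained illustration of this step (independent of libcfd2lcs and OpenFOAM; all names below are chosen for this sketch only), the following Python/NumPy snippet evaluates the FTLE field of a two-dimensional flow map by central differencing and an eigenvalue decomposition of the right Cauchy-Green tensor:

    import numpy as np

    def ftle_from_flow_map(phi_x, phi_y, dx, dy, T):
        """FTLE field from a discrete 2D flow map.

        phi_x, phi_y : (nx, ny) arrays with the final x- and y-coordinates of
                       tracers that started at the grid points.
        dx, dy       : spacings of the rectilinear tracer grid.
        T            : integration time |t1 - t0|.
        """
        # Jacobian D(Phi) of the flow map by (central) finite differences
        dpx_dx, dpx_dy = np.gradient(phi_x, dx, dy)
        dpy_dx, dpy_dy = np.gradient(phi_y, dx, dy)

        ftle = np.zeros_like(phi_x)
        nx, ny = phi_x.shape
        for i in range(nx):
            for j in range(ny):
                F = np.array([[dpx_dx[i, j], dpx_dy[i, j]],
                              [dpy_dx[i, j], dpy_dy[i, j]]])
                C = F.T @ F                          # right Cauchy-Green tensor
                lam_max = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
                ftle[i, j] = np.log(np.sqrt(max(lam_max, 1e-300))) / abs(T)
        return ftle

The same construction carries over verbatim to three dimensions with 3x3 Jacobians.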
§ COMPUTATIONAL DETAILS
The computation of flow maps within libcfd2lcs is described thoroughly in <cit.>. The following section presents a brief overview of how the computation is done in practice and which different timescales play a role in the calculations. Hereafter, we describe the structure and functionality of the newly developed function object. We will focus on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data.
§.§ Numerical flow map computation in libcfd2lcs
libcfd2lcs is able to calculate both forward-time and backward-time FTLE fields. However, it uses two very different approaches for calculating the respective flow maps. The general approach used for the computation of the forward time flow-map Φ_t_0^t_0+T and the resulting forward-time FTLE field is very straightforward. A set of tracer particles is initialised on a grid with spacing Δ x_lcs by setting each initial tracer coordinate to the cell centre coordinate of a corresponding mesh cell. Then the flow map at each cell centre is computed by passively advecting these tracers with the flow, which mathematically corresponds to an integration of equation
d 𝐱/d t=𝐮(𝐱, t)
over the time interval T. Numerically this integration is done by utilising Runge-Kutta methods, with step size Δ t_lcs.
The time and space dependent velocity field 𝐮(𝐱, t) results from the specific fluid simulation under consideration and is passed to libcfd2lcs after each simulation time step Δ t_sim (see Section <ref>). In order to save the flow map Φ_t_0^t_0+T, the location of each particle after the integration is stored at its initial position.
As the evaluation of FTLE fields, indicating LCS candidates, is mainly relevant for time-dependent flows, it is often important to animate their evolution. At first glance, this would mean that a sequence of large particle sets would have to be integrated, requiring a great amount of computation. This problem is solved using a method developed by Brunton and Rowley <cit.>. With this method a flow map of the interval T can be constructed from a sequence of k flow maps over a smaller interval h, where T=kh. Following the notation of <cit.> this can be expressed as
Φ_t_0^t_0+T=Φ_t_0+(k-1)h^t_0+kh∘⋯∘Φ_t_0+h^t_0+2h∘Φ_t_0^t_0+h .
In practical terms, this means that the particle grid is reinitialised for every new time interval h after which they are advected again with the flow. Then the sub-step flow map is stored and the complete flow map is constructed when all needed sub-step flow maps are available. It is important to note that since a discrete particle grid is used for the sub-step flow map computation, interpolation of the sub-step flow maps is needed in order to match the trajectories at different timelevels when reconstructing the flow map Φ_t_0^t_0+T (see <cit.> for more details).
A different approach for constructing the backward-time flow maps is used. This is due to the fact that using the Lagrangian approach would require storing all computed velocity fields in the sub-step interval h before the integration of the tracers from t_0+h to t_0 could be done backward in time. Although this already includes Brunton's and Rowley's method for the flow map construction, the Lagrangian approach would be "cumbersome and resource intensive" <cit.>. Therefore, libcfd2lcs uses an Eulerian approach for the flow map computation proposed by Leung <cit.>. In contrast to the forward-time flow map, the backward-time flow map Φ_t_0+T^t_0 describes for each grid point where a particle, that is at that point at time t_0+T, originally was at time t_0. With Leung's Eulerian approach this backward-time flow map at time t_0+T is computed by initialising a vector field Ψ(𝐱, t_0) on a grid with the cell centre coordinates at time t_0. The advection of this so-called "takeoff coordinate field" in an Eulerian reference frame is then described by the level set equation
∂Ψ(𝐱, t)/∂ t+(𝐮·∇) Ψ(𝐱, t)=0 .
Solving this equation over the time interval [t_0, t_0+T] in forward time gives Ψ(𝐱, t_0+T), which represents the takeoff coordinates of a Lagrangian particle at t_0 reaching 𝐱 at time t_0+T. Thus, the backward-time flow map Φ_t_0+T^t_0 is equivalent to Ψ(𝐱, t_0+T). libcfd2lcs solves equation (<ref>) by using a semi-Lagrangian advection approach with the time step size
Δ t_lcs = c_cfl Δ x_lcs/‖𝐮(𝐱, t)‖
of this procedure being restricted by the CFL condition c_cfl < 1 (see <cit.> and <cit.> for more details).
Furthermore, Brunton's and Rowley's flow map construction method is also applied to the backward-time flow maps computed with the Eulerian method. Hence, the takeoff coordinate field is reinitialised after every sub-step time interval h and the backward-time flow map
Φ_t_0+T^t_0=Φ_t_0+h^t_0∘Φ_t_0+2h^t_0+h∘⋯∘Φ_t_0+kh^t_0+(k-1)h
is constructed from k sub-step backward-time flow maps.
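A single update of the takeoff-coordinate field according to the level set equation above can likewise be sketched in a few lines. The following first-order semi-Lagrangian step in two dimensions is only meant to illustrate the principle, not the actual libcfd2lcs implementation, and all names are illustrative:

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def semi_lagrangian_step(x, y, psi, u, v, dt):
        """Advance the takeoff-coordinate field Psi by one time step.

        x, y : 1D grid coordinate arrays.
        psi  : (nx, ny, 2) array holding the takeoff coordinates.
        u, v : (nx, ny) arrays, velocity components at the current time.
        """
        X, Y = np.meshgrid(x, y, indexing="ij")
        # trace the characteristics through the grid nodes back by one step
        xd = np.clip(X - dt * u, x[0], x[-1])
        yd = np.clip(Y - dt * v, y[0], y[-1])
        pts = np.stack([xd.ravel(), yd.ravel()], axis=-1)
        psi_new = np.empty_like(psi)
        for c in range(2):
            interp = RegularGridInterpolator((x, y), psi[..., c])
            psi_new[..., c] = interp(pts).reshape(X.shape)
        return psi_new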
Since a lot of different timescales are relevant in the practical FTLE field computation described above, we try to differentiate and order them in the following, before describing the structure and functionality of the newly developed function object in the next section. The basis of the on-the-fly LCS evaluation is a parallel running simulation that provides the velocity fields. Here, three intervals are of interest (see Fig. <ref>): the overall simulation time that spans from the simulation start time t_sim_start to the simulation end time t_sim_end, the time step size of the simulation Δ t_sim and the write time interval of the simulation results Δ t_sim_write. The computed velocity fields represent a fluid flow for which a reference timescale Δ t_ref can be identified. This reference timescale characterises the dominant hydrodynamic timescale of the flow and is typically larger than the simulation time step size. In order to save computing resources the LCS evaluation of the simulated flow does not necessarily have to start and end at the same time as the simulation. Therefore, a separate start and end time for the LCS evaluation denoted as t_lcs_start and t_lcs_end can be defined (see Fig. <ref>). During the LCS evaluation, a series of FTLE fields are computed. These FTLE fields are calculated from time T flow maps, which themselves are calculated as described earlier in this section. This means storing and constructing the time T flow maps from multiple sub-step flow maps after each LCS sub-step integration interval h. Calculating the sub-step flow maps in turn requires to numerically solve the equations (<ref>) or (<ref>) using the finite time step Δ t_lcs. While Δ t_lcs is set automatically according to equation (<ref>) and a specified CFL number, T and h have to be defined by the user. In order to detect all LCS candidates, T is usually chosen to be larger than Δ t_ref of the investigated flow <cit.>. With the aim of animating the evolution of the FTLE field, h is typically set significantly smaller than Δ t_ref while being in the order of magnitude of Δ t_sim_write.
§.§ Structure and functionality of the function object
In general, function objects can be used to generate additional data at runtime of the simulation. In doing so, function objects can access data generated by the flow solver at runtime, which offers a great advantage over classical post-processing since it can only utilise the stored fields or logged information. The newly developed function object incorporates the functionalities of libcfd2lcs into OpenFOAM at runtime while acting as an interface between both. This is achieved by processing the data generated by OpenFOAM and the subsequent exchange of this data via the libcfd2lcs API (see <cit.> for a detailed description of the libcfd2lcs API).
The calculation of the flow maps, the calculation of the resulting FTLE fields and the subsequent saving of these fields is completely handled by libcfd2lcs. The basic task of the function object is to pass the cell centre position vectors of the computational grid as well as the velocity field calculated by OpenFOAM to libcfd2lcs. Due to the very strict data structure requirements of libcfd2lcs this is not a trivial task. libcfd2lcs can only use static rectlinear grids for the calculation of forward-time and backward-time flow maps and therefore needs the velocity fields on these grids. This means that the mesh and velocity data has to be globally organised in an (i, j, k) structured format <cit.>. Since the LCS evaluation should also be available for simulations on moving grids with general topology and adaptive grid refinement, the function object offers several possibilities to deal with this problem.
In the simplest case, where the simulation mesh is already a static rectlinear mesh, the function object does not need to process the grid and velocity data, but can directly transfer it to libcfd2lcs as basic C++ arrays. This is the preferred method when the flow domain can be represented by a static rectlinear mesh and e.g. immersed boundary methods are used. If a moving mesh, a mesh of general topology or adaptive mesh refinement is used for the simulation a different approach is needed in order to prepare the data for its use in libcfd2lcs. Here, an additional static rectlinear mesh needs to be constructed in the preprocessing step, which can be done e.g. by using the utility. This mesh has to contain the region for which the LCS diagnostic should be performed, meaning that it can cover the whole simulation domain as well as only a part of it. However, since libcfd2lcs also requires boundary conditions for the FTLE field calculations, the boundary patches of the additional LCS mesh must be set accordingly. The user can choose between , , , and the generic patch types which the function objects translates into the corresponding libcfd2lcs boundary types. Then, during runtime, the velocity fields are mapped from the simulation mesh of general topology to the static rectlinear LCS mesh, from which the data can again be transferred to libcfd2lcs as basic C++ arrays. Although this implies that interpolation errors are made during the mapping process, the LCS evaluation is hardly affected by this. Haller showed in <cit.> that LCS are very robust against errors in the velocity field. Also, the additional computational overhead due to the mapping can be neglected compared to the overhead caused by the flow map computations. The function object also implements a third approach in which no additional LCS mesh is needed. This approach utilises the ability to construct complex, moving mesh geometries out of simple unconnected mesh regions in OpenFOAM with the approach. Using this approach the function object can utilise any specified static rectlinear mesh region of the for the LCS evaluation, meaning that the background mesh as well as any other static rectlinear mesh region can be used. In doing so, the function object extracts the mesh and velocity data from the specified mesh region of the and passes it to libcfd2lcs analogously to the previous approaches. Here the type patches are generally passed on as inlet or outlet, as they are treated the same by libcfd2lcs.
As libcfd2lcs also uses the domain decomposition approach and MPI for the parallelisation of the computations, the integration within the parallelisation of OpenFOAM is done in a straightforward manner. The local subdomains of the rectlinear LCS mesh and its velocity data are passed to libcfd2lcs together with an offset, which describes the position of the cell data in the globally (i, j, k) structured data array (see Fig. <ref>). For the MPI communication, the same MPI communicator as used for OpenFOAM is shared with libcfd2lcs. Therefore, the function object can be used for simulations running in parallel or serial. However, if the approach involving an additional LCS mesh is used, special attention is required for the domain decomposition in the preprocessing step. Here the simulation mesh, as well as the LCS mesh, must be cut along the same surfaces to make sure that the mapping of the velocity fields from one mesh to the other works properly.
As already mentioned, the data output of the flow-map and FTLE field data is completely handled by libcfd2lcs. This is due to the fact that the data output interval defined by h can differ from the solver write interval Δ t_sim_write (see section <ref>). Therefore, the results generated by the function object are not stored in corresponding time directories but in a separate folder in the case directory called . Additionally, a directory named is created inside of which all the sub-step data is stored. All data is stored in the Tecplot ASCII data file format (*.dat) and therefore can be visualised in ParaView when opened with its internal Tecplot reader or other common visualisation programs. In addition to this data, the computational overhead generated by the use of the function object with respect to the actual simulation is also output in the solver log file after each simulation time step. This enables the user to examine the computational costs of the LCS evaluation.
§ EXAMPLES OF USAGE
In this section a few examples are presented which are designed to show the functionality and capabilities of the function object. Therefore, example cases are presented in which only a rectlinear simulation mesh, a separate simulation and LCS mesh and a single are used.
§.§ Steady ABC flow
The Arnold-Beltrami-Childress (ABC) flow is an exact periodic solution of the Euler equations and is often used in the literature to verify LCS calculation methods. Therefore this case is also being reviewed here. The velocity field
𝐮=∇×[-Ψ𝐤+∇×(Φ𝐤)]
of the ABC flow can be described using two scalar potentials Ψ and Φ <cit.>, which are defined as
Ψ=-[C sin (y)+B cos (x)]
Φ=A[-x cos (z)+y sin (z)]-Ψ .
In (<ref>), 𝐤 can be any unit vector but is commonly chosen to be the vertical unit vector. This leads to the three expressions of the velocity components
u=A sin (z)+C cos (y)
v=B sin (x)+A cos (z)
w=C sin (y)+B cos (x) .
The parameters A, B and C can be freely selected and influence the properties of the ABC flow. In order to create comparability with literature values, A=0.5, B=0.8, C=0.8 is chosen. In order to test the newly developed function object on this flow configuration a dedicated ABC flow OpenFOAM solver was written. This solver does not solve the Euler equations in the usual sense, but sets the velocity components on a given computational mesh according to (<ref>). Due to the periodicity of the flow solution, the dimensions of the computational mesh used in this case setup are specified as x,y,z ∈ [0,2π] with a mesh size of 100×100×100.
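The ABC velocity field and the forward-time flow map needed for the FTLE evaluation can also be reproduced outside of OpenFOAM. The following Python sketch (an independent cross-check with the parameters chosen above, not the dedicated OpenFOAM solver) integrates tracer trajectories with a fixed-step fourth-order Runge-Kutta scheme:

    import numpy as np

    A, B, C = 0.5, 0.8, 0.8   # parameters as chosen above

    def abc_velocity(p):
        """Steady ABC velocity field; p has shape (..., 3)."""
        x, y, z = p[..., 0], p[..., 1], p[..., 2]
        return np.stack([A * np.sin(z) + C * np.cos(y),
                         B * np.sin(x) + A * np.cos(z),
                         C * np.sin(y) + B * np.cos(x)], axis=-1)

    def flow_map(p0, T=10.0, n_steps=200):
        """Forward flow map over [0, T] by classical RK4 with fixed step size."""
        p, dt = p0.copy(), T / n_steps
        for _ in range(n_steps):
            k1 = abc_velocity(p)
            k2 = abc_velocity(p + 0.5 * dt * k1)
            k3 = abc_velocity(p + 0.5 * dt * k2)
            k4 = abc_velocity(p + dt * k3)
            p = p + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        return p

    # tracers seeded at the cell centres of a coarse grid in [0, 2*pi]^3
    n = 20
    g = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    P0 = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
    PT = flow_map(P0.reshape(-1, 3)).reshape(P0.shape)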
Since the described mesh is rectlinear, no additional LCS mesh is used. Again, for reasons of comparability, an LCS integration time of T=10 s is selected for the LCS evaluation. The results of the LCS evaluation, both in forward- and backward-time, can be seen in Figure <ref>.
In these results the FTLE ridges, which indicate the LCS candidates in the ABC flow, can be seen very clearly. Furthermore, the results agree very well with the results from <cit.>, both qualitatively and quantitatively, which suggests that the new function object calculates the FTLE ridges reliably.
§.§ Time dependent double gyre
Another frequently used flow for the verification of LCS computing algorithms is the time-periodic Rayleigh-Bénard convection flow, often called the double gyre, proposed by Solomon and Gollub <cit.>. The velocity field of this flow can be described by using a stream function ψ
u=-∂ψ/∂ y
v=∂ψ/∂ x .
Here ψ is defined by
ψ(x, y, t)=A sin (π f(x, t)) sin (π y)
with
f(x, t)=a(t) x^2+b(t) x
a(t)=ϵsin (ω t)
b(t)=1-2 ϵsin (ω t)
This leads to the expressions for two-dimensional velocity components
u=-π A sin (π f(x)) cos (π y)
v=π A cos (π f(x)) sin (π y) d f/ d x .
As the name double gyre suggests, this model defines the flow of two two-dimensional gyres enclosed in a rectangle which expand and contract periodically along the x-axis. Therefore, the periodic motion is controlled by ϵ if ϵ≠ 0. Then ϵ describes approximately how far the line separating the gyres moves to the left or right from its centre position <cit.>. Otherwise (ϵ=0), no periodic motion is happening. Furthermore, A specifies the magnitude of the velocity vectors and ω/2π determines the oscillation frequency of the gyres.
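For reference, the velocity field of this model is easy to evaluate directly; the short Python function below (illustrative only, using the parameter names introduced above and the values given in the following paragraph) returns the two velocity components at given positions and time:

    import numpy as np

    A, eps, omega = 0.1, 0.1, 2.0 * np.pi / 10.0   # parameter values used below

    def double_gyre_velocity(x, y, t):
        """Velocity components (u, v) of the time-periodic double gyre."""
        a = eps * np.sin(omega * t)
        b = 1.0 - 2.0 * eps * np.sin(omega * t)
        f = a * x**2 + b * x
        dfdx = 2.0 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return u, v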
Similar to the ABC flow example, a dedicated OpenFOAM solver was written for this case, which sets the velocity field on a given computational mesh according to (<ref>). For comparability, a mesh with the same specifications as in <cit.>,<cit.> and <cit.> was used. It has the dimensions [0,2]×[0,1]×[0,0.1]m and a resolution of 512×256×1 cells. As this mesh is also static and rectlinear no additional LCS mesh was used. For the mathematical model of the flow the parameter values are chosen to be ϵ=0.1, A=0.1 m s^-1 and ω=2π/10 s. Since the oscillation frequency is known, the hydrodynamic time scale can be easily determined by t_ref=2π/ω=10 s. As described in section <ref>, the LCS integration time interval T should be set larger than t_ref. Therefore, it is set to T=1.5· t_ref= 15 s. Figure <ref> shows the forward- and backward-time FTLE fields at t= 15 s of the previously described double gyre flow. Again, the results match very well with the results from <cit.>,<cit.> and <cit.>. This confirms that the function object is able to calculate the correct FTLE fields from velocity fields generated by OpenFOAM.
§.§ Flow around cylinder
As it has already been shown in the previous examples that the function object can calculate the correct FTLE fields from velocity fields provided by OpenFOAM, this example will focus on how to deal with non-rectlinear simulation meshes. For this purpose, a standard flow problem is selected that is very well suited for an LCS evaluation: the flow around an infinitely long cylinder.
The general case setup contains a fluid domain with size [-20,30]×[-20,20]×[-0.5,0.5]m that surrounds a cylinder with diameter D=2m and its centre axis at x=y=0m. The free-stream velocity and the fluid's kinematic viscosity are set to 𝐮^T=(1 0 0)m s^-1 and ν = 0.01m^2s^-1, respectively. This results in a Reynolds number of Re=200, which indicates that vortex shedding behind the cylinder occurs in a barely laminar regime. If we also assume a Strouhal number of St=0.2 at Re=200, the hydrodynamic time scale of this flow is t_ref=D/(u·St)=10s.
Because of the cylinder in the middle of the domain, a computational mesh discretising this domain is no longer rectlinear. Therefore, we consider two different procedures in the LCS evaluation, the first of which is carried out in two different ways.
Starting with the procedure where an additional rectlinear computational mesh is used for the LCS evaluation, the flow domain is discretised with a simulation mesh consisting of 9200 hexahedra (see upper left mesh in Fig. <ref>).
The flow solver that is used to simulate the previously described flow from t=0s to t=120s is with the initial conditions being calculated by . The first additional LCS mesh that is used within this procedure encloses the whole flow domain (see upper right mesh in Fig. <ref>). In order to minimise the loss of information during the mapping of the velocity fields between the two grids, the resolution of the LCS mesh is chosen in a way that it corresponds approximately to the finest resolution in the simulation mesh. This leads to a LCS mesh with 200×160×1 hexahedra. The boundary patch types are set to for the left and right patch (inlet,outlet), to for the bottom and top patch and to for the front and back patch. The LCS integration time T is again based on t_ref and is set to T=1.5· t_ref=15s. For a good animation of the dynamics of the FTLE fields h is chosen to be h=T/10=1.5s. The results of the forward- and backward-time FTLE fields can be seen in Fig. <ref>. They show how the vortices behind the cylinder form large coherent structures, where the FTLE ridges of the backward-time FTLE fields separate different fluid packages that do not mix in the vortex street.
Since the FTLE ridges only appear in a fraction of the overall domain and the LCS evaluation is computation-wise a quite costly operation, a second LCS mesh is prepared. This second LCS mesh is a lot smaller than the first one and encloses only the fraction of the flow domain where the FTLE ridges are expected to show up (see Fig. <ref>). The boundary patches on the smaller LCS mesh and its spacial resolution are also set analogous to its bigger counterpart, leading to a LCS mesh of size [-13,27]×[-7.5,7.5]×[-0.5,0.5] containing 160×60×1 hexahedra. Repeating the computations with the use of the smaller LCS mesh gives the results which are displayed in Fig. <ref> and are found to match with the results from the bigger LCS mesh. This shows that the LCS evaluation, when done with a separate LCS mesh, can be used in a very targeted way. The advantages this brings in terms of computational costs are discussed after considering the second procedure for the LCS evaluation of this flow problem.
The second procedure, which can be used on problems where no single static rectlinear mesh can be constructed, utilises OpenFOAM's functionalities. With regard to the flow problem considered here, an is constructed with the same dimensions as the simulation mesh used previously. It consists of three mesh zones, namely a rectlinear background mesh zone that spans the whole fluid domain, another finer and smaller mesh zone that is used for a finer resolution of the flow and a cylindrical mesh that surrounds the cylinder (see Fig. <ref>). For comparability reasons the finer rectlinear mesh zone has the same dimensions and resolution as the smaller additional LCS mesh considered previously and is therefore specified as the cell zone for the LCS evaluation. Also all other LCS evaluation settings are adopted. The only difference to the previously considered simulations is the used flow solver. Here the flow solver is due to the used . The resulting forward- and backward-time FTLE fields of this simulation can be found in Fig. <ref>. They match with the results from the previously considered procedure which shows that both approaches can be used equally well. The only thing that stands out are the high FTLE values along some boundaries in the studied solutions. These occur because of the way libcfd2lcs handles its inlet and outlet boundary conditions. It fixes out-flowing Lagrangian particles/takeoff coordinates on "open" boundaries and cannot generate new in-flowing particles during the flow map computation. Therefore, high FTLE values occur in the forward-time FTLE fields at "open" boundaries where inflow occurs, since there the most "stretching" happens. Vice versa, high FTLE values occur in the backward-time FTLE fields at "open" boundaries where outflow occurs, since there the most "folding" happens. These high values at "open" boundaries are just artefacts and have to be neglected. The reason they appear more in the approach is that all type patches are passed to libcfd2lcs as "open" boundaries whereas the user can specify all patches problem dependent in the additional LCS mesh approach.
Looking at the computation times of the flow calculations including the LCS evaluation, it becomes evident that LCS evaluation is a very costly operation (see Tab. <ref>). When using the "large" additional LCS mesh the simulation takes approximately 30 times longer than without the LCS evaluation. This can be improved by using the smaller additional LCS mesh. Here the simulation takes 9 times longer than without the LCS evaluation. Since the costs for the LCS evaluations are almost independent of the underlying simulation for a constant grid size, this factor becomes smaller and smaller for more complex simulations. This can also be seen from the fact that the factor is only 2.5 when the approach is used because the computations of the pressure and velocity fields take longer on an . At this point, however, it must be emphasised that the flow considered here is not a highly complex problem, which can also be seen from the simulation time of 1.5 min on a normal mesh and 8 min on an .
§ SUMMARY & CONCLUSION
We provide an OpenFOAM function object based on libcfd2lcs to compute Finite-Time Lyapunov Exponent (FTLE) fields that indicate candidates of Lagrangian Coherent Structures (LCS) and allow to visualise finite-time stretching and folding fields. LCS reveal the robust skeleton of material surfaces and are key to quantitatively assess material transport in time-dependent flows. This enables the OpenFOAM community to assess the geometry of the material transport in any flow quantitatively on-the-fly using principally any OpenFOAM flow solver.
Focusing on the practical aspects, we only give a brief overview of the mathematical foundation as well as how the computation is done in practice. We describe the structure and functionality of the newly developed function object. Further focus is laid on how the function object acts as an interface between OpenFOAM and libcfd2lcs, how parallelisation is ensured and what has to be considered for the output of the generated data.
From validation of the presented function object using simple benchmark problems, a notable computational overhead has been recognised. However, if LCS evaluations are used for much more complex problems as the ones used here, the computational overhead significantly drops and the LCS evaluation no longer accounts for the largest proportion of the computation time. Nevertheless, the user should be aware that the calculation of FTLE fields is expensive and should therefore think carefully about the size and position of the LCS mesh. In addition, consideration should also be given to whether both forward and backward-time FTLE calculations are required or if one of them is sufficient.
|
http://arxiv.org/abs/2307.07492v1 | 20230714172433 | Probing multipartite entanglement through persistent homology | [
"Gregory A. Hamilton",
"Felix Leditzky"
] | quant-ph | [
"quant-ph"
] |
Probing multipartite entanglement through persistent homology

Gregory A. Hamilton^1,2,4 and Felix Leditzky^3,4

^1 Department of Physics, University of Illinois Urbana-Champaign
^2 Institute for Condensed Matter Theory, University of Illinois Urbana-Champaign
^3 Department of Mathematics, University of Illinois Urbana-Champaign
^4 Illinois Quantum Information Science and Technology Center (IQUIST), University of Illinois Urbana-Champaign

August 12, 2023
We propose a study of multipartite entanglement through persistent homology, a tool used in topological data analysis. In persistent homology, a 1-parameter filtration of simplicial complexes called persistence complex is used to reveal persistent topological features of the underlying data set.
This is achieved via the computation of homological invariants that can be visualized as a persistence barcode encoding all relevant topological information. In this work, we apply this technique to study multipartite quantum systems by interpreting the individual systems as vertices of a simplicial complex. To construct a persistence complex from a given multipartite quantum state, we use a generalization of the bipartite mutual information called the deformed total correlation. Computing the persistence barcodes of this complex yields a visualization or `topological fingerprint' of the multipartite entanglement in the quantum state. The barcodes can also be used to compute a topological summary called the integrated Euler characteristic of a persistence complex. We show that in our case this integrated Euler characteristic is equal to the deformed interaction information, another multipartite version of mutual information. When choosing the linear entropy as the underlying entropy, this deformed interaction information coincides with the n-tangle, a well-known entanglement measure. The persistence barcodes thus provide more fine-grained information about the entanglement structure than its topological summary, the n-tangle, alone, which we illustrate with examples of pairs of states with identical n-tangle but different barcodes.
Furthermore, a variant of persistent homology computed relative to a fixed subset yields an interesting connection to strong subadditivity and entropy inequalities. We also comment on a possible generalization of our approach to arbitrary resource theories.
§ INTRODUCTION
Entanglement is a strong form of correlation between quantum systems that lies at the heart of quantum information processing, giving rise to novel protocols in quantum algorithms <cit.>, quantum communication <cit.>, quantum computation <cit.>, quantum cryptography <cit.>, and quantum-enhanced sensing <cit.>.
Due to the fundamental role of entanglement as a resource in these tasks, finding a succinct mathematical characterization of entangled quantum states is a central goal in quantum information theory.
In the simplest case of pure bipartite quantum states, the singular value decomposition gives an efficient tool to accomplish this goal.
Unfortunately, the more general cases of mixed quantum states or multipartite states on systems consisting of more than two parts are significantly more challenging.
This is partly due to the fact that the dimension of the state space grows exponentially with the number of constituents, and hence an exponentially large number of parameters is needed to describe states.
As a result, our understanding of entanglement even for pure multipartite quantum states is rudimentary at best.
A common approach to studying multipartite entanglement is using functionals that measure the correlations in entangled states.
The desired properties of such functionals depend on the operational context.
For example, we may consider the setting where the individual parties of a multipartite system can only perform local operations and classical communication (LOCC) between each other.
Since entangled states cannot be created from scratch using LOCC protocols alone, they form a valuable resource in this context <cit.>.
Entanglement measures, functionals that cannot increase on average under LOCC, aim to quantify the usefulness of such a resource <cit.>.
An important example of an entanglement measure in this work is the n-tangle, which was originally defined for three qubits <cit.> and can be generalized to multi-qubit systems <cit.>.
The LOCC setting is well-motivated from an operational point of view, but rather intractable in mathematical terms <cit.>.
One may consider the relaxed setting of stochastic LOCC (SLOCC) consisting of LOCC protocols that achieve a state transformation only with some positive success probability.
Since SLOCC maps can be identified with local operators, this gives rise to a more tractable mathematical treatment <cit.>.
Similar to LOCC, there are measures that quantify entanglement in the SLOCC context.
Particularly interesting are measures that are invariant under invertible SLOCC transformations corresponding to local invertible operators. These measures can be used to distinguish different SLOCC equivalence classes <cit.>.
Another useful way to measure the correlations in quantum systems is via their usefulness or operational relevance in an information-theoretic task.
The guiding principle is to consider an information-processing task that makes use of the correlations in the given quantum state, and showing that the optimal rate of achieving this task is equal to an entropic measure.
For example, the quantum mutual information of a bipartite state quantifies the amount of local noise that has to be applied to the state in order to decorrelate it <cit.>.
In contrast to the entanglement measures mentioned above, the mutual information measures the total amount of correlation between two systems, including quantum as well as classical correlations.
There are different generalizations of the mutual information to multipartite systems; particularly interesting for us is the so-called total correlation <cit.>, which enjoys a similar operational interpretation as the mutual information, as the total amount of local noise needed to fully decorrelate a multipartite state <cit.>.
Another multipartite generalization of the mutual information is the interaction information, with applications in the study of topologically ordered systems <cit.>, scrambling <cit.>, and the AdS/CFT correspondence <cit.>.
§.§ Overview of main results
In this work, we propose a novel approach to characterizing multipartite entanglement based on a tool from topological data analysis called persistent homology.
The key idea is to build up a 1-parameter filtration of abstract simplicial complexes called a persistence complex, defined by measuring the correlations between quantum systems in a multipartite state using a suitable functional.
We choose a q-deformed Tsallis version of the total correlation as this functional.
Persistent homology then amounts to keeping track of the changes in topology as a function of the filtration parameter, thus revealing persistent topological features that characterize the underlying entanglement structure, as explained in Fig. <ref>.
Our main visualization tool is the persistence barcode of a persistence complex, which can be interpreted as a “topological fingerprint” of multipartite entanglement.
We find that the integrated Euler characteristic of the filtration complex induced by the q-deformed total correlation equals the q-deformed interaction information (<Ref>).
Specializing to q=2, which amounts to choosing the linear entropy as the underlying entropy, this integrated Euler characteristic equals the n-tangle, the entanglement monotone mentioned above (<Ref>).
This result answers a question raised by Eltschka and Siewert <cit.> about the potentially topological nature of an entanglement measure called distributed concurrence that can be expressed as a formula reminiscent of an Euler characteristic.
This distributed concurrence coincides (up to an irrelevant factor) with the integrated Euler characteristic of the persistence complex derived from the q-deformed total correlation via <Ref>, thereby establishing the desired topological interpretation.
Another consequence of <Ref> is that persistent homology provides a more fine-grained analysis of the entanglement structure of a multipartite state than the n-tangle.
We illustrate this point by exhibiting examples of SLOCC-inequivalent pairs of states with identical n-tangle values but differing persistent barcodes in Figures <ref> and <ref>.
These examples demonstrate that the topological fingerprint conveys more information about the multipartite entanglement than the n-tangle alone, and we conjecture that they may be used to rigorously detect SLOCC-inequivalence.
Using a modified technique that computes persistent homology relative to a given subset, we then exhibit an interesting link between persistent homology and entropy inequalities: the integrated Euler characteristic of the relative persistence complex of a tripartite system equals the negative conditional mutual information, and is thus non-positive because of strong subadditivity (<Ref>).
Finally, we discuss how our approach can naturally be generalized to other multipartite correlation functionals, in particular generalized divergences satisfying the data processing inequality.
We argue that the persistent homology arising from this choice is a promising tool to study more general resource theories.
§.§ Related work
Persistent homology has previously been used as a tool to study entanglement in <cit.>.
In these works, the individual systems of a multipartite system are treated as data points, and a bipartite correlation measure is used to assign distances between pairs of systems.
In <cit.> the concurrence is used as the bipartite measure, whereas <cit.> uses an inverse mutual information quantity.
The chosen distance measure is used to build the Vietoris-Rips complex, in which a subset of k+1 points forms a k-simplex whenever the distance of any two data points in the subset is at most equal to some fixed parameter.
Varying this parameter then yields a filtration of simplicial complexes, and persistent homology tools are used to analyze the entanglement of the quantum state.
Here, we use a multipartite correlation measure to build the persistence complex, which is explained in more detail in Section <ref>.
The advantage of this approach is that multipartite correlations in k-simplices are more directly encoded in the persistent homology.
This leads to a more refined analysis of the entanglement structure of a quantum state and yields a well-known entanglement measure as a topological summary.
§.§ Structure of this manuscript
The remainder of this manuscript is organized as follows.
In Section <ref> we fix some notation and define the entropic quantities used in this work.
We then review persistent homology and topological summaries in Section <ref>, giving both an intuitive explanation of the key ideas as well as a formal exposition.
In Section <ref> we present our main results:
We first show in Section <ref> that the q-deformed total correlation is a valid functional for the persistent homology pipeline.
In Section <ref> we prove the advertised results on the topological summaries of the persistence complex in terms of the total correlation, and discuss how persistent homology conveys more fine-grained information about the entanglement structure of a quantum state.
In Section <ref> we show how strong subadditivity implies a non-positivity condition of the integrated Euler characteristic.
We conclude in Section <ref> with a summary of our results and future directions of research.
§ PRELIMINARIES
§.§ Notation
Given a finite-dimensional Hilbert space ℋ we denote by ℬ(ℋ) the algebra of linear operators acting on ℋ. We set |ℋ_A| ≔ dim ℋ_A, where A is a quantum system, and write X_A_1… A_n for operators in ℬ(ℋ_A_1⋯ A_n), where ℋ_A_1⋯ A_n ≔ ℋ_A_1⊗⋯⊗ℋ_A_n. We equivalently write X_A_1⋯ A_n ≡ X_J for a set J = {A_1,…,A_n} of systems. Note the distinction between |ℋ_A| and |A|, the cardinality of a subset A of systems. We denote by 𝒫(ℋ_A) ≔ {ρ∈ℬ(ℋ_A) : ρ≥ 0} the set of positive semidefinite operators on ℋ_A, and by 𝒟(ℋ_A) ≔ {ρ∈𝒫(ℋ_A) : tr(ρ_A) = 1} the set of quantum states or density operators on ℋ_A.
§.§ Entropic measures
For a state ρ_A∈𝒟(ℋ_A) the Tsallis entropy S_q(ρ_A)≡ S_q(A)_ρ for q∈ (0,1)∪(1,∞) is defined as
S_q(A)_ρ = 1/(1-q) (tr(ρ_A^q) - 1),
where here and in the following all logarithms are taken in base 2.
The quantity S_2(A)_ρ = 1 - tr(ρ_A^2) is called the linear entropy, while the von Neumann entropy S(ρ) = -tr(ρlogρ) is recovered in the limit S(A)_ρ≡lim_q→ 1S_q(A)_ρ.
We sometimes drop the ρ subscript when the quantum state is clear.
The quantum relative entropy D(ρ‖σ) is defined as
D(ρ‖σ) = tr(ρ(logρ - logσ)) if supp ρ ⊆ supp σ,
and D(ρ‖σ) = ∞ otherwise,
where supp X denotes the support of an operator X, that is, the orthogonal complement of its kernel.
The relative entropy is an example of a generalized divergence (see <Ref>) as it satisfies the data processing inequality D(ρ_AB‖σ_AB) ≥ D(ρ_A‖σ_A) for all ρ_AB,σ_AB <cit.>.
The mutual information of a bipartite state ρ_AB is defined as
I(A:B)_ρ = S(A)_ρ + S(B)_ρ - S(AB)_ρ = D(ρ_AB‖ρ_A⊗ρ_B).
It has an operational interpretation as the amount of local noise the parties A and B have to apply in order to remove all correlations between each other <cit.>.
There are different generalizations of the mutual information to multipartite systems ={ A_1,…,A_n}.
The total correlation <cit.> is defined as
C()_ρ ≔ ∑_i=1^n S(A_i)_ρ - S()_ρ = D(ρ_ ‖ ⊗_i=1^nρ_A_i).
It enjoys an operational interpretation similar to the mutual information, equalling the total amount of local noise needed to fully decorrelate a multipartite state <cit.>.
Another multipartite generalization of the mutual information is the interaction information, defined as
I()_ρ∑_J⊆(-1)^|J|-1S(J)_ρ.
Specializing to three parties, the interaction information gives the tripartite information
I({ A,B,C}) = S(A) +S(B)+S(C)-S(AB)-S(AC)-S(BC)+S(ABC),
with applications in the study of many-body physics <cit.>, the AdS/CFT correspondence <cit.>, and secret sharing <cit.>.
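To make these quantities concrete, the following self-contained Python/NumPy sketch evaluates the subset entropies, the total correlation and the interaction information for a three-qubit GHZ state; all function names are chosen for this illustration only:

    import numpy as np
    from itertools import combinations

    def partial_trace(rho, keep, dims):
        """Reduced density matrix on the subsystems listed in `keep`."""
        n = len(dims)
        rho = rho.reshape(tuple(dims) * 2)
        n_cur = n
        for i in sorted(set(range(n)) - set(keep), reverse=True):
            rho = np.trace(rho, axis1=i, axis2=i + n_cur)
            n_cur -= 1
        d = int(np.prod([dims[i] for i in keep]))
        return rho.reshape(d, d)

    def entropy(rho):
        """von Neumann entropy in bits."""
        ev = np.linalg.eigvalsh(rho)
        ev = ev[ev > 1e-12]
        return float(-np.sum(ev * np.log2(ev)))

    # three-qubit GHZ state (|000> + |111>)/sqrt(2)
    psi = np.zeros(8); psi[0] = psi[7] = 1.0 / np.sqrt(2.0)
    rho = np.outer(psi, psi.conj())
    dims = (2, 2, 2)

    S = {J: entropy(partial_trace(rho, list(J), dims))
         for k in range(1, 4) for J in combinations(range(3), k)}

    total_correlation = sum(S[(i,)] for i in range(3)) - S[(0, 1, 2)]   # = 3 bits
    interaction_info = sum((-1) ** (len(J) - 1) * S[J] for J in S)      # = 0 bits
    print(total_correlation, interaction_info)

For the GHZ state every single-party and two-party marginal carries one bit of entropy while the global state is pure, so the total correlation equals 3 bits and the interaction information vanishes.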
§ PERSISTENT HOMOLOGY
§.§ Intuitive picture
In our work we view the parties A_i of a multipartite system = { A_1,…,A_n} as vertices of an abstract simplicial complex, that is, a family of sets closed under taking subsets.
The elements of this simplicial complex are determined by a real-valued functional F defined on subsets of the quantum systems, evaluated using a given quantum state of the full system.
For a fixed value of a "filtration parameter" ε∈ℝ, we add the simplex J to the complex if F(J)≤ε.
We require the functional to be monotonic with respect to taking subsets, F(I)≤ F(J) for I⊆ J, such that this indeed defines a valid simplicial complex G(ε).
This yields a filtration of simplicial complexes { G(ε) : ε∈ℝ} called the "persistence complex", with G(ε)⊆ G(ε') if ε≤ε'.
Persistent homology provides a means of keeping track of how the topology changes as a function of the filtration parameter <cit.>.
To this end, we consider the homology groups H_k(·) of the simplicial complexes, which intuitively count the number of k-dimensional holes in a topological space.
The inclusion of simplicial complexes G(ε)⊆ G(ε') for ε≤ε' induces homomorphisms H_k(G(ε))→ H_k(G(ε')) between homology groups of given dimension k, and the k-th persistent homology group is defined to be the image of H_k(G(ε)) under this homomorphism <cit.>.
A persistent barcode is then a unique collection of intervals represented by stacked lines, one stack for each dimension k<cit.>.
The width of a line in the barcode corresponds to the interval of the filtration parameter for which the corresponding homology group is non-trivial; informally speaking, it visualizes how long clusters or holes in the topological space persist.
The number of barcodes of dimension k spanning filtration value ε corresponds to the k-th Betti number β_k(ε), the rank of the k-th homology group at ε.
We refer to Fig. <ref> for a simple example of a persistence complex together with its barcode that has been derived from a graph state on three qubits.
§.§ Formal definition
Our setting for applying persistent homology to quantum information is a set {A_1,…,A_n} of disjoint quantum systems to which we associate an abstract simplicial complex Δ, which in this work we take to be the set of all non-empty subsets of the systems unless specified otherwise.
Given a state ρ∈𝒟(ℋ) of the full multipartite system we consider a functional F_ρ: Δ→ℝ that is monotonic under partial trace:
F(J)_ρ≤ F(K)_ρ if J ⊆ K, for J,K ∈Δ.
The functional chosen in this work is a deformed version of the total correlation of (<ref>), which we define in Section <ref> below.
We also comment on possible alternative choices in Section <ref>.
For a given state ρ∈𝒟(ℋ_𝒜) and functional F_ρ: Δ→ℝ satisfying the monotonicity condition (<ref>), we define a filtration of simplicial complexes { G(ε)_ρ : ε∈ℝ} by
G(ε)_ρ = {J ∈Δ : F(J)_ρ≤ε},
which satisfies G(ε)_ρ⊆ G(ε')_ρ for ε≤ε'.
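A minimal sketch of this sublevel-set construction, assuming the values F(J)_ρ have already been computed for every non-empty subset (the toy vertex labels and functional values below are purely illustrative and monotone under inclusion by construction):

from itertools import combinations

def powerset(vertices):
    # all non-empty subsets of the vertex set, as frozensets
    return [frozenset(c) for r in range(1, len(vertices) + 1)
            for c in combinations(vertices, r)]

def sublevel_complex(F, vertices, eps):
    # simplices J with F(J) <= eps; a valid complex whenever F is monotone under inclusion
    return [J for J in powerset(vertices) if F[J] <= eps]

A = ["A1", "A2", "A3"]
F = {frozenset({"A1"}): 0.0, frozenset({"A2"}): 0.0, frozenset({"A3"}): 0.0,
     frozenset({"A1", "A2"}): 1.0, frozenset({"A1", "A3"}): 1.0,
     frozenset({"A2", "A3"}): 2.0, frozenset({"A1", "A2", "A3"}): 2.0}

for eps in (0.0, 1.0, 2.0):
    print(eps, sorted(tuple(sorted(J)) for J in sublevel_complex(F, A, eps)))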
Given a field K (in this work we take K ≅ℤ_2) we compute the simplicial homology H_k(G(ε)) as follows.
The chain group 𝒞_k is defined as the group of simplicial k-chains of the form ∑_j k_j σ_j, where k_j ∈ K and σ_j is an (oriented) k-simplex.
The boundary operator ∂_k: 𝒞_k→𝒞_k-1 is a homomorphism acting as ∂_k (σ) = ∑_i=0^k (-1)^i(v_0,…,v̂_i,…,v_k), where v_0,…,v_k are the vertices of the simplex σ and v̂_i denotes omission.
We have ∂_k∘∂_k+1 = 0 and hence im ∂_k+1⊆ ker ∂_k.
The homology group H_k ≅ ker ∂_k / im ∂_k+1 identifies the representative chain elements c_k ∈𝒞_k that cannot be written as the boundary of a chain element c_k+1∈𝒞_k+1.
Its elements are formally equivalence classes of k-chains.
The inclusion mapping at the level of simplicial complexes induces a homomorphism at the level of the homology groups: f_∗^kl: H_∗(G(ε_k)) → H_∗(G(ε_l)), satisfying (1) f_∗^kk equates to the identity map and (2) f_∗^km = f_∗^lm∘ f_∗^kl for all k≤ l ≤ m.
This collection of homology groups and homomorphisms indexed by real numbers is termed the persistence module P <cit.>.[
This construction can also be understood in terms of category theory: Define an ℝ-indexed sublevel set filtration functor G: ℝ→Simp by G(ε)_ρ = {J ∈Δ | F(J)_ρ≤ε}, which satisfies G(ε)_ρ⊆ G(ε')_ρ when ε≤ε'.
Here, Simp denotes the category of simplicial complexes.
Denoting by H_k the k-th homology functor with coefficients in a field K, and by Vec the category of vector spaces over K, the functor P_k≡ H_k∘ G: ℝ→Vec can be identified with the persistence module defined above.]
In particular, P_k(ε)_ρ is the k-th simplicial homology group for the subcomplex G(ε)_ρ.
The k-th Betti number β_k(ε)_ρ for ε∈ℝ is defined as the rank of P_k(ε)_ρ.
Under a mild finite-type assumption on the chain groups (which is always satisfied for the complexes considered in this work), the persistence module can be uniquely represented by a finite multiset of intervals in ℝ termed a barcode<cit.>.
This barcode representation is described by a set of tuples {(b,d,k,m)}, where (b,d) ⊆ℝ denotes an interval, k the homological dimension, and m indexing possible degeneracies.
The interval (b,d) functionally corresponds to a pair of simplices (J,K) that create and destroy a homology group, respectively.
Informally, a barcode encodes intervals of ℝ over which homology groups persist.
A barcode can equivalently be represented as a persistence diagramPD_k, which depicts the intervals (b,d) from a fixed homological dimension k as points in ℝ^2.
For a persistence diagram we refer to the abscissa and ordinate as the “birth” and “death” times, respectively.
The distance of a point from the diagonal y=x is proportional to what is termed the lifetime l ≔ |d-b|, serving as a crude quantifier of the “importance” of a homology group: the larger the lifetime, the more “global” a topological feature.
We depict an example barcode of a persistence complex in Fig. <ref>. We note that persistence diagrams often include infinite intervals of the form (b,∞); in practice, these bars are either ignored or addressed via reduced or relative homology, as discussed below.
§.§ Topological Summaries
The methodology formalized here is a “persistent homology pipeline” that maps quantum states to persistence modules P, uniquely represented (for the single-parameter case) as a persistence diagram (measure) PD_i or barcode.
In practical applications, persistence diagrams can be further analyzed using a so-called topological summary (TS) T: P →ℝ. This typically involves the integration of a suitable function with respect to the continuous filtration parameter ε, one important example of which is the integrated Betti number 𝔅_k:
𝔅_k(ε) = ∫_0^εβ_k(ε')dε'.
In plain language, 𝔅_k(ε) is the integral of the Betti number β_k(ε) defined in <Ref> up to filtration value ε.
The total persistence is given by the sum of the integrated Betti numbers:
𝔅(ε) ≔ ∑_k=0^∞𝔅_k(ε).
As we soon demonstrate, the most interesting topological summary in this work is given by the integrated Euler characteristic (IEC) <cit.>:
𝔛(ε) ≔ ∑_k=0^∞(-1)^k𝔅_k(ε).
As the name suggests, the IEC is the integration of the Euler characteristic with respect to the filtration.
The quantities 𝔅(ε) and 𝔛(ε) defined above generally diverge in the limit ε→∞ due to the presence of persistence intervals of the form (b,∞).
These bars arise naturally in dimension k=0, as the connected components of Δ persist indefinitely.
To deal with this divergence, we use the reduced persistent homology, obtained by defining an augmented chain complex
… → 𝒞_1 → 𝒞_0 → ℤ → 0,
where the augmentation map φ: 𝒞_0 → ℤ satisfies φ(∑_k=1^mσ_k) = m mod 2 when choosing ℤ_2 as the field of coefficients.
Essentially, the result of this definition is a redefinition of the Betti numbers as
β̃_k(ε) = β_k(ε) for k > 0, and β̃_0(ε) = max(β_0(ε) - 1, 0).
We use tilde notation to indicate the reduced homology, e.g., 𝔅̃_k denotes the reduced integrated Betti number, and 𝔛̃(ε) the reduced integrated Euler characteristic.
The reduced homology is especially useful if F(K)_ρ=0 for K ∈ V(Δ) (i.e., the vertex set V(Δ) appears at ε = 0 in the filtration), which will be the case for our choice of function discussed in Section <ref> below.
In such cases, the removal of the zero-dimensional infinite bar (0,∞) (corresponding to the connected component in Δ) does not alter any topological summary apart from disregarding the infinite addend.
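These summaries can be computed directly from a barcode. The following sketch (ours, not the authors' MATLAB/Javaplex pipeline) takes a reduced barcode as a list of finite (birth, death, dimension) tuples and returns the integrated Betti numbers and the integrated Euler characteristic up to a chosen filtration value:

def integrated_betti(barcode, k, eps):
    # integral of beta_k over [0, eps]: total overlap of the k-dimensional bars with [0, eps]
    total = 0.0
    for b, d, dim in barcode:
        if dim == k:
            total += max(0.0, min(d, eps) - max(b, 0.0))
    return total

def integrated_euler_characteristic(barcode, eps):
    # alternating sum of the integrated Betti numbers
    kmax = max(dim for _, _, dim in barcode)
    return sum((-1) ** k * integrated_betti(barcode, k, eps) for k in range(kmax + 1))

# toy reduced barcode: two finite 0-dim. bars and one 1-dim. bar
barcode = [(0.0, 0.5, 0), (0.0, 1.0, 0), (0.5, 1.5, 1)]
print(integrated_euler_characteristic(barcode, 2.0))   # 1.5 - 1.0 = 0.5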
§ MAIN RESULTS
§.§ Choice of Functionals
Recall from (<ref>) that the filtration { G(ε)_ρ : ε∈ℝ} is defined in terms of the choice of field K (typically taken to be ℤ_2) used to compute homology, the complex Δ (in this paper we choose Δ = 2^𝒜∖∅), and a functional F_ρ: Δ→ℝ.
In this work we choose as F a q-deformed version of the total correlation measure defined in (<ref>).
For a quantum state ρ∈𝒟(ℋ_𝒜) and a subset J⊆𝒜, the q-deformed total correlation for q>1 is defined as
C_q(J)_ρ = ∑_v ∈ JS_q(v)_ρ - S_q(J)_ρ,
where the sum runs over all vertices v in the simplex J, and S_q is the Tsallis entropy defined in (<ref>).
In the limit q → 1, Eq. (<ref>) gives the total correlation measure from (<ref>),
C(J)_ρ ≔ lim_q→ 1 C_q(J)_ρ = D(ρ_J‖⊗_v ∈ Jρ_v).
As an example, for J={A,B} we have C(J) = I(A:B) = S(A)+S(B)-S(AB), the mutual information of the quantum state ρ_AB.
For J={A,B,C}, we obtain
C({A,B,C})_ρ = D(ρ_ABC‖ρ_A⊗ρ_B⊗ρ_C) = S(A)+S(B)+S(C)-S(ABC).
The total correlation C(J)_ρ has a pleasing operational interpretation as the total amount of local noise that the individual parties in J have to apply one by one in order to decouple their systems from the rest <cit.>.
The case q=2 corresponds to using the linear entropy S_2(J)_ρ in Eq. (<ref>).
We will see in Section <ref> below that this measure is intimately related to an entanglement measure called the n-tangle <cit.>.
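A small numpy sketch of this functional (ours; the three-qubit GHZ test state is only an illustration) that builds the marginals by partial trace and evaluates the Tsallis entropies:

import numpy as np

def ptrace(rho, keep, n):
    # partial trace of an n-qubit density matrix, keeping the qubits listed in `keep`
    keep = sorted(keep)
    rho = rho.reshape([2] * (2 * n))
    removed = 0
    for q in reversed(range(n)):          # trace out the non-kept qubits, highest index first
        if q in keep:
            continue
        n_cur = n - removed
        rho = np.trace(rho, axis1=q, axis2=q + n_cur)
        removed += 1
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def tsallis_entropy(rho, q):
    # S_q(rho) = (1 - tr rho^q) / (q - 1); for q = 2 this is the linear entropy
    evals = np.clip(np.linalg.eigvalsh(rho), 0.0, 1.0)
    return float((1.0 - np.sum(evals ** q)) / (q - 1.0))

def deformed_total_correlation(rho, J, n, q):
    # C_q(J) = sum_{v in J} S_q(v) - S_q(J)
    s_joint = tsallis_entropy(ptrace(rho, J, n), q)
    s_local = sum(tsallis_entropy(ptrace(rho, [v], n), q) for v in J)
    return s_local - s_joint

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
rho = np.outer(ghz, ghz.conj())
print(deformed_total_correlation(rho, [0, 1, 2], n=3, q=2))   # 1.5 for the GHZ state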
We first show that C_q(J)_ρ is monotonic under partial trace and thus determines a proper sublevel set filtration for all q≥ 1, as explained in Section <ref>:
Let ρ∈𝒟(ℋ_𝒜) and q≥ 1.
If I⊆ J⊆𝒜, then
C_q(I)_ρ≤ C_q(J)_ρ.
The Tsallis entropy S_q is subadditive for q>1<cit.>, which shows that
S_q(J) ≤ S_q(I) + S_q(J∖ I) ≤ S_q(I) + ∑_v∈ J∖ I S_q(v).
We then have
C_q(J) - C_q(I) = ∑_x∈ J S_q(x) - S_q(J) - ∑_y∈ I S_q(y) + S_q(I)
= ∑_v∈ J∖ I S_q(v) + S_q(I) - S_q(J) ≥ 0
by (<ref>), which proves the claim for q>1.
For q=1, we may use the subadditivity S(AB)≤ S(A)+S(B) of the von Neumann entropy directly.
It is evident from definition (<ref>) that the q-deformed total correlation C_q is invariant under local unitaries (LU): Let U = U_1 ⊗…⊗ U_n for some local unitaries U_i acting on the i-th system A_i, respectively.
Then for any J⊆𝒜 = { A_1,…,A_n} and an arbitrary state ρ∈𝒟(ℋ_𝒜) we have C_q(J)_ρ = C_q(J)_Uρ U^†, which follows from the unitary invariance of the Tsallis entropy S_q.
Hence, LU-equivalent states have identical barcodes, the converse of which we record for future reference.
If ρ and σ are two quantum states with different barcodes with respect to the deformed total correlation, then ρ and σ are LU-inequivalent.
The Tsallis entropy S_q is continuous with respect to the trace distance ‖ρ-σ‖_1 ≔ tr|ρ-σ| (where |X| ≔ √(X^† X)) on quantum states (see, e.g., <cit.>).
This also implies continuity for the q-deformed total correlation C_q defined in (<ref>) above.
We then have the following stability property of the persistence diagrams <cit.>:
Given two quantum states ρ,σ that are close in trace distance, the persistence diagrams of the persistence complexes of ρ,σ defined in terms of C_q are close in the bottleneck distance.
In particular, any of the topological summaries defined in <Ref> is continuous with respect to the quantum state defining the persistence complex.
Finally, we define a q-deformed version of the interaction information introduced in (<ref>).
For a quantum state ρ∈𝒟(ℋ_𝒜) and q>1,
I_q(𝒜)_ρ ≔ ∑_J⊆𝒜(-1)^|J|-1S_q(J)_ρ,
where we set S_q(∅)_ρ = 0.
We will see in the next section that the q-deformed interaction information I_q arises as the topological summary of a persistence complex built from the q-deformed total correlation C_q.
We note that the functional in Eq. <ref> corresponds to a particular parameterization of a “state index" function proposed in the recent work <cit.> that also leveraged homological tools to understand multipartite entanglement.
Therein, the state index function was proposed as the “Euler characteristic” of some associated non-commutative geometry attached to a density state <cit.>. As we demonstrate below, this quantity does indeed correspond to an integrated Euler characteristic in the context of persistent homology.
§.§ Correlation measures as topological summaries of persistence complexes
We now show that multipartite entanglement measures and inequalities may arise as topological summaries of persistence diagrams.
Recalling that we set Δ = 2^∖∅, we clearly have β_i>0(Δ) = 0 = 1-β_0(Δ).
By computing the reduced persistent homology using the q-deformed total correlation functional C_q(J)_ρ defined in (<ref>) above, we obtain our first main result:
Let ρ∈𝒟(ℋ_𝒜) be a quantum state on 𝒜 = { A_1,…,A_n} and let P be the corresponding persistence module defined via (<ref>) in terms of the q-deformed total correlation C_q in (<ref>).
Then the reduced integrated Euler characteristic 𝔛_q(∞) of this complex is equal to the q-deformed interaction information I_q in (<ref>):
𝔛_q(∞) = ∑_J ⊆𝒜(-1)^|J|-1S_q(J)_ρ = I_q(𝒜)_ρ.
Let us denote by ε_max the largest value of the filtration parameter at which the number of barcodes changes, and let n_k(ε) be the number of k-simplices existing at time ε.
The Euler characteristic χ(ε) for fixed ε can be expressed in terms of the n_k as <cit.>
χ(ε) = ∑_k=0^∞ (-1)^k n_k(ε).
For the integrated Euler characteristic we thus obtain
𝔛(ε_max) = ∫_0^ε_max∑_k=0^∞ (-1)^k n_k(ε) dε = ∑_k=0^∞ (-1)^k ∫_0^ε_max n_k(ε) dε.
Since a simplex J is born at time C_q(J), it will contribute (ε_max - C_q(J)) to the integral.
Hence, denoting by Δ_k the (complete) set of k-simplices, we have
∫_0^ε_max n_k(ε) dε = ∑_J∈Δ_k (ε_max - C_q(J)) = ε_max\binom{n}{k+1} - ∑_J∈Δ_kC_q(J).
Substituting this in (<ref>) gives
𝔛(ε_max) = ε_max∑_k=0^∞(-1)^k \binom{n}{k+1} - ∑_k=0^∞ (-1)^k ∑_J∈Δ_k C_q(J)
= ε_max - ∑_J∈Δ (-1)^|J|-1 C_q(J),
where we used the binomial identity ∑_j=0^n (-1)^j \binom{n}{j}=0 in the last equality.
We calculate the second term in (<ref>) using the definition C_q(J)=∑_x∈ J S_q(x) - S_q(J) of the q-deformed total correlation:
- ∑_J∈Δ (-1)^|J|-1 C_q(J) = - ∑_J∈Δ (-1)^|J|-1∑_x∈ J S_q(x) + ∑_J∈Δ (-1)^|J|-1 S_q(J)
= ∑_J∈Δ (-1)^|J|-1 S_q(J).
The second equality follows from ∑_J∈Δ (-1)^|J|∑_x∈ J S_q(x) = 0, which can be proved using induction over the number of vertices in Δ.
In summary, we obtain the following formula for the integrated Euler characteristic:
𝔛(ε_max) = ε_max + ∑_J∈Δ (-1)^|J|-1 S_q(J).
Taking the reduced homology amounts to omitting in 𝔛_q(ε_max) the contribution of one of the 0-simplices.
This removes the ε_max term in (<ref>), and so we arrive at
𝔛_q(∞) = ∑_J ⊆𝒜(-1)^|J|-1S_q(J)_ρ,
which concludes the proof.
We note that the result of <Ref> does not depend on the choice of field K made in <Ref>.
Moreover, it follows from (<ref>) in the proof of <Ref> that, given any functional F satisfying the monotonicity property (<ref>), the reduced integrated Euler characteristic 𝔛_F(∞) of the persistence complex defined in terms of F is given by the following formula:
𝔛_F(∞) = ∑_J⊆𝒜 (-1)^|J| F(J).
This identity may help in identifying other functionals F that give rise to persistence complexes with interesting topological summaries such as the reduced integrated Euler characteristic.
Eq. (<ref>) identifies a correspondence between the q-deformed total correlation C_q(J)_ρ and the total mutual information I_q(J)_ρ, given by the following identities:
I_q(J)_ρ = -∑_K≤ J(-1)^|K|C_q(K)_ρ,
C_q(J)_ρ = ∑_K ≤ J(-1)^|K|I_q(K)_ρ,
where we have assumed I_q(K)_ρ = 0 if |K| ≤ 1 (our convention is ρ_K = 1 if |K|=0).
Somewhat surprisingly, the choice q=2 when 𝒜 is a set of n qubit labels (with n even) yields an important entanglement monotone: the Minkowski length of the generalized Bloch vector, which is sometimes called the Stokes tensor.
This quantity corresponds to the n-tangle <cit.>, which is defined for a state ρ on an even number of qubits as
τ_n(ρ) = tr(ρσ_2^⊗ nρ^* σ_2^⊗ n)
where σ_2=([ 0 -i; i 0 ]) is the Pauli Y-matrix and ρ^* denotes the entry-wise complex conjugate of ρ.
Define now the generalized n-qubit Bloch vector (Q_(i_1,…, i_n))_i_1,…,i_n via
Q_(i_1,…, i_n) = tr(ρ(σ_i_1⊗…⊗σ_i_n)), i_j = 0,1,2,3,
where σ_0 = I_2 is the identity, σ_μ for μ =1,2,3 are the three Pauli matrices, and 1/2 tr(σ_μσ_ν) = δ_μν<cit.>.
An invertible SLOCC transformation corresponds to a local action of the special linear group SL(2,ℂ)<cit.>.
This induces a transformation of the generalized Bloch vector coordinates in (<ref>) via local operations from the isomorphic group O_0(1,3), defined as the proper Lorentz group that leaves invariant the Minkowski length of the Bloch vector <cit.>:
Q_(n)^2≡1/2^n∑_ι∈ I(-1)^|ι|Q_ι^2.
Here, I is the set of multi-indices (i_1,…,i_n) and for ι∈ I we define |ι| to be the number of qubits for which σ_ι ≔ ⊗_j ∈ [n]σ_ι_j acts non-trivially.
As an example, if n=4 and ι = (0,2,3,0) then |ι| = 2.
While for n odd the Minkowski length (<ref>) vanishes on pure states ρ, for n even the Minkowski length of the Bloch vector exactly corresponds to the n-tangle <cit.>:
Q_(n)^2 = τ_n(ρ).
The generalized Bloch vector also gives a convenient way to express the purity, and therefore the linear entropy, of reduced density matrices. In particular we can write <cit.>
S_2(J)_ρ = 1-1/2^|J|∑_ι∈ I_JQ_ι ^2,
where I_J ⊆ I corresponds to the multi-indices in I with non-trivial support only in the subset J, along with the trivial identity Pauli string.
Continuing our example above with n=4 qubits, the set I_J for the two middle qubits (assuming for simplicity a one-dimensional chain) has the form ι = (0,i,j,0).
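The relation between the n-tangle and the Minkowski length can be checked numerically; the short sketch below (illustrative only, for n = 2 and a Bell state) evaluates both sides from their definitions:

import numpy as np
from itertools import product

paulis = [np.eye(2), np.array([[0, 1], [1, 0]]),
          np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

def n_tangle(rho, n):
    # tau_n(rho) = tr(rho Y^{(x)n} rho* Y^{(x)n})
    Yn = paulis[2]
    for _ in range(n - 1):
        Yn = np.kron(Yn, paulis[2])
    return float(np.real(np.trace(rho @ Yn @ rho.conj() @ Yn)))

def minkowski_length(rho, n):
    # (1/2^n) sum_iota (-1)^{|iota|} Q_iota^2 with Q_iota = tr(rho sigma_iota)
    total = 0.0
    for iota in product(range(4), repeat=n):
        sigma = paulis[iota[0]]
        for i in iota[1:]:
            sigma = np.kron(sigma, paulis[i])
        Q = np.real(np.trace(rho @ sigma))
        total += (-1) ** sum(1 for i in iota if i != 0) * Q ** 2
    return total / 2 ** n

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())
print(n_tangle(rho, 2), minkowski_length(rho, 2))        # both equal 1.0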
The alternating sum form of the Minkowski length Q_(n)^2 in (<ref>) resembles that of Eq. (<ref>).
In fact, this similarity is not just coincidental, as observed in our next main result:
Let ρ∈𝒟(ℋ_𝒜) be a quantum state on n qubits (with n even), and let P be the corresponding persistence module defined via (<ref>) in terms of the total correlation C_2 in (<ref>).
Then the reduced integrated Euler characteristic 𝔛_2(∞) of this complex is equal to the Minkowski length of the generalized Bloch vector, and hence also equal to the n-tangle τ_n:
𝔛_2(∞) = I_2()_ρ =Q_(n)^2= τ_n(ρ).
Starting from the definition of I_2 in (<ref>), we have
I_2(𝒜)_ρ = ∑_J ∈Δ(-1)^|J|-1S_2(J)_ρ
= ∑_J∈Δ(-1)^|J|-1(1-1/2^|J|∑_ι∈ I_JQ_ι ^2)
= 1 + ∑_J ∈Δ(-1)^|J|1/2^|J|∑_ι∈ I_JQ_ι ^2.
We exchange the two summations by summing over ι∈ I, weighted by the number of sets J ∈Δ that are supersets of supp(ι). We then have
I_2(𝒜)_ρ = 1 + ∑_ι∈ IQ_ι^2 ∑_J ⊇supp(ι), J ≠∅ (-2)^-|J|
=1/2^n∑_ι∈ I(-1)^|ι|Q_ι^2.
In the final equality we use the fact that ∑_J ⊇supp(ι), J ≠∅ (-2)^-|J| = (-1)^|ι|/2^n for |ι| >0. For the special case of |ι| = 0, we have
∑_J ⊇supp(ι), J ≠∅ (-2)^-|J| = ∑_J ∈Δ (-2)^-|J| = 2^-n - 1,
which yields the result as claimed.
The fact that 𝔛_2(∞) is equal to the n-tangle shows that 𝔛_2(∞) is monotonic on average under local operations and classical communications (LOCC).
To wit, any LOCC operation Λ on ℋ_𝒜 has a Kraus decomposition
Λ(ρ) = ∑_iK_i^†ρ K_i,
where K_i ≔ ⊗_A_i∈𝒜 K_A_i and ∑_iK_i^†K_i=1_ℋ_𝒜.
This operation yields post-measurement states σ_i= K_i^†ρ K_i/tr(K_i^†ρ K_i) with probability p_i = tr(K_i^†ρ K_i).
An entanglement measure F is now called an entanglement monotone under LOCC <cit.> if
F(J)_ρ≥∑_ip_iF(J)_σ_i.
By direct analogy, we say a topological summary (TS) is an entanglement monotone if
T^ρ≥∑_ip_iT^σ_i,
where T^ρ denotes the TS for state ρ.
It now follows from Theorem <ref> that the reduced integrated Euler characteristic 𝔛_2(∞) = I_2()_ρ is an entanglement monotone.
Identifying conditions under which topological summaries serve as entanglement monotones in the sense of (<ref>) is an intriguing research direction that we will explore in future work.
Eltschka and Siewert <cit.> defined a quantity C_D(ψ) called distributed concurrence, whose square can be expressed (in our notation) as
C_D^2(ψ) = 2 ∑_J⊆ (-1)^|J|-1 S_2(J)_ρ.
They showed that both C_D and C_D^2 are entanglement monotones in the case of n qubits, and C_D^2 is an entanglement monotone if no local dimension exceeds 3.
Evidently, the expression of C_D^2 in (<ref>) coincides with the Euler characteristic 𝔛_2(∞) = I_2()_ρ in Theorem <ref> (up to an irrelevant factor of 2).
These quantities are therefore entanglement monotones also in the more general case of an n-partite system whose local dimensions are at most 3.
Moreover, in <cit.> the authors noticed that (<ref>) is reminiscent of an Euler characteristic and pondered the question whether the distributed concurrence can be assigned a topological meaning.
Here, we answer this question in the affirmative by realizing C_D^2 as the Euler characteristic of a topological space defined in terms of the multipartite entanglement structure of a given quantum state.
More precisely, C_D^2 arises as the integrated Euler characteristic of the persistence complex of a quantum state defined in terms of the 2-deformed total correlation C_2 as given in (<ref>).
A practical consequence of Theorem <ref> is that persistent barcodes provide finer information about multipartite entanglement than 𝔛_2(∞) = τ_n alone, since the latter is the topological summary of the topological data encoded in the persistent barcodes.
We illustrate this in the following with two examples, which make use of the software package Javaplex <cit.> to compute persistence barcodes.[MATLAB code to reproduce the results shown here is available at <https://github.com/felixled/entanglement_persistent_homology>.]
For the first example, we recall the notion of a graph state<cit.>:
Let |+⟩ = (|0⟩+|1⟩)/√(2) be the “plus” state of a qubit system ℂ^2 with orthonormal basis { |0⟩,|1⟩}, and let CZ denote the controlled Z-gate, defined as the two-qubit operation CZ ≔ |0⟩⟨ 0|⊗σ_0 + |1⟩⟨ 1|⊗σ_3 (with σ_3 the Pauli-Z-operator).
For a simple undirected graph G=(V,E) with vertex set V and edge set E, the graph state |G⟩∈ (ℂ^2)^⊗ |V| on |V| qubits is defined as
|G⟩ ≔ ∏_e∈ E CZ_e |+⟩^⊗ |V|,
where the notation CZ_e means that CZ acts on the two vertices incident with the edge e∈ E.
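A small sketch (ours) of this construction, which applies a CZ sign flip for every edge to |+⟩^⊗|V|; the six-vertex ring used as an example is chosen only for illustration and is not meant to reproduce the graphs G_1 or G_2 of the figures:

import numpy as np

def graph_state(num_qubits, edges):
    psi = np.ones(2 ** num_qubits) / np.sqrt(2 ** num_qubits)   # |+>^{(x) n}
    for (i, j) in edges:
        for idx in range(2 ** num_qubits):
            bit_i = (idx >> (num_qubits - 1 - i)) & 1
            bit_j = (idx >> (num_qubits - 1 - j)) & 1
            if bit_i and bit_j:
                psi[idx] *= -1.0        # CZ flips the sign iff both qubits are |1>
    return psi

ring_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
psi_ring = graph_state(6, ring_edges)
print(np.vdot(psi_ring, psi_ring))      # normalization check: 1.0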
For our example we choose |V|=6 and let G_1 and G_2 be the two graphs depicted on the left-hand side of the top and bottom rows of Figure <ref>, respectively.
Note that the graph state |G_2⟩ is local unitarily equivalent to a GHZ state |ϕ⟩ = (|0⟩^⊗ 6 + |1⟩^⊗ 6)/√(2)<cit.>.
It was shown in <cit.> that a graph state on n qubits has n-tangle equal to 1 if all vertices have odd degree, and 0 otherwise.
Since both G_1 and G_2 satisfy the former condition, we have τ_n(|G_1⟩) = 1 = τ_n(|G_2⟩).
We now consider the persistence complexes for |G_i⟩ induced by the 2-deformed total correlation C_2 from (<ref>).
The resulting persistence barcodes, computed in MATLAB using the software package Javaplex <cit.>, are shown in Figure <ref>.
By Theorem <ref> and <ref>, the n-tangle τ_n is equal to the reduced integrated Euler characteristic, which in turn equals the alternating sum of the lengths of all barcodes (omitting the infinitely long persisting 0-dim. barcode).
One can check using the data in Figure <ref> that we indeed have τ_n(|G_1⟩) = 1 = τ_n(|G_2⟩) using the barcode data, as predicted by the results of <cit.>.[For example, for the graph G_1 shown in the top row of Figure <ref>, there are 5 (finite) 0-dim. barcodes of length 1/4 each, 6 1-dim. barcodes of length 1/4 and 4 1-dim. barcodes of length 1/2, 10 2-dim. barcodes of length 1/2, 4 3-dim. barcodes of length 1/2 and one 3-dim. barcode of length 3/4, and finally one 4-dim. barcode of length 1. It follows that
τ_n = 1 = 5·1/4 - 4·1/2 - 6·1/4+10 ·1/2 -4 ·1/2- 3/4+1.
]
However, the barcodes in Figure <ref> are evidently different, which by Lemma <ref> implies that |G_1⟩ and |G_2⟩ are LU-inequivalent.
Since LU-equivalence is the same as SLOCC-equivalence for graph states <cit.>, Lemma <ref> actually implies that |G_1⟩ and |G_2⟩ are in different SLOCC orbits, which is revealed by the rather different topological structures of the underlying persistence complexes.
For example, for the filtration interval ∈[1.25,1.5] the graph state |G_1⟩ shows the simultaneous existence of two 2-dimensional barcodes and a 3-dimensional barcode, whereas in the persistence complex for |G_2⟩ all 2-dimensional holes are immediately closed.
As another example, we consider the normalized versions of the following two 6-qubit vectors, defined in terms of the computational basis { |0⟩,|1⟩} and a parameter t>0:
|χ_4⟩ ∝1/t |111111⟩ + 1/t|111100⟩ + t|000010⟩ + t|000001⟩
|χ_5⟩ ∝√(2)/t |111111⟩ + 1/t |111000⟩ + t |000100⟩ + t|000010⟩ + t |000001⟩.
These states arise from applying the invertible SLOCC-operation A_t = t|0⟩⟨ 0| + 1/t |1⟩⟨ 1| to the first qubit of the states Ξ_4 and Ξ_5 defined in <cit.>.
The states Ξ_4 and Ξ_5 both have vanishing n-tangle and lie in different SLOCC-equivalence classes <cit.>.
The same is hence true for the states χ_4 and χ_5 as well, but we note that neither χ_4 nor χ_5 is in normal form for t≠ 1, since their single-qubit marginals are not all completely mixed.
SLOCC-inequivalence can hence not be inferred just from LU-inequivalence alone <cit.>.
We choose t=4/3 in (<ref>) and compute the persistence complexes of χ_4 and χ_5 with respect to the 2-deformed total correlation C_2 (whose integrated Euler characteristic equals the n-tangle).
The resulting barcodes are shown in <Ref> and reveal the different entanglement structures of these two states, which we take as indicating their SLOCC-inequivalence.
Note that, even though the n-tangle is invariant under local SL(2,ℂ)-operations <cit.>, it cannot itself be used to detect SLOCC-inequivalence of two normalized states.
This is because the n-tangle is homogeneous of degree 2 (as a function of an operator) and hence changes its value under normalization.
As a result, SLOCC-equivalent normalized n-qubit states (with n even) may have different n-tangle values.
This can be fixed by taking another homogeneous SLOCC invariant f(ψ) of degree k, and defining the functional g(ψ) ≔ τ_n(ψ)/f(ψ)^2/k provided that f(ψ)≠ 0.
The functional g is still SLOCC-invariant, and in addition invariant under scaling as a ratio of homogeneous functions of the same degree.
It follows that two states ψ_1 and ψ_2 with f(ψ_i)≠ 0 are SLOCC-inequivalent if g(ψ_1)≠ g(ψ_2).
The proof of <Ref> shows that the integrated Euler characteristic changes linearly with a rescaling of the functional F defining the persistence complex.
Hence, for analyzing the SLOCC-equivalence of two given states ψ_1 and ψ_2 we may apply the construction above: choose a homogeneous SLOCC invariant f(ψ) of degree k with f(ψ_i)≠ 0 and redefine the 2-deformed total correlation as C_2 ↦ C_2/f(ψ)^2/k.
The resulting integrated Euler characteristic is equal to the functional g from above, and the barcodes still encode finer information about the entanglement structure.
For the two states χ_4 and χ_5 in (<ref>), the SLOCC invariant in eq. (7) of <cit.> with _q = { 4,5} and l=6 is non-zero on each, and can hence be used to rescale the persistence complex as described above.
The resulting barcodes are qualitatively identical to those in <Ref>.
We conjecture that such rescaled persistence barcodes can be used more generally to detect SLOCC-inequivalence.
To summarize, the examples in Figures <ref> and <ref> suggest that the persistence barcodes convey more information about the entanglement structure than its topological summary, the n-tangle, alone.
Intuitively, the integrated Euler characteristic forgets the persistence of the homology groups, while this data is represented in the barcodes.
An interesting question for future research is to determine whether the barcodes themselves, or other functions thereof, have an operational interpretation in terms of entanglement monotones.
We conclude this section by stressing that the persistence complexes considered in this paper are defined in terms of the q-deformed total correlation, a function of entropic quantities evaluated on marginals of a multipartite state (see <Ref>).
Such correlation measures are limited in detecting multipartite entanglement, as the following example illustrates <cit.>: Consider a tripartite system A_1A_2A_3 with each system consisting of two qubits, A_i = A_i,1A_i,2 for i=1,2,3.
Denote by |χ_n⟩ = 1/√(2)(|0⟩^⊗ n + |1⟩^⊗ n) an n-partite GHZ state, and consider the two states
|ψ_1⟩_A_1A_2A_3 = |χ_2⟩_A_1,1A_2,1⊗ |χ_2⟩_A_1,2A_3,1⊗ |χ_2⟩_A_2,2A_3,2
|ψ_2⟩_A_1A_2A_3 = |χ_3⟩_A_1,1A_2,1A_3,1⊗ |χ_3⟩_A_1,2A_2,2A_3,2.
In the state ψ_1 each party shares maximally entangled Bell states with the other two parties, whereas in ψ_2 the three parties share two copies of a tripartite GHZ state.
The spectra of all marginals of the two states coincide, and hence the same is true for all functions computed from these spectra, in particular entropies.
But the entanglement in ψ_1 and ψ_2 is rather different operationally, and this is witnessed mathematically e.g. by computing the log-negativity log‖(ψ_k)_A_iA_j^T_j‖_1 (with X^T_j denoting the partial transpose over system A_j) of any two-party marginal of the two states.
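For completeness, a sketch (ours) of the log-negativity used here, i.e. the partial transpose over the second subsystem followed by the trace norm; the two-qubit test states are illustrative, and the same function can be applied to the two-party marginals of ψ_1 and ψ_2:

import numpy as np

def log_negativity(rho, dA, dB):
    # log2 || rho^{T_B} ||_1 for a state rho on a dA x dB bipartite system
    rho_pt = rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    trace_norm = float(np.sum(np.abs(np.linalg.eigvalsh(rho_pt))))   # rho^{T_B} is Hermitian
    return float(np.log2(trace_norm))

bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)                                   # maximally entangled
rho_corr = np.diag([0.5, 0.0, 0.0, 0.5])                          # classically correlated
print(log_negativity(rho_bell, 2, 2), log_negativity(rho_corr, 2, 2))   # 1.0 and 0.0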
We leave it as another open question for future research to determine functionals giving rise to persistence complexes that can detect the different entanglement structures of the states in (<ref>).
§.§ Relative homology and strong subadditivity
Finally, we discuss the concept of relative homology, which is closely related to the reduced homology discussed in Section <ref>, and particularly relevant for our setting of viewing a multipartite state as a simplicial complex.
To define this concept for a given complex Δ, we fix a subcomplex K ≤Δ and consider the relative chain groups 𝒞_i(Δ)/𝒞_i(K), for which the induced boundary map ∂_k' on the quotient group defines the relative homology H_k(Δ,K) = ker∂_k' / im∂_k+1'.
We use superscripts to denote the subcomplex with respect to which the relative homology is taken.
For example, 𝔛^Δ_S is the integrated Euler characteristic (IEC) relative to the subcomplex Δ_S = 2^S for a given subset S ⊆.
Quite remarkably, it turns out that the IEC of a relative persistence complex is intimately connected with strong subadditivity (SSA) <cit.>, a fundamental entropy inequality.
For a tripartite quantum state ρ_ABR, this inequality states that
I(A:B|R) ≔ S(AR) + S(BR) - S(R) - S(ABR)≥ 0,
where the quantity I(A:B|R) is known as quantum conditional mutual information.
Let now 𝒜 = { A,B,R} be a tripartite system, and take the relative homology on 𝒜 with respect to the subset { A,B} for a given quantum state ρ_ABR and the total correlation C(·) defined in (<ref>).
Intuitively, the computation of the relative IEC 𝔛^Δ_{A,B}(∞) in this setting amounts to omitting all contributions from simplices contained in { A,B}.
This leads to the following intriguing result:
Let ρ_ABR be a tripartite quantum state and let P^Δ_A,B be the corresponding persistence module relative to the subcomplex Δ_A,B, defined in terms of the total correlation C(·) from (<ref>).
We then have that
𝔛^Δ_{A,B}(∞) ≔ lim_q→ 1𝔛_q^Δ_{A,B}(∞) = -I(A:B|R)_ρ≤ 0.
We denote once again by ε_max the largest value of the filtration parameter at which the number of barcodes changes.
Functionally, the relative IEC includes contributions from all simplices except for those that occur only in Δ_A,B.
Written out, we have
𝔛^Δ_{A,B}(ε_max) = (ε_max - C(R)) -(ε_max - C(BR))
-(ε_max - C(AR))+ (ε_max - C(ABR))
= C(AR) + C(BR) - C(R) - C(ABR)
= -S(AR) - S(BR) + S(R) + S(ABR)
= -I(A:B|R),
which proves the equality in (<ref>).
The following inequality follows from (<ref>).
Theorem <ref> says that the relative IEC of a persistence complex defined on a tripartite quantum state is always non-positive because of SSA.
This may suggest a deeper connection between entropy inequalities and the persistence complex of a multipartite quantum state, which will be explored in future work.
We can apply the same idea to a bipartite state ρ_AB and take the relative homology with respect to one of the vertices.
This draws a connection to a special case of (<ref>) called subadditivity, S(A)+S(B)-S(AB)≥ 0.
In this case, the relative IEC is non-negative:
Let 𝒜 = {A,B}. Then the relative persistent homology yields
lim_q→ 1𝔛_q^Δ_{A}(∞) = lim_q→ 1𝔛_q(∞) = I(A:B) ≥ 0.
Note that Corollary <ref> cannot simply be obtained from Theorem <ref> via taking R=1 because of the different sets of simplices over which the IEC is summed.
§ CONCLUSION AND FUTURE DIRECTIONS OF RESEARCH
In this work we have proposed a method of analyzing multipartite entanglement using persistent homology.
The key idea is to view the local systems of a multipartite quantum system 𝒜 = { A_1,…,A_n} as vertices on which we define a simplicial complex in terms of a correlation measure evaluated on subsets of vertices for a given quantum state: a simplex J⊆𝒜 is added if the correlation measure does not exceed a given value ε.
Varying this parameter then defines a filtration of simplicial complexes called the persistence complex, and homological tools can be used to track how the topology changes as a function of ε.
The persistence complex is uniquely represented using persistence barcodes such as those in Figures <ref>, <ref> and <ref>.
These barcodes encode the relevant topological information about the underlying persistence complex, and can be further processed to yield topological summaries such as the integrated Euler characteristic (IEC).
Our main results show that the persistence complex built with a deformed version of the total correlation yields a deformed version of the interaction information as its IEC.
Further specializing this to the total correlation based on the linear entropy yields the Minkowski length of the generalized Bloch vector as the IEC, which for an even number of qubits is known to be equal to the n-tangle, a well-known entanglement monotone.
Our persistent homology pipeline thus yields a topological summary that coincides with an operationally relevant measure in entanglement theory.
At the same time, the “topological fingerprint” of a quantum state encoded in its barcode conveys more information about the underlying entanglement structure than the topological summary alone.
We illustrate this with examples of pairs of states with the same n-tangle, but lying in different SLOCC orbits, which is indicated by the corresponding barcodes (Figures <ref> and <ref>).
Finally, we draw a connection between the relative homology of a quantum state and the fundamental strong subadditivity property of the von Neumann entropy.
While our examples exhibited in Figures <ref> and <ref> demonstrate that barcodes may distinguish between different entanglement structures, it would be interesting to find a concrete operational interpretation of these different topological features on the level of barcodes.
Moreover, the emergence of the n-tangle as a topological summary of a persistence complex suggests that our approach could lead to the identification of other entanglement monotones or operationally motivated correlation measures as topological invariants.
In principle, any multipartite entanglement functional satisfying monotonicity under partial trace can be used to build a persistence complex using our construction.
Examples of functionals with this property are the secrecy monotone <cit.>, the concentratable entanglement <cit.>, and what we term the “powerset mean” of any entanglement measure satisfying strong subadditivity (SSA).
Another natural candidate for a functional monotonic under partial trace is a so-called generalized divergence <cit.>.
To see this, recall that in the special case q→ 1 the total correlation C(J)_ρ of a state ρ on ℋ_𝒜 evaluated on a subset J⊆𝒜 can be written as
C(J)_ρ = D(ρ_J‖⊗_A_i∈ Jρ_A_i) = min_σ_A_i D(ρ_J‖⊗_A_i∈ Jσ_A_i),
where D(·‖·) denotes the quantum relative entropy, and the minimization in the second equality is over all states σ_A_i on the single systems A_i.
The expression (<ref>) is particularly useful, since the monotonicity property C(I)≤ C(J) for I⊆ J is a direct consequence of the data processing inequality for D(·‖·).
Moreover, it expresses the functional C(J) as the relative entropy distance to the set of fully uncorrelated states.
This observation leads us to consider the following generalization.
Let 𝐃(·‖·) be any generalized divergence <cit.>, that is, a functional on pairs of positive semidefinite operators satisfying the data processing inequality.
Then given a pair of quantum states (ρ_𝒜,σ_𝒜), we may consider the persistent homology of ρ_𝒜 relative to σ_𝒜 induced by the functional 𝐃(·‖·).
This approach is particularly amenable to the setting of a general resource theory consisting of a set ℱ of “free states” and a set of allowed operations <cit.>.
For example, in (bipartite) entanglement theory ℱ is the set of separable states, and LOCC is the set of allowed operations.
One then considers an entanglement measure called the relative entropy of entanglement <cit.>, defined as E_R(ρ) ≔ min_σ∈ℱ D(ρ‖σ).
For a given resource theory with a fixed set ℱ of free states that is stable under taking partial traces and a suitable generalized divergence 𝐃(·‖·), we may now consider the functional ρ↦min_σ∈ℱ𝐃(ρ‖σ) for our persistent homology.
Inspired by Theorem <ref>, we expect our framework to yield interesting monotones for general resource theories, a thorough investigation of which will be the subject of future work.
*Acknowledgments.
The authors acknowledge helpful conversations with Henry Adams, Yuliy Baryshnikov, Jacob Beckey, Eric Chitambar, Jens Siewert, Juan Pablo Vigneaux, Michael Walter and Shmuel Weinberger.
This research was partially supported through the IBM-Illinois Discovery Accelerator Institute.
|
http://arxiv.org/abs/2307.04476v1 | 20230710105503 | Nitrogen isotope effects on boron vacancy quantum sensors in hexagonal boron nitride | [
"Kento Sasaki",
"Takashi Taniguchi",
"Kensuke Kobayashi"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall",
"cond-mat.mtrl-sci"
] |
[email protected]
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Research Center for Materials Nanoarchitectonics, National Institute for Materials Science, 1-1 Namiki, Tsukuba 305-0044, Japan
[email protected]
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Institute for Physics of Intelligence, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Trans-scale Quantum Science Institute, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Recently, there has been growing interest in researching the use of hexagonal boron nitride (hBN) for quantum technologies.
Here we investigate nitrogen isotope effects on boron vacancy (V_B) defects, one of the candidates for quantum sensors, in ^15N isotopically enriched hBN synthesized using metathesis reaction.
The Raman shifts are scaled with the reduced mass, consistent with previous work on boron isotope enrichment.
We obtain nitrogen isotopic composition dependent optically detected magnetic resonance spectra of V_B defects and determine the hyperfine interaction parameter of ^15N spin to be -64 MHz. Our investigation provides a design policy for hBNs for quantum technologies.
Nitrogen isotope effects on boron vacancy quantum sensors in hexagonal boron nitride
Kensuke Kobayashi
August 12, 2023
====================================================================================
Localized electron spins in solids, such as those in color centers or quantum dots, are the promising platform of quantum technologies.
In most cases, they couple with surrounding nuclear spins; thus, controlling the nuclear spins and their influence are essential.
The isotope enrichment technique has great potential to address this issue<cit.>.
For example, the electron spin coherence time can be improved by enriching nuclear-spin-free isotopes<cit.>, or the electron spin qubit can be labeled by isotopes with low natural composition ratios<cit.>.
To design such isotopically purified platform, it is crucial not only to synthesize isotopically controlled materials but also to estimate the isotopic composition and determine the hyperfine interaction (HFI) parameters of nuclear spins of the isotopes<cit.>.
Recently, it was discovered that electron spins of boron vacancy (V_B) defects in hexagonal boron nitride (hBN) can be used as quantum sensors even at room temperature<cit.>.
A V_B defect has a structure in which a boron atom in hBN is replaced by a vacancy [Fig. <ref>(a)].
Its electron spin is localized around the vacancy site and is significantly affected by the three nearest nitrogen spins.
Stable isotopes of nitrogen are ^14N and ^15N.
The natural composition ratio of ^14N is 99.6%, and ^15N is almost nonexistent (0.4%).
The nuclear spin is one of the major differences between these isotopes.
Since ^15N spin (I=1/2) is only half of ^14N spin (I=1), V_B defects in ^15N isotopically enriched hBN have fewer energy levels than in non-treated hBN.
The higher the occupancy of each level is, the stronger the resonance signal becomes, leading to higher sensitivity.
However, there are few reports on the isotopic enrichment of hBN, most of which are related to boron isotopes<cit.>.
Here we investigate nitrogen isotope-enriched hBN and observe nitrogen isotope effects on V_B defects.
We synthesized the isotopically controlled hBN crystals using metathesis reaction under high pressure<cit.> with commercially available ^15NH_4Cl.
The Raman shifts of the samples are scaled with their reduced mass, which is the effective mass for an equivalent one-body problem of the two-body vibration problem for boron and nitrogen atoms, consistent with previous work on boron isotope enrichment.
We perform optically detected magnetic resonance (ODMR) of V_B defects produced by helium ion implantation and determine the HFI parameter of ^15N spin to be -64 MHz.
The observed significant modification of resonance spectra due to ^15N isotope enrichment will help improve sensitivity, control fidelity, and precise positioning of quantum sensors.
Our investigation provides guidance for the material design of hBNs for quantum technologies.
First, we describe the influence of nitrogen spins on an electron spin (S = 1) of a V_B defect.
In quantum sensing, an external magnetic field of several mT in the direction of the symmetry axis (z) of the V_B defect is often applied <cit.>.
It is helpful to mitigate the sensitivity suppression due to the strain.
In that condition, the spin Hamiltonian can be approximated as <cit.>,
Ĥ ∼ D Ŝ_z^2 + γ_e B_z ·Ŝ_z + ∑_j=1^3 A_zz,(j)Ŝ_z Î_z,(j),
where, Ŝ_z is the electron spin (S=1) operator in the z direction, D is the zero field splitting, γ_e = 28 MHz/mT is the gyromagnetic ratio of the electron spin, B_z is the magnetic field strength, j (= 1,2,3) is a label of nearest-neighbor nitrogen site, A_zz,(j) is the HFI parameter, Î_z,(j) is the nuclear spin operator in the z direction.
Here we ignore the nuclear spin's Zeeman effect and the quadrupole moment<cit.>, which are much smaller than the HFI parameter in the case of the ^14N spin.
In this study, we determine the A_zz of the ^15N spin, ^(15N)A_zz, which has a vital contribution under this quantum sensing condition.
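As an illustration of the secular Hamiltonian above (a numpy sketch of ours, not code from the paper), the matrix for the fully ^15N-substituted configuration can be assembled from Kronecker products; the zero-field splitting value used below is an assumed placeholder, while γ_e, B_z, and A_zz follow the text:

import numpy as np

D = 3470.0       # zero-field splitting in MHz (assumed placeholder, for illustration only)
gamma_e = 28.0   # MHz/mT (from the text)
Bz = 40.0        # mT (from the text)
A15 = -64.0      # 15N hyperfine parameter A_zz in MHz (from the text)

Sz = np.diag([1.0, 0.0, -1.0])          # electron spin-1 operator S_z
Iz = np.diag([0.5, -0.5])               # 15N nuclear spin-1/2 operator I_z
eye2 = np.eye(2)

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

H = (D * kron(Sz @ Sz, eye2, eye2, eye2)
     + gamma_e * Bz * kron(Sz, eye2, eye2, eye2)
     + A15 * (kron(Sz, Iz, eye2, eye2) + kron(Sz, eye2, Iz, eye2) + kron(Sz, eye2, eye2, Iz)))

# m_S = 0 -> -1 transition frequencies: H is diagonal here, so read them off directly
energies = np.diag(H).reshape(3, 8)                # rows: m_S = +1, 0, -1
f_minus = np.unique(np.round(energies[2] - energies[1], 6))
print(f_minus)   # four lines split according to m_I,tot, spaced by |A_zz|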
Next, we show a model of the expected ODMR spectrum.
In the situation when Eq. (<ref>) is valid, both electron and nuclear spins are quantized in the z direction.
The resonance frequency corresponding to the electron spin transition m_S = 0 ↔±1 can be expressed as,
f_±1(m_I,(1),m_I,(2),m_I,(3)) ∼ f_±1,0±∑_j=1^3 A_zz,(j) m_I,(j),
where, f_±1,0 = D ±γ_e B_z is the resonance frequency in the absence of nuclear spins, m_I,(j) is the magnetic quantum number of nuclear spins at site j which can take the values m_I=-1,0,+1 for ^14N spin (m_I=-1/2,+1/2 for ^15N spin).
Assuming that the nuclear spins are unpolarized and that each resonance signal has the same amplitude and line width, the ODMR spectrum is given by
R = 1 - C/N_level∑ L( f_±1(m_I,(1),m_I,(2),m_I,(3)), dν),
where C is the signal amplitude and L(f,dν) is the Lorentzian with a center frequency f and a full width at half maximum dν.
N_level is the number of possible nuclear states of the nearest-neighbor nitrogen spins (m_I,(1),m_I,(2),m_I,(3)), and the summation symbol means summing concerning those states, which will be explained in detail below.
The resonance spectrum [Eq. (<ref>)] of a V_B defect depends on the number n of ^15N among the nearest nitrogen atoms.
We distinguish V_B defects by #n as shown in Figs. <ref>(b–e).
The energy level splittings of these defects are shown in Figs. <ref>(f–i).
Since ^14N spins can take three states (m_I=-1,0,+1), whereas ^15N spins can take only two states (m_I=-1/2,+1/2), N_level of #0, #1, #2 and #3 are 27(=3^3), 18(=3^2×2), 12(=3×2^2), and 8(=2^3), respectively.
To the extent that Eq. (<ref>) is satisfied, all states belonging to m_S=0 and some of the states belonging to m_S=±1 are degenerated.
In the case of m_S=-1 of #0 (#3), there are 7 (4) states whose energies are distinguished by the total nuclear spin quantum number, m_I,tot = ∑_j=1^3 m_I,(j).
Specifically, the degeneracy of energy states m_I,tot = -3, -2, -1, 0, +1, +2, and +3 (-3/2, -1/2, +1/2, and 3/2) are 1, 3, 6, 7, 6, 3, and 1 (1, 3, 3, and 1), respectively [see Figs. <ref>(f) and (i)].
The occupancy of the state with the largest degeneracy is 26% (=7/27) for #0 and 38% (=3/8) for #3.
High occupancy leads to a strong signal, which is advantageous for high sensitivity.
The distances between energy states (=resonance lines) depend on the HFI parameter A_zz of ^14N and ^15N spins.
The gyromagnetic ratio, which characterizes the magnitude of the magnetic moment of the spin, is γ_14N = 3.077 kHz/mT for ^14N spin and γ_15N = -4.316 kHz/mT for ^15N spin.
Since the absolute value of the gyromagnetic ratio is about 1.4 times larger for ^15N spin than for ^14N spin, the spectral separation should get larger for ^15N isotopically enriched hBN.
The larger separation would be helpful to suppress the degradation of control fidelity caused by unintentional driving of neighboring resonance lines.
In this work, we will demonstrate nitrogen isotope effects described above, such as a reduced number of resonance lines and enhanced separation, which are advantageous for quantum sensing.
When measuring an ensemble of V_B defects, the signals of #0 to #3 are averaged.
Specifically, the expected ODMR spectrum is given by,
R_tot = P_0 R_0 + P_1 R_1 + P_2 R_2 + P_3 R_3,
where, R_n is the ODMR spectrum of #n [Eq. (<ref>)] and P_n is the fraction of #n in all V_B defects.
When ^15N isotopic composition, p_15, is spatially uniform, then P_0 = (1 - p_15)^3, P_1 = 3(1 - p_15)^2 p_15, P_2 = 3(1 - p_15 ) p_15^2, and P_3 = p_15^3.
In cases where p_15 is other than 0 or 1, the obtained signal is the sum of #0 to #3.
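The model of the two equations above can be evaluated directly. In the sketch below (ours; the signal amplitude, linewidth, and center frequency are illustrative placeholders, while the hyperfine parameters follow the text), the limits p_15 = 0 and p_15 = 1 give the hB^14N-like and hB^15N-like spectra, respectively:

import numpy as np
from itertools import product
from math import comb

def lorentzian(f, f0, dnu):
    return (dnu / 2) ** 2 / ((f - f0) ** 2 + (dnu / 2) ** 2)

def spectrum_n(f, n15, f0, A14, A15, C, dnu):
    # R for a V_B defect with n15 nearest 15N and (3 - n15) nearest 14N, m_S = 0 -> -1
    m14, m15 = (-1, 0, 1), (-0.5, 0.5)
    configs = list(product(*([m15] * n15 + [m14] * (3 - n15))))
    R = np.ones_like(f)
    for cfg in configs:
        shift = A15 * sum(cfg[:n15]) + A14 * sum(cfg[n15:])
        R -= (C / len(configs)) * lorentzian(f, f0 - shift, dnu)
    return R

def spectrum_total(f, p15, **kw):
    # binomial average over #0 ... #3 for a spatially uniform 15N composition p15
    P = [comb(3, n) * (1 - p15) ** (3 - n) * p15 ** n for n in range(4)]
    return 1 + sum(P[n] * (spectrum_n(f, n, **kw) - 1) for n in range(4))

f = np.linspace(2100, 2550, 1000)        # MHz
kw = dict(f0=2310.0, A14=43.0, A15=-64.0, C=0.1, dnu=50.0)
R_hB14N = spectrum_total(f, 0.0, **kw)   # three 14N: seven closely spaced lines
R_hB15N = spectrum_total(f, 1.0, **kw)   # three 15N: four well separated lines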
Here we describe the preparation of ^15N isotopically enriched hBN crystal.
We verify the metathesis reaction process under high pressure<cit.> using commercially available ^15NH_4Cl reagents as a raw material; NaBH_4 + ^15NH_4Cl = B^15N + NaCl + 4H_2.
By continuing the above reaction for about 30 hours, we obtained hBN crystals that are expected to have a nearly perfect ^15N isotopic composition (hB^15N).
Other hBN single crystals about 1 mm in size are obtained by using Ba-BN as a solvent system <cit.>, where hBN sources are grown within the molten solvent through dissolution and precipitation.
In this case, the nitrogen isotope enrichment in the resulting crystals (hB^14+15N) is not 100% because nitrogen in Ba-BN solvents has a natural isotopic composition.
The ^15N isotopic composition of hB^14+15N is determined by secondary ion mass spectrometry (SIMS) to be about 60%.
In addition, hBN crystal with a natural composition ratio (hB^14N) is used for comparison.
From now, we describe the experimental results.
All the measurements in this work are performed at room temperature.
First, we investigate isotope effect on the phonon energy due to changes in the reduced mass, using a Raman microscope (Nanophoton RAMAN-FM-UTM).
In previous works on boron isotope enrichment <cit.>, it has been shown that the phonon energy scales with the square root of the reduced mass.
Figure <ref>(a) shows the obtained Raman scattering spectra.
The sample with a natural composition ratio, hB^14N, has a Raman shift of 1366 cm^-1.
This value is consistent with the previous work <cit.>.
On the other hand, the Raman shifts for hB^14+15N and hB^15N are 1355 cm^-1 and 1347 cm^-1, respectively.
Clearly, the Raman shift decreases with increasing ^15N isotopic composition, i.e., with increasing reduced mass.
To quantitatively evaluate this behavior, we show the relationship between Raman shift and reduced mass in Fig. <ref>(b).
We calculate the reduced masses of hB^14N, hB^14+15N, and hB^15N assuming p_15 as 0%, 60% (SIMS), and 100%, respectively.
By analyzing the result of Ref. Vuong2017, we obtain,
Δν_r ∼ -537 μ^1/2 + 2691,
where, Δν_r is the Raman shift (unit cm^-1) and μ is the reduced mass (no unit).
The crosses and the solid line in Fig. <ref>(b) are the result of Ref. Vuong2017 and Eq. (<ref>), respectively.
The deviation between them is as small as about 1 cm^-1.
Since our results agree with Eq. (<ref>) within the error of about 2 cm^-1, we confirm that our nitrogen isotope enrichment is successful.
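For a rough numerical check of the relation above, the reduced mass and the predicted Raman shift can be estimated as follows (a sketch of ours; the isotopic masses and the simple abundance-weighted averaging are our assumptions and are not specified in the text):

m_B = 0.199 * 10.013 + 0.801 * 11.009       # mean boron mass, natural 10B/11B abundances

def reduced_mass(p15):
    m_N = (1 - p15) * 14.003 + p15 * 15.000  # mean nitrogen mass for 15N fraction p15
    return m_B * m_N / (m_B + m_N)

def raman_shift(p15):
    return -537.0 * reduced_mass(p15) ** 0.5 + 2691.0   # the fitted relation, in cm^-1

for p15 in (0.0, 0.6, 1.0):
    print(p15, round(reduced_mass(p15), 3), round(raman_shift(p15), 1))
# yields roughly 1365, 1353 and 1345 cm^-1, close to the measured 1366, 1355 and 1347 cm^-1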
Next, we perform ODMR measurements to obtain ^15N isotope effects on V_B defects.
V_B defects are generated by helium ion implantation (acceleration voltage 30 keV, dose 1×10^15 cm^-2) into flakes cleaved with Scotch tape.
The flakes are attached to silicon substrates (with SiO_2 thickness of 90 nm).
We use the homemade confocal microscope <cit.> with optimized optical filters for the photoluminescence (PL) of V_B defects (750∼1000 nm).
A broadband microwave antenna with a copper wire soldered to a coplanar waveguide is used to mitigate unwanted distortions in the broad resonance spectrum of V_B defects.
A permanent magnet is approached from the lower side of the sample in the direction of the optical (z) axis.
Figure <ref>(a) shows the ODMR spectrum (m_S=0↔-1) of hB^14N at B_z ∼ 40 mT.
The broad signal consists of several closely overlapping Lorentzians.
The solid line is the fitted curve using Eq. (<ref>) with p_15 = 0.
It reproduces the experimental result well.
The parameters obtained by this fitting are f_-,0 = 2312 MHz, C = 5.6%, dν = 47 MHz, and ^(14N)A_zz = 43 MHz.
The obtained HFI parameter of ^14N spin is consistent with the values of previous works <cit.> within a typical error of few MHz.
Generally, it is impossible to determine the sign of the HFI parameter from this fitting.
From the positive zero-field splitting of the ground state <cit.> and the spectral change at the ground state level anticrossing <cit.>, we determine that the sign of ^(14N)A_zz is positive.
Note that C and dν depend on the measurement conditions, such as laser power and microwave amplitude.
Next, we show the result of hB^15N in Fig. <ref>(c).
The resonance spectrum clearly consists of four dips, and their separation is larger than in hB^14N.
These are the nitrogen isotope effects on V_B defects.
The solid line is the fitted curve using Eq. (<ref>) with p_15 = 1 and reproduces the experimental result well.
The parameters obtained by this fitting are f_-,0 = 2308 MHz, C = 11%, and dν = 51 MHz, ^(15N)A_zz = ±64 MHz.
The obtained HFI parameter of the ^15N spins, ^(15N)A_zz, is 1.4 times larger in magnitude than the ^(14N)A_zz obtained above (its sign is not yet determined at this point).
It is reasonable considering the ratio of the gyromagnetic ratio of ^14N and ^15N spins.
Our observation supports a number of benefits of ^15N isotope enrichment we expected above, including increased sensitivity and control fidelity.
This is the central result of this work.
In addition, we measure hB^14+15N and obtained that measured spectrum is consistent with the fitting using the HFI parameters and p_15 = 0.6 [Fig. <ref>(b)].
The reason there are only slight undulations in the spectrum is that it contains all signals from #0 to #3 [see Fig. <ref>].
High isotopic composition is necessary to obtain isotope effects useful for quantum sensing.
Finally, to determine the sign of ^(15N)A_zz, we use dynamic nuclear polarization at the excited state level anticrossing (B_z ∼ 70 mT)<cit.>.
In this situation, the angular momentum of the optically polarized electron spins in V_B defects is transferred to the nuclear spins by flip-flops in the excited state; the nuclear spin polarization increases positively <cit.> independent of nitrogen isotopes.
Figure <ref>(d) is the ODMR spectrum of hB^15N at the magnetic field where we observe the largest polarization.
Compared to the Fig. <ref>(c), there is clearly an increase in the signal on the high-frequency side and a decrease in the signal on the low-frequency side.
The polarization of ^15N spins estimated from the area of spectra <cit.> is 16%.
Since it is enhanced to 27% when the laser power is increased from 0.6 mW [Fig. <ref>(d)] to 5 mW [Fig. <ref>(e)], we conclude that this behavior is the result of optical polarization.
The trend of the observed change in resonance is opposite to that of conventional samples with natural composition ratios <cit.>.
It means that the sign of the HFI parameter is opposite to ^(14N)A_zz, i.e., ^(15N)A_zz=-64MHz, which is consistent with the different signs of the gyromagnetic ratio of ^14N and ^15N spin.
In this work, we examine nitrogen isotope effects on V_B defects in nitrogen isotopically enriched hBN.
We measure ^15N isotopically enriched hBN crystals synthesized using metathesis reaction under high pressure<cit.>.
In the hBN crystals with different ^15N isotopic compositions, an isotope effect on the phonon energy due to changes in the reduced mass is confirmed.
The HFI parameter of ^15N spin is determined to be -64 MHz from the fitting of ODMR spectra of V_B defects produced by helium ion implantation.
The demonstrated uncomplicated spectrum of hB^15N is beneficial for achieving high sensitivity.
Further, when combined with ^10B isotope enrichment techniques, the sensitivity will be optimized by improving the coherence properties of V_B defects<cit.>.
Sensor labeling with nitrogen isotopes may enable us to identify multiple sensor locations within a device stacked with two-dimensional materials.
The increased control fidelity and distinct optical polarization resulting from enhanced spectral separation would also make hB^15N useful as a polarization agent <cit.> and platform for quantum information processing.
Furthermore, nitrogen isotope enrichment of hBN is essential in studying color centers other than V_B defects, such as carbon-related defects<cit.>.
Our investigation, which reveals nitrogen isotope effects, is a vital step toward the design of hBN for quantum technologies.
We thank Kenji Watanabe (NIMS) for material preparation and Shu Nakaharai (TUT) for useful discussion, Kohei M. Itoh (Keio) for letting us use the confocal microscope system, and Ryota Akiyama (UTokyo) for supporting the Raman measurement.
This work was partially supported by “Advanced Research Infrastructure for Materials and Nanotechnology in Japan (ARIM)" (Proposal No. JPMXP1222UT1131) of the Ministry of Education, Culture, Sports, Science and Technology of Japan (MEXT), “World Premier International Research Center Initiative on Materials Nanoarchitectonics (WPI-MANA)" supported by MEXT.
This work was supported by Grants-in-Aid for Scientific Research (KAKEN) Nos. JP22K03524, JP19H00656, JP19H05826, JP23H01103, and JP23H02052, and Next Generation Artificial Intelligence Research Center at the University of Tokyo.
[Itoh and Watanabe(2014)]Itoh2014
author author K. M. Itoh and author H. Watanabe, title title Isotope
engineering of silicon and diamond for quantum computing and sensing
applications, MRS Communications 4, 143–157 (2014). https://doi.org/10.1557/mrc.2014.32
[Balasubramanian2009] G. Balasubramanian et al., Ultralong spin coherence time in isotopically engineered diamond, Nature Materials 8, 383–387 (2009). https://doi.org/10.1038/nmat2420
[Ishikawa2012] T. Ishikawa et al., Optical and spin coherence properties of nitrogen-vacancy centers placed in a 100 nm thick isotopically purified diamond layer, Nano Letters 12, 2083–2087 (2012). https://doi.org/10.1021/nl300350r
[Ohashi2013] K. Ohashi et al., Negatively charged nitrogen-vacancy centers in a 5 nm thin ^12C diamond film, Nano Letters 13, 4733–4738 (2013). https://doi.org/10.1021/nl402286v
[Muhonen2014] J. T. Muhonen et al., Storing quantum information for 30 seconds in a nanoelectronic device, Nature Nanotechnology 9, 986–991 (2014). https://doi.org/10.1038/nnano.2014.211
[Veldhorst2014] M. Veldhorst et al., An addressable quantum dot qubit with fault-tolerant control-fidelity, Nature Nanotechnology 9, 981–985 (2014). https://doi.org/10.1038/nnano.2014.216
[Kleinsasser2016] E. E. Kleinsasser et al., High density nitrogen-vacancy sensing surface created via He^+ ion implantation of ^12C diamond, Applied Physics Letters 108, 202401 (2016). https://doi.org/10.1063/1.4949357
[Rabeau2006] J. R. Rabeau et al., Implantation of labelled single nitrogen vacancy centers in diamond using ^15N, Applied Physics Letters 88, 023113 (2006). https://doi.org/10.1063/1.2158700
[vanDam2019] S. B. van Dam et al., Optical coherence of diamond nitrogen-vacancy centers formed by ion implantation and annealing, Physical Review B 99, 161203 (2019). https://doi.org/10.1103/physrevb.99.161203
[Gottscholl2020] A. Gottscholl et al., Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature, Nature Materials 19, 540–545 (2020). https://doi.org/10.1038/s41563-020-0619-6
[Gottscholl2021] A. Gottscholl et al., Spin defects in hBN as promising temperature, pressure and magnetic field quantum sensors, Nature Communications 12, 4480 (2021). https://doi.org/10.1038/s41467-021-24725-1
[Huang2022] M. Huang et al., Wide field imaging of van der Waals ferromagnet Fe_3GeTe_2 by spin defects in hexagonal boron nitride, Nature Communications 13, 5369 (2022). https://doi.org/10.1038/s41467-022-33016-2
[Healey2022] A. J. Healey et al., Quantum microscopy with van der Waals heterostructures, Nature Physics 19, 87–91 (2022). https://doi.org/10.1038/s41567-022-01815-5
[Kumar2022] P. Kumar et al., Magnetic imaging with spin defects in hexagonal boron nitride, Physical Review Applied 18, L061002 (2022). https://doi.org/10.1103/physrevapplied.18.l061002
[Sasaki2023] K. Sasaki et al., Magnetic field imaging by hBN quantum sensor nanoarray, Applied Physics Letters 122, 244003 (2023). https://doi.org/10.1063/5.0147072
[Vuong2017] T. Q. P. Vuong et al., Isotope engineering of van der Waals interactions in hexagonal boron nitride, Nature Materials 17, 152–158 (2017). https://doi.org/10.1038/nmat5048
[Cusc2018] R. Cuscó et al., Isotopic effects on phonon anharmonicity in layered van der Waals crystals: Isotopically pure hexagonal boron nitride, Physical Review B 97, 155435 (2018). https://doi.org/10.1103/physrevb.97.155435
[Haykal2022] A. Haykal et al., Decoherence of V_B^- spin defects in monoisotopic hexagonal boron nitride, Nature Communications 13, 4347 (2022). https://doi.org/10.1038/s41467-022-31743-0
[Janzen2023] E. Janzen et al., Boron and nitrogen isotope effects on hexagonal boron nitride properties, arXiv (2023). https://doi.org/10.48550/ARXIV.2306.13358
[Chen2020] K. Chen et al., Ultrahigh thermal conductivity in isotope-enriched cubic boron nitride, Science 367, 555–559 (2020). https://doi.org/10.1126/science.aaz6149
[TaniguchiXXXX] T. Taniguchi et al., unpublished study.
[Gao2022] X. Gao et al., Nuclear spin polarization and control in hexagonal boron nitride, Nature Materials 21, 1024–1028 (2022). https://doi.org/10.1038/s41563-022-01329-8
[Gracheva2023] I. N. Gracheva et al., Symmetry of the hyperfine and quadrupole interactions of boron vacancies in a hexagonal boron nitride, The Journal of Physical Chemistry C 127, 3634–3639 (2023). https://doi.org/10.1021/acs.jpcc.2c08716
[Taniguchi2007] T. Taniguchi and K. Watanabe, Synthesis of high-purity boron nitride single crystals under high pressure by using Ba–BN solvent, Journal of Crystal Growth 303, 525–529 (2007). https://doi.org/10.1016/j.jcrysgro.2006.12.061
[Stenger2017] I. Stenger et al., Low frequency Raman spectroscopy of few-atomic-layer thick hBN crystals, 2D Materials 4, 031003 (2017). https://doi.org/10.1088/2053-1583/aa77d4
[Misonou2020] D. Misonou et al., Construction and operation of a tabletop system for nanoscale magnetometry with single nitrogen-vacancy centers in diamond, AIP Advances 10, 025206 (2020). https://doi.org/10.1063/1.5128716
[Murzakhanov2022] F. F. Murzakhanov et al., Electron-nuclear coherent coupling and nuclear spin readout through optically polarized V_B^- spin states in hBN, Nano Letters 22, 2718–2724 (2022). https://doi.org/10.1021/acs.nanolett.1c04610
[Gu2023] H. Gu, Y. Nakamura, K. Sasaki, and K. Kobayashi, Multi-frequency composite pulse sequences for sensitivity enhancement in hexagonal boron nitride quantum sensor, Applied Physics Express 16, 055003 (2023). https://doi.org/10.35848/1882-0786/acd1d1
[Shihao2023] S. Ru et al., Robust nuclear spin polarization via ground-state level anti-crossing of boron vacancy defects in hexagonal boron nitride, arXiv (2023). https://doi.org/10.48550/ARXIV.2306.15960
[Jacques2009] V. Jacques et al., Dynamic polarization of single nuclear spins by optical pumping of nitrogen-vacancy color centers in diamond at room temperature, Physical Review Letters 102, 057403 (2009). https://doi.org/10.1103/physrevlett.102.057403
[Broadway2018] D. A. Broadway et al., Quantum probe hyperpolarisation of molecular nuclear spins, Nature Communications 9, 1246 (2018). https://doi.org/10.1038/s41467-018-03578-1
[Jannin2019] S. Jannin, J.-N. Dumez, P. Giraudeau, and D. Kurzbach, Application and methodology of dissolution dynamic nuclear polarization in physical, chemical and biological contexts, Journal of Magnetic Resonance 305, 41–50 (2019). https://doi.org/10.1016/j.jmr.2019.06.001
[Mendelson2020] N. Mendelson et al., Identifying carbon as the source of visible single-photon emission from hexagonal boron nitride, Nature Materials 20, 321–328 (2020). https://doi.org/10.1038/s41563-020-00850-y
[Chejanovsky2021] N. Chejanovsky et al., Single-spin resonance in a van der Waals embedded paramagnetic defect, Nature Materials 20, 1079–1084 (2021). https://doi.org/10.1038/s41563-021-00979-4
[Stern2023] H. L. Stern et al., A quantum coherent spin in a two-dimensional material at room temperature, arXiv (2023). https://doi.org/10.48550/ARXIV.2306.13025
[Scholten2023] S. C. Scholten et al., Multi-species optically addressable spin defects in a van der Waals material, arXiv (2023). https://doi.org/10.48550/ARXIV.2306.16600
§ SPIN HAMILTONIAN
In this section, we explain the spin Hamiltonian.
The spin Hamiltonian of the ground state of a V_B defect is given by
Ĥ = Ĥ_ZFS + Ĥ_Ze + Ĥ_Zn + Ĥ_HFI + Ĥ_QI,
Ĥ_ZFS = D Ŝ_z^2
- E_y (Ŝ_xŜ_y + Ŝ_yŜ_x) - E_x (Ŝ_x^2 - Ŝ_y^2),
Ĥ_Ze = γ_e B_0 ·Ŝ,
Ĥ_Zn = ∑_j=1^3 -γ_(j)B_0 ·Î_(j),
Ĥ_HFI = ∑_j=1^3Ŝ A_HFI,(j)Î_(j),
Ĥ_QI = ∑_j=1^3 P_p(j),(j)Î_p(j),(j)^2 + P_z,(j)Î_z,(j)^2 + P_o(j),(j)Î_o(j),(j)^2,
where, z is the direction perpendicular to the hBN plane (the direction of the symmetry axis of the V_B defect), x and y are the in-plane directions, D is the zero-field splitting (ZFS) including the effects of electric field and strain, γ_e = 28 MHz/mT is the gyromagnetic ratio of electron spin, B_0 is the magnetic field vector, E_x and E_y are strain parameters related to local electric field and crystal strain<cit.>, j (=1,2,3) are labels of nearest-neighbor nitrogen sites, γ_(j) is gyromagnetic ratio of nitrogen nuclear spins, A_HFI,(j) is hyperfine interaction (HFI) tensor, Î_k,(j) is nuclear spin operator in the k direction, and P_k,(j) is the nuclear quadrupole moment in the k direction.
Ĥ_ZFS is the ZFS term and Ĥ_Ze is the Zeeman term of the electron spin.
We assume that the strain terms take the same form as for the NV center in diamond <cit.>, whose symmetry is close to that of the V_B defect.
Typical parameter values for V_B defects are D∼3450MHz and E_x,E_y∼ 50MHz <cit.>.
Ĥ_Zn is the Zeeman term of nuclear spin, Ĥ_HFI is the HFI term, and Ĥ_QI is the nuclear quadrupole moment term.
They are based on the form of Ref. Gracheva2023.
p(j) is the direction from the vacancy (electron spin) to the nearest-neighbor nitrogen site j, and o(j) is the direction of the cross product of p(j) and z.
The gyromagnetic ratio is γ_14N = 3.077 kHz/mT for ^14N spin and γ_15N = -4.316 kHz/mT for ^15N spin.
The interaction with boron and remote nitrogen spins other than the nearest-neighbor ones is small and appears as a broadening of the electron spin resonance linewidth<cit.>, so we do not consider its details.
We introduce an approximation that is valid under quantum sensing conditions.
When a magnetic field is applied with sufficient strength in the direction of the symmetry axis (B_0 = B_z e_z), the effect of strain, which degrades the magnetic field sensitivity, can be ignored.
Specifically, this condition is given by B_z ≫ E_x(y)/γ_e.
Except in the vicinity of the ground state level anticrossing (D/γ_e ∼ 125 mT), the Hamiltonian can be approximated as,
Ĥ_ZFS ∼ D Ŝ_z^2
Ĥ_Ze = γ_e B_z Ŝ_z,
Ĥ_HFI ∼Ŝ_z ∑_j=1^3 ( A_zx,(j)Î_x,(j) + A_zy,(j)Î_y,(j) + A_zz,(j)Î_z,(j) ),
where A_zx, A_zy, and A_zz are the elements of HFI tensor.
Within this approximation, the electron spin is quantized in the z direction.
Then, we also introduce an approximation to the nuclear spin terms.
The HFI tensor consists of the dipole interaction and the Fermi contact interaction.
The element of the dipole interaction tensor between electron and nuclear spins is given by,
^dipoleA_αβ = μ_0/4πh γ_e γ_n/r^3 [ 3 (r·e_α)(r·e_β) - e_α·e_β],
where α (= x,y,z) is the direction of the electron spin, β (= x,y,z) is the direction of the nuclear spin, h is the Planck constant, and r is the position of the nuclear spin with respect to the electron spin.
Since the electron spin is quantized in the z direction, only the α = z term needs to be considered.
Assuming that the electron spin is localized at the vacancy position, r·e_z=0 is satisfied, and we obtain,
^dipoleA_zz = -μ_0/4πh γ_e γ_n/r^3,
^dipoleA_zx = 0,
^dipoleA_zy = 0.
The Fermi contact interaction ^FermiA is a term arising from the overlapping of wave functions of electron and nuclear spins and is zero except for the isotropic component (α = β).
Thus, the HFI term can be approximated as,
Ĥ_HFI ∼Ŝ_z ∑_j=1^3 A_zz,(j)Î_z,(j).
A_zz,(j) and the typical linewidths of the V_B defects are around 40 MHz or larger.
Under typical experimental conditions, they are an order of magnitude larger than the nuclear spin's Zeeman effect and nuclear quadrupole moment.
Therefore, we neglect nuclear spin terms other than HFI and express the effective spin Hamiltonian as,
Ĥ = D Ŝ_z^2 + γ_e B_z Ŝ_z + Ŝ_z ∑_j=1^3 A_zz,(j)Î_z,(j).
It corresponds to Eq. (1) in the main text and is equivalent to ignoring the nuclear spin's Zeeman effect in the Eq. (8) of the Supplementary Information of Ref. Gao2022.
In this condition, each nitrogen nuclear spin is quantized in the z direction, and energy states according to their total quantum number m_I,tot can be observed.
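As an illustration of the effective Hamiltonian above, the following minimal Python sketch builds it for one electron spin coupled to three ^15N nuclear spins and lists the ODMR transition frequencies grouped by m_I,tot. The values of D and γ_e follow the text; the bias field B_z and the hyperfine constant A_zz (taken to be equal for the three nearest-neighbor sites) are assumed illustrative numbers, since the text only states that A_zz is around 40 MHz or larger.

```python
import numpy as np
from functools import reduce

# D and gamma_e are quoted in the text; A_zz and B_z are assumed illustrative values.
D = 3450.0        # MHz, zero-field splitting
GAMMA_E = 28.0    # MHz/mT, electron gyromagnetic ratio
A_ZZ = 45.0       # MHz, assumed hyperfine constant (text: ~40 MHz or larger)
B_Z = 10.0        # mT, assumed bias field along the symmetry axis z

Sz = np.diag([1.0, 0.0, -1.0])   # electron spin S = 1
Iz = np.diag([0.5, -0.5])        # one 15N nuclear spin, I = 1/2
id2 = np.eye(2)

def kron_all(ops):
    """Kronecker product of a list of operators."""
    return reduce(np.kron, ops)

# H = D Sz^2 + gamma_e B_z Sz + Sz * sum_j A_zz Iz_(j), electron x three 15N spins.
H = kron_all([D * (Sz @ Sz) + GAMMA_E * B_Z * Sz, id2, id2, id2])
for j in range(3):
    nuclear = [Iz if k == j else id2 for k in range(3)]
    H += A_ZZ * kron_all([Sz] + nuclear)

# Within this approximation H is diagonal, so the levels group by m_I,tot.
assert np.allclose(H, np.diag(np.diag(H)))
for m_tot in (-1.5, -0.5, 0.5, 1.5):
    f_plus = D + GAMMA_E * B_Z + A_ZZ * m_tot    # m_s = 0 -> +1
    f_minus = D - GAMMA_E * B_Z - A_ZZ * m_tot   # m_s = 0 -> -1
    print(f"m_I,tot = {m_tot:+.1f}: f+ = {f_plus:7.1f} MHz, f- = {f_minus:7.1f} MHz")
```

The printout reproduces the four-line structure expected for three ^15N spins, with adjacent lines separated by A_zz.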
§ ADDITIONAL DATA OF FIGURE 3 IN THE MAIN TEXT
This section contains additional data related to Fig. 3 in the main text.
Figures <ref>(a) and (b) are enlarged images of Figs. 3(a) and (c) in the main text, respectively.
Based on the fitting results, the signals of each resonance line are decomposed and shown.
The signal of hB^15N [Fig. <ref>(b)] has a simpler spectrum with higher contrast and narrower linewidths overall than the conventional case [Fig. <ref>(a)] because it contains fewer resonance lines that are more widely separated.
A slight bias in the signal contrast appears as a deviation from the fitting.
We have not yet identified its cause.
The possible causes are polarization of nuclear spins or frequency dependence of microwave power.
Figure <ref> contains additional data of dynamic nuclear polarization at excited state level anticrossing.
The condition for the excited-state anticrossing is estimated to be 76 mT from the zero-field splitting of 2130 MHz obtained from the ODMR spectrum of the excited state measured at zero field.
We show ODMR spectra around this condition in Fig. <ref>(a).
We observe that the spectrum is biased toward the high-frequency side around 70 mT.
Figure <ref>(b) shows the ^15N spin polarization estimated by <cit.>,
Polarization = ∑_m_I,tot m_I,tot A_m_I,tot / [ (3/2) ∑_m_I,tot A_m_I,tot ],
where A_m_I,tot is the area of the spectrum belonging to the m_I,tot state, estimated from the product of signal amplitude and line width obtained by fitting each spectrum.
The summation symbols in the denominator and numerator are for the possible m_I,tot states.
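A minimal sketch of this estimator is given below, assuming the fitted areas A_m_I,tot are available as a dictionary keyed by m_I,tot; the numerical values are made-up placeholders, not fitted data.

```python
def nuclear_polarization(areas):
    """Polarization = sum_m m*A_m / ((3/2) * sum_m A_m) for three 15N spins (|m_I,tot| <= 3/2)."""
    num = sum(m * a for m, a in areas.items())
    den = 1.5 * sum(areas.values())
    return num / den

# Placeholder areas (signal amplitude x linewidth) for the four m_I,tot groups.
example_areas = {-1.5: 0.10, -0.5: 0.20, 0.5: 0.30, 1.5: 0.40}
print(nuclear_polarization(example_areas))  # ~0.33 for this made-up example
```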
The polarization reaches its maximum around 70 mT, near the condition where it increases with increasing laser power [Fig. <ref>(c)].
It is a typical behavior of optical nuclear spin polarization at the excited state level anticrossing <cit.>.
We determine the sign of the HFI parameter of ^15N spin based on this evidence.
The above observations reveal several behaviors that have not been observed before.
The first is the magnetic field condition at which the polarization is maximized, which differs from that of previous works (∼74 mT) <cit.>.
The second and third are a decrease in signal contrast around 73 mT and a reversal of the polarization sign above 75 mT, respectively.
It is unlikely that the frequency dependence of microwave power is responsible for this since the contrast of the ODMR spectra at the same microwave frequency is very different at different magnetic field conditions.
It remains to be clarified whether this is due to ^15N isotope effects, field misalignment, other defects in the sample, etc.
^15N isotope enrichment may have allowed us to observe such behaviors that could not be observed with the conventional broad anticrossing condition.
We believe that these interesting behaviors will be elucidated in future studies of ODMR spectra and ^15N spin polarization in a wide magnetic field range, including ground state level anticrossing <cit.>.
[Dolde2011] F. Dolde et al., Electric-field sensing using single diamond spins, Nature Physics 7, 459–463 (2011). https://doi.org/10.1038/nphys1969
[Mittiga2018] T. Mittiga et al., Imaging the local charge environment of nitrogen-vacancy centers in diamond, Physical Review Letters 121, 246402 (2018). https://doi.org/10.1103/physrevlett.121.246402
[Gottscholl2020] A. Gottscholl et al., Initialization and read-out of intrinsic spin defects in a van der Waals crystal at room temperature, Nature Materials 19, 540–545 (2020). https://doi.org/10.1038/s41563-020-0619-6
[Gu2023] H. Gu, Y. Nakamura, K. Sasaki, and K. Kobayashi, Multi-frequency composite pulse sequences for sensitivity enhancement in hexagonal boron nitride quantum sensor, Applied Physics Express 16, 055003 (2023). https://doi.org/10.35848/1882-0786/acd1d1
[Ivdy2020] V. Ivády et al., Ab initio theory of the negatively charged boron vacancy qubit in hexagonal boron nitride, npj Computational Materials 6, 41 (2020). https://doi.org/10.1038/s41524-020-0305-x
[Gottscholl2021] A. Gottscholl et al., Spin defects in hBN as promising temperature, pressure and magnetic field quantum sensors, Nature Communications 12, 4480 (2021). https://doi.org/10.1038/s41467-021-24725-1
[Gao2022] X. Gao et al., Nuclear spin polarization and control in hexagonal boron nitride, Nature Materials 21, 1024–1028 (2022). https://doi.org/10.1038/s41563-022-01329-8
[Gracheva2023] I. N. Gracheva et al., Symmetry of the hyperfine and quadrupole interactions of boron vacancies in a hexagonal boron nitride, The Journal of Physical Chemistry C 127, 3634–3639 (2023). https://doi.org/10.1021/acs.jpcc.2c08716
[Haykal2022] A. Haykal et al., Decoherence of V_B^- spin defects in monoisotopic hexagonal boron nitride, Nature Communications 13, 4347 (2022). https://doi.org/10.1038/s41467-022-31743-0
[Shihao2023] S. Ru et al., Robust nuclear spin polarization via ground-state level anti-crossing of boron vacancy defects in hexagonal boron nitride, arXiv (2023). https://doi.org/10.48550/ARXIV.2306.15960
|
http://arxiv.org/abs/2307.07435v1 | 20230714160048 | PIC simulations of stable surface waves on a subcritical fast magnetosonic shock front | [
"M E Dieckmann",
"C Huete",
"F Cobos",
"A Bret",
"D Folini",
"B Eliasson",
"R Walder"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
PIC simulations of stable surface waves on a subcritical fast magnetosonic shock front
Dept. of Science and Technology (ITN), Linköping University, Campus Norrköping, SE-60174 Norrköping, Sweden
[email protected]
Univ Carlos III Madrid, Grp Mecan Fluidos, Leganes 28911, Spain
Univ Castilla La Mancha, ETSI Ind, Ciudad Real 13071, Spain
Univ Castilla La Mancha, ETSI Ind, Ciudad Real 13071, Spain
Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574
F-69230, Saint-Genis-Laval, France
Univ Strathclyde, SUPA, Glasgow G4 0NG, Scotland, UK
Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Centre de Recherche Astrophysique de Lyon UMR5574
F-69230, Saint-Genis-Laval, France
August 2022
We study with particle-in-cell (PIC) simulations the stability of fast magnetosonic shocks. They expand across a collisionless plasma and an orthogonal magnetic field that is aligned with one of the directions resolved by the 2D simulations. The shock speed is 1.6 times the fast magnetosonic speed when it enters a layer with a reduced density of mobile ions, which decreases the shock speed by up to 15% in 1D simulations. In the 2D simulations, the density of mobile ions in the layer varies sinusoidally perpendicularly to the shock normal. We resolve one sine period. This variation only leads to small changes in the shock speed evidencing a restoring force that opposes a shock deformation. As the shock propagates through the layer, the ion density becomes increasingly spatially modulated along the shock front and the magnetic field bulges out where the mobile ion density is lowest. The perturbed shock eventually reaches a steady state. Once it leaves the layer, the perturbations of the ion density and magnetic field oscillate along its front at a frequency close to the lower-hybrid frequency; the shock is mediated by a standing wave composed of obliquely propagating lower-hybrid waves. We perform three 2D simulations with different box lengths along the shock front. The shock front oscillations are aperiodically damped in the smallest box with the fastest variation of the ion density, strongly damped in the intermediate one, and weakly damped in the largest box. The shock front oscillations perturb the magnetic field in a spatial interval that extends by several electron skin depths upstream and downstream of the shock front and could give rise to Whistler waves that propagate along the shock's magnetic field overshoot. Similar waves were observed in hybrid and PIC simulations and by the MMS satellite mission.
Keywords: fast magnetosonic shock, PIC simulations, shock boundary oscillations
§ INTRODUCTION
Shocks in collisionless plasma, in which effects due to Coulomb collisions between charged particles are negligible compared to collective electromagnetic forces, have been studied in the laboratory <cit.>, in the Solar system <cit.>, and by means of numerical simulations <cit.>. Shocks are important structures for the dissipation of energy in collisionless plasma (See <cit.> for a recent review). We consider here perpendicular nonrelativistic fast magnetosonic (FMS) shocks, for which the magnetic field upstream of the shock is oriented orthogonally to its normal and is amplified by the shock crossing. In the reference frame of the shock, the plasma is slowed down, heated, and compressed by the shock crossing.
Fast magnetosonic shocks are categorized according to how the shock speed in the upstream frame of reference compares to the speed of the magnetohydrodynamic FMS wave. If the shock speed is less than 2.7 times the FMS speed <cit.>, the shock is subcritical and its electric cross-shock potential can slow down the plasma to a speed below the FMS speed. Another estimate <cit.> sets this number to a value below 2.7. If the shock is faster, the particle distributions around the shock become nonthermal <cit.>. Such distributions, which can involve for example particle beams or an anisotropic temperature, give rise to instabilities that modify or destroy the shock. Perpendicular subcritical shocks are stationary in time in their rest frame and they have a thin transition layer, which enables accurate measurements of their position.
The Magnetospheric Multiscale Spacecraft (MMS) mission detected ripples on the surface of the Earth's bow shock <cit.>. Surface waves on plasma boundaries <cit.> can be stimulated by perturbations. Perturbations develop out of drift instabilities <cit.> if plasma flows along a stationary boundary like a tangential discontinuity. Perturbations can also grow near shock surfaces. Shock-reflected ions can move far upstream of the shock and drive waves <cit.> letting shocks propagate into an upstream medium with a spatially nonuniform density and magnetic field direction. Lowe and Burgess <cit.> found boundary waves on the shock in their two-dimensional hybrid simulation, which used a kinetic approximation for ions and described electrons with an inertialess fluid. These waves were distributed over a wide wavenumber interval and followed the dispersion relation of the Alfvén wave <cit.>. Burgess and his coworkers extended the simulation domain to three dimensions and examined the interplay of the ripples with ion-driven instabilities and the therefrom resulting waves <cit.>.
Hybrid codes approximate electrons by an inertialess fluid. Resolving their full dynamics is more expensive but it allows particle-in-cell (PIC) simulations to model high-frequency processes, where electron dynamics matters. Several PIC simulation studies have addressed the interplay of subcritical <cit.> and supercritical collisionless shocks <cit.> with the shock-modified upstream plasma. These studies covered a wide range of magnetic field strengths and orientations relative to the shock normal. Alfvénic shock ripples accelerated electrons and led to the growth of high-frequency waves like Whistlers. The plasma was not fully thermalized after its passage through the shock and relaxed through secondary instabilities. On ion gyro-scales, the shock was immersed in a transition layer where many waves and plasma structures were coupled across different spatial and temporal scales.
The aforementioned simulations and the pioneering hybrid simulation in Ref. <cit.> suggest that stable, oscillatory shock surface modes exist in collisionless plasma that can be excited by perturbations of the upstream plasma and travel along the shock surface. In this context, the term stable means that once these surface waves have been excited, their amplitude remains constant or decreases.
The fundamental requirement for the existence of a stable surface wave is the presence of a restoring force, which serves to counteract the effects of perturbations. In the context of gas-dynamic shocks, the mode is an acoustic one and the restoring force is provided by a combination of tangential velocity conservation and the unbalanced pressure field, leading to the formation of oscillating shock ripple patterns. For an ideal gas, the amplitude of these oscillations decreases over time in proportion to t^-3/2, or t^-1/2 in the strong-shock limit, as demonstrated by the works of Roberts <cit.>, Freeman <cit.>, and Zaidel <cit.>. The stability limits, beyond which the shock perturbation may experience non-decaying behavior or even exponential growth, have been extensively studied and documented, starting from the pioneering works of D'yakov <cit.> and Kontorovich <cit.> and continuing to high-energy-density conditions <cit.>. The extension of these results to magnetohydrodynamics may not be trivial since additional magnetic restoring forces may appear. The first work on the stability limits of FMS shocks in ideal conditions, characterized by an ideal gas equation of state and a perfectly conducting gas, was performed by Gardner <cit.>. However, further research is necessary to fully understand the transient evolution of these perturbations. If FMS shock perturbations are damped and the damping rate is lower than their oscillation frequency, they have the potential to generate surface waves, as has been observed at the Earth's bow shock and in PIC and hybrid simulations.
In the previous hybrid- and PIC simulations, the upstream perturbations were driven primarily by the shock-reflected ion beam. In such a setting, the driver of the shock boundary oscillations cannot be separated from the shock boundary, which is characterized by an overshoot of the magnetic amplitude and density over its values downstream of the shock. This separation is necessary if we want to compare the oscillations of the shock boundary to shock modes in (magneto-)hydrodynamic models, which are usually taken to be monochromatic in wavenumber space. We can decouple the driver from the shock boundary by perturbing it once and letting it relax.
We study the evolution of a perturbation of the shock front with one- and two-dimensional particle-in-cell (PIC) simulations. We align the uniform magnetic field of the upstream plasma with the direction of the 2D simulation that is perpendicular to the shock normal. The magnetic field direction is unresolved in the 1D simulations. A thermal pressure gradient drives an initially planar subcritical FMS shock. We follow it through a spatially uniform magnetized ambient plasma with a ratio between the electron thermal pressure and the magnetic pressure ≈ 0.55. The shock propagates across a perturbation layer with a limited extent along the shock propagation direction. The number density of mobile electrons in this layer equals that of the surrounding plasma. The positive charge density in the perturbation layer is subdivided into two components. The number density of mobile ions is constant along the shock propagation direction and is equal to or less than that of the surrounding ambient plasma. An immobile positive charge cloud cancels out the negative net charge.
In the perturbation layer of the 2D simulations, the number density of the mobile ions varies in the direction perpendicular to the shock normal. By selecting a sinusoidally varying perturbation, we isolate a single surface mode. We obtain the following results.
Shocks in the 1D simulations propagate at a lower speed through the perturbation layer and regain their initial speed after they leave it. Based on this finding and the structure of the perturbation layer in the 2D simulations, the position of the shock front in the direction of the average shock normal should vary sinusoidally along the front and its amplitude should grow with time for as long as the shock moves through the perturbation layer. The amplitude does, however, saturate after an initial growth phase and remains constant after that. The density at the shock overshoot and the magnetic field direction also become functions of the position along the shock front. Once the shock leaves the perturbation interval and enters the spatially uniform upstream plasma, the spatial, density-, and magnetic field perturbations perform oscillations around their equilibrium values. Their oscillation frequency is just below the lower-hybrid frequency, and the oscillations also involve the shock mode, which separates the upstream from the downstream plasma. The shock mode and the surface wave thus form an oblique FMS mode near its resonance frequency where it becomes quasi-electrostatic.
Our paper is structured as follows. Section 2 summarizes the numerical scheme used by the PIC code, the initial conditions of our simulations, the FMS mode and its coupling to lower-hybrid waves, and results from 1D simulations. Section 3 presents results from 2D simulations and Section 4 discusses our findings and their implications.
§ THE SIMULATION CODE AND ITS INITIAL CONDITIONS
§.§ The simulation code EPOCH
PIC codes approximate each plasma species i by resolving its velocity distribution function with a cloud of computational particles (CPs). Each CP j of species i is characterized by a charge Q_j and mass M_j with a value Q_j/M_j, which matches the charge-to-mass ratio Q_i/M_i of a particle of species i. Every CP has a position 𝐱_j and velocity 𝐯_j and, hence, a spatially localized current density ∝ Q_j 𝐯_j. The summation of the current density contributions of all CPs yields the macroscopic current density 𝐉(𝐱,t). The electric field 𝐄(𝐱,t), magnetic field 𝐁(𝐱,t) and current density 𝐉(𝐱,t) are defined on a numerical grid and the latter updates 𝐄(𝐱,t) and 𝐁(𝐱,t) according to discretized forms of Ampère's law
∇×𝐁 - 1/c^2∂𝐄/∂ t = μ_0 𝐉,
where μ_0, c are the vacuum permeability and speed of light, and Faraday's law
∇×𝐄 + ∂𝐁/∂ t = 0.
The EPOCH code fulfills ∇·𝐁=0 and Gauss' law ∇·𝐄=ρ/ϵ_0 (ρ,ϵ_0: charge density and vacuum electric permittivity) to round-off precision <cit.>. Once the electromagnetic fields have been updated, they are interpolated to the position of each CP and update its momentum according to the relativistic Lorentz equation. All particle velocity components and, hence, all components of 𝐄, 𝐁, and 𝐉 are updated also in simulations that resolve fewer than 3 spatial dimensions. This computational cycle is repeated for as many time steps Δ_t as necessary to cover the time scale of interest. More details of the numerical scheme of the EPOCH code are given elsewhere <cit.>.
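The particle update described above can be illustrated with the standard Boris scheme (half electric kick, magnetic rotation, half electric kick). The sketch below is a generic illustration, not EPOCH's actual implementation, and the numbers in the example are arbitrary.

```python
import numpy as np

def boris_push(p, q, m, E, B, dt, c=299792458.0):
    """Advance the relativistic momentum p (kg m/s) of one particle over dt
    using the Boris scheme: half electric kick, magnetic rotation, half kick."""
    p_minus = p + 0.5 * q * dt * E
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / (m * c) ** 2)
    t = 0.5 * q * dt * B / (gamma * m)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p_minus + np.cross(p_minus, t)
    p_plus = p_minus + np.cross(p_prime, s)
    return p_plus + 0.5 * q * dt * E

# Example: an electron gyrating in B = B0 e_y with a weak electric field along z.
q_e, m_e = -1.602e-19, 9.109e-31
p = np.array([1.0e-24, 0.0, 0.0])
E = np.array([0.0, 0.0, 1.0e2])
B = np.array([0.0, 1.0e-2, 0.0])
for _ in range(100):
    p = boris_push(p, q_e, m_e, E, B, dt=1.0e-12)
```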
§.§ Initial conditions
All simulations cover the interval -L_x/2 ≤ x ≤ L_x/2. The 2D simulations also resolve an interval 0≤ y ≤ L_y. Values of L_x and L_y vary between simulations. Boundary conditions are periodic in all directions. We model fully ionized nitrogen at the correct ion-to-electron mass ratio because it is widely used in laser-plasma experiments, for example in Ref. <cit.>. Figure <ref> sketches the initial number density distribution of the ions along x.
In the ambient plasma, we set the electron number density to n_e0 and that of the ions to n_i0=n_e0/7. The electron plasma frequency ω_pe=(n_e0 e^2/ϵ_0 m_e)^1/2 (e, m_e, c: elementary charge, electron mass, and light speed) sets the electron skin depth λ_e = c/ω_pe, with which we normalize space. At the time t=0, the electric and magnetic fields are set to 𝐄=(0, 0, 0) and 𝐁=(0,B_0,0) with eB_0/m_e ω_pe=0.084. All ions have a temperature 200 eV. Electrons outside (inside) the dense cloud have a temperature 1000 eV (1500 eV). Thermal diffusion lets more electrons stream from the dense into the ambient plasma than in the opposite direction, which yields an ambipolar electric field that points from the dense to the diluted plasma. This electric field lets the ions of the dense plasma expand into the ambient plasma. It also accelerates ambient electrons into the dense cloud. The accelerated ambient electrons form a beam that interacts with the electrons of the dense cloud. We give the latter a higher temperature in order to reduce the effects of a two-stream instability between both electron populations.
The temperature T_e = 1000 eV of the electrons in the ambient plasma sets their thermal speed v_te=(k_BT_e/m_e)^1/2 (k_B: Boltzmann constant). The ion temperature T_i = 200 eV and T_e set the ion acoustic speed c_s in the ambient plasma. On the time scales of ion-acoustic oscillations, electrons have three degrees of freedom and ions have one. This gives the adiabatic constants γ_e = 5/3 for electrons and γ_i = 3 for ions. The ion-acoustic speed becomes c_s =(k_B(7 γ_e T_e + γ_i T_i)/m_i)^1/2. The Alfvén speed v_A = B_0/(μ_0 n_i0 m_i)^1/2 and c_s define the FMS speed v_fms =(c_s^2+v_A^2)^1/2. Our plasma has the electron plasma beta β = n_e0k_BT_e/(B_0^2/2μ_0)=0.55. Relevant plasma parameters in the uniform ambient plasma and their values are listed in Table <ref>.
Although we selected parameters in our simulation setup, which are representative of some laser-plasma experiments where physical scales matter, we use normalized units scaled to the electron skin depth λ_e, the electron plasma frequency ω_pe, and the speed of light c.
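The derived quantities quoted above can be reproduced from the stated input parameters. The sketch below works in normalized units (speeds in units of c, frequencies in units of ω_pe), so no absolute density is needed; the nitrogen-14 mass ratio is the only added assumption.

```python
import numpy as np

# Inputs quoted in the text (ambient plasma).
Te_eV, Ti_eV = 1000.0, 200.0        # electron / ion temperatures
Z = 7                                # fully ionized nitrogen, n_i0 = n_e0 / 7
wce_over_wpe = 0.084                 # e B0 / (m_e omega_pe)
gamma_e, gamma_i = 5.0 / 3.0, 3.0    # adiabatic indices
me_c2 = 0.511e6                      # eV
mi_over_me = 14.0 * 1836.15          # assumed nitrogen-14 mass ratio

# Thermal, ion-acoustic, Alfven, and fast magnetosonic speeds in units of c.
vte = np.sqrt(Te_eV / me_c2)
cs = np.sqrt((Z * gamma_e * Te_eV + gamma_i * Ti_eV) / (mi_over_me * me_c2))
vA = wce_over_wpe * np.sqrt(Z / mi_over_me)   # B0 / sqrt(mu0 n_i0 m_i) in units of c
vfms = np.hypot(cs, vA)

# Electron plasma beta and lower-hybrid frequency (in units of omega_pe).
beta = 2.0 * Te_eV / (wce_over_wpe**2 * me_c2)
wci = wce_over_wpe * Z / mi_over_me
wpi = np.sqrt(Z / mi_over_me)
wlh = 1.0 / np.sqrt(1.0 / (wci * wce_over_wpe) + 1.0 / wpi**2)

print(f"v_te/c = {vte:.3e}, c_s/c = {cs:.3e}, v_A/c = {vA:.3e}, v_fms/c = {vfms:.3e}")
print(f"beta_e = {beta:.2f}, omega_lh/omega_pe = {wlh:.2e}")
```

With these inputs the script returns an electron plasma beta of about 0.55, consistent with the value quoted above.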
We consider here FMS shocks that propagate through the ambient plasma. Wave dispersive properties of FMS shocks are set by the plasma conditions in the downstream plasma. However, the density and magnetic field amplitude do not vary much between the upstream (ambient) plasma and downstream plasma of a subcritical FMS shock. We discuss wave properties in the ambient plasma and assume that they are also representative of the downstream plasma.
The FMS mode is not dispersive for low wavenumbers k. Since shocks form by a steepening of the wavefront, the wavenumbers of the waves that form the shock increase in time. Eventually, they reach values where the FMS mode becomes dispersive. Simplified expressions for the dispersion relation of FMS waves in collisionless plasma can be obtained by either neglecting space charge, which gives ω_EM(k) = v_fmsk, or by taking the electrostatic limit where the electric field is tied to oscillations of the charge density. The latter gives the lower-hybrid mode ω_ES(k) = (3v_ti^2k^2 + ω_pi^2(ω_ce^2+v_te^2k^2)/(ω_pe^2+ω_ce^2+v_te^2k^2))^1/2 with the ion thermal speed v_ti=(k_BT_i/m_i)^1/2.
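The two limiting dispersion relations can be evaluated numerically. The sketch below uses approximate normalized parameter values consistent with the setup above (in units of c and ω_pe) and locates the wavenumber where the two limits cross.

```python
import numpy as np

# Approximate normalized values (units: c and omega_pe) for the ambient plasma.
v_fms, v_te, v_ti = 1.69e-3, 4.42e-2, 1.24e-4
w_pi, w_ce, w_pe = 1.65e-2, 0.084, 1.0

k = np.linspace(0.05, 5.0, 500)           # wavenumber in units of omega_pe / c
w_em = v_fms * k                          # electromagnetic (low-k) FMS limit
w_es = np.sqrt(3 * v_ti**2 * k**2
               + w_pi**2 * (w_ce**2 + v_te**2 * k**2)
               / (w_pe**2 + w_ce**2 + v_te**2 * k**2))  # electrostatic lower-hybrid limit

# The two limits intersect near k ~ 1, where neither approximation is accurate.
print("crossing near k =", k[np.argmin(np.abs(w_em - w_es))])
print("omega_ES at small k (~omega_lh) =", w_es[0])
```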
We use the noise distribution of the PIC code to track the full wave branch and show how it goes over into limits ω_EM(k) and ω_ES(k). Charge- and current-density fluctuations due to the moving CPs yield electric and magnetic field fluctuations. These fluctuations will have values of k and ω, which are connected to the particle motion. If CPs create fluctuations with such values of k,ω, they can often also absorb them. Hence, strong fluctuations of the electric and magnetic fields tend to reveal locations in k,ω-space where waves resonate with particles. They are strongest close to eigenmodes of the plasma (See <cit.> for a related discussion). The FMS mode compresses the background magnetic field and we can use fluctuations of B_y to track FMS modes in k,ω-space. In our 1D geometry, fluctuations in E_x are always tied to electrostatic charge density fluctuations and we use E_x to identify the lower-hybrid mode in k,ω-space.
Figure <ref> shows the power spectra of the fluctuations in the magnetic B_y and electric E_x components. We computed them by running a 1D PIC simulation that resolved x with the plasma parameters of the ambient plasma discussed above. We Fourier-transformed B_y(x,t) and E_x(x,t) over space and time and multiplied the result with its complex conjugate giving |B_y(k,ω)|^2 and |E_x(k,ω)|^2.
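The |B_y(k,ω)|^2 and |E_x(k,ω)|^2 diagnostics amount to a space-time Fourier transform of the sampled fields. A generic sketch for a field sampled on a uniform (x, t) grid is shown below; the array contents and sampling intervals are placeholders.

```python
import numpy as np

def power_spectrum_kw(field_xt, dx, dt):
    """Return |F(k, w)|^2 for a 2D array with axes (x, t), plus the k and omega axes."""
    F = np.fft.fftshift(np.fft.fft2(field_xt))
    power = np.abs(F) ** 2
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(field_xt.shape[0], d=dx))
    w = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(field_xt.shape[1], d=dt))
    return power, k, w

# Placeholder data: noise on a 512 x 1024 (x, t) grid.
rng = np.random.default_rng(0)
by_xt = rng.standard_normal((512, 1024))
power, k, w = power_spectrum_kw(by_xt, dx=0.02, dt=0.5)
```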
The noise at low wavenumbers k is magnetic and follows ω_EM. At large k, the electric field noise maps out lower-hybrid waves. The waves that connect ω_EM with ω_ES at k ≈ 1 have significant electric and magnetic components. They are thus not represented correctly by either limit.
Shocks form in our simulations just before the expanding ions reach the perturbation layer. We change the number density of mobile ions in this interval in Fig. <ref> and keep that of mobile electrons unchanged. Since we set 𝐄= 0 at t=0, Gauss' law ∇·𝐄= ρ/ϵ_0 implies initially a zero total plasma charge density ρ = 0 everywhere, which places an immobile positive charge density in our perturbation layer. Its electric field cancels out the one caused by the jump in the number density of mobile ions.
The 1D simulations and the 2D simulations 1 and 2 resolve L_x = 160 with 8000 grid cells and end at the time t_sim=14500. The 2D simulation 3 uses 9000 grid cells to resolve L_x,3=180 and follows the shock for the time t_sim,3=1.25t_sim. The 2D simulations 1-3 resolve L_y,1=12 by 600 cells, L_y,2=24 by 1200 cells, and L_y,3 =36 by 1800 cells. The data is averaged over 4 cells in the 1D simulations and over patches of 2× 2 cells in the 2D simulations. In the 2D (1D) simulations, ions and electrons are resolved by 25 CPs (375 CPs) per cell each.
§.§ One-dimensional simulations
We test with two 1D simulations how shocks react to changes in the number density of mobile ions in the perturbation layer. In one 1D simulation, mobile ions account for 0.7n_i0 in the perturbation layer; in the other, they account for 0.4n_i0. According to Fig. <ref>, one shock will be launched at the density jump to the right of the dense cloud and a second one at the density jump to its left. Both will propagate away from the dense cloud and into the ambient plasma. Only the shock that moves to the right propagates through the perturbation layer. The left-moving shock in one of the simulations is taken as the reference shock. We invert the sign of the position x and velocity v_x in our plots so that we can compare the ion phase space density distribution of the left-moving unperturbed shock with that of the right-moving perturbed shock.
Figure <ref> shows ion phase space density distributions. The supplementary movie 1 animates their evolution in time until t_sim=14500.
At t_1=2000, a localized oscillation of the ion phase space density distribution is growing at x≈ 8. We may interpret it as a steepening wave. It breaks shortly after this time and changes into a shock <cit.>. The magnetic field of the shock traps the electrons ahead of it and pushes them upstream. The current of the moving electrons and their space charge induces an electric field. It accelerates the practically unmagnetized ions with ω_cit_sim≪ 2π ahead of the shock until their current balances the electronic one, giving rise to the shock foot. The change in the ion number density near the boundary x=8.9 of the perturbation layer in Figs. <ref>(b, c) is compensated by a faster motion of the ions in the perturbed interval.
At t_2=5000 or ω_lht_2=2.2π, qualitative differences can be observed between the three shocks. Most downstream ions in the interval 12 ≤ x ≤ 16 in Fig. <ref>(d) move at a speed below v_fms while the mean speed of those in Fig. <ref>(e) is about v_fms. The downstream ions in Fig. <ref>(f) are confined to a smaller interval 13 ≤ x ≤ 15 and most ions move faster than v_fms. Despite its faster downstream ions, the shock at x=15 in Fig. <ref>(f) is trailing those in Figs. <ref>(d, e). Fewer ions, which cross the shock, reduce the thermal pressure behind it and, hence, the shock speed in the downstream frame. Figure <ref>(f) also shows that the slowdown of the perturbed shock decreased the speed of the reflected ions. The ions at x≈ 20, which were reflected by the shock when it just entered the perturbation layer, have a speed ≈ 3v_fms. Those at x≈ 17, which were reflected after the slowdown of the shock, reach a speed of only 2.2v_fms.
In Fig. <ref>(d), the mean velocity of the ions is approximately constant for 9.5 ≤ x≤ 16. The ion density change at x≈ 11 is caused by a tangential discontinuity, which balances the high thermal pressure of the blast shell plasma against the combined thermal and magnetic pressure of the shocked ambient plasma. As the shocks in Figs. <ref>(e, f) move into the perturbed interval, the pressure ahead of the tangential discontinuity decreases. Blast shell ions accelerate for 10 ≤ x ≤ 11 in Figs. <ref>(e, f) in the ambipolar electric field of the blast shell's density gradient and decelerate at larger x, transferring their momentum to the downstream plasma.
Figures <ref>(d, e, f) show a structure in the shock-reflected ion beam at x≈ 21 and v_x/v_fms≈ 2.8. It develops when the shock-reflected ion beam, which has not yet developed at the snapshots for t_1=2000, catches up with the ions at the front of the rarefaction wave that is visible in Figs. <ref>(a, b, c) at high speeds for x>8 (See supplementary movie 1).
Figure <ref> shows the ion phase space density distributions at later times. We also plot the distributions of the ion densities that correspond to those of the phase space density.
At t_3=7000 (ω_lht_3 =3.1π), the shocks are about to leave the perturbation layer. As before, the mean velocity of the downstream ions increases, and the shock speed in their rest frame decreases with a decreasing number density of the mobile ions ahead of the shock. A substantial lag is observed in particular for the shock in Fig. <ref>(c). The ion densities for this time are plotted in Fig. <ref>(d). In the overshoot of the unperturbed shock, the ion density reaches the value 3. It decreases to just below 2 downstream of the shock. The shocks, which move through the perturbation layer, have a lower density of their downstream plasma, and the ion density at their overshoot reaches about 3 times that of the mobile ions in the perturbation layer. At t_sim=14500 (ω_lht_sim =6.4π), the shock speed and the mean velocity of the downstream ions behind the shocks are similar. The ion phase space vortices <cit.> in Fig. <ref>(f, g) have been separated from the shock by the influx of shocked upstream plasma. The density distributions near the overshoots in Fig. <ref>(h) are similar for all shocks apart from a displacement along x.
Figure <ref> quantifies the impact of the changed density of mobile ions on the shock speed and position. We identify the shock position as the location with the largest curvature of the ion density and track it over time. We averaged the ion density over several grid cells to reduce statistical fluctuations, which lowers the accuracy with which we can determine the spatial position of the shock's overshoot. The method we use to determine the shock position also does not always find the exact position of the shock, in particular in the perturbation layer with the mobile ion density 0.4n_i0, which created several outliers in the data. Inaccuracies will be visible in particular in the velocity data, which is obtained by differentiating the noisy position data. Nevertheless, the computed curves reveal trends that are confirmed by the supplementary movie 1. Figure <ref>(a) plots the separation of the reference shock, which moves through the unperturbed plasma, from the two shocks that move through the perturbed plasma. We start plotting the separation after t=2500, when the shocks have fully reformed in Figs. <ref>(a-c).
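A sketch of this tracking diagnostic is given below: smooth the ion density along x, take the location of the largest curvature as the shock position, and differentiate the tracked position over a sliding window to estimate the speed. The smoothing length and window size are assumed values.

```python
import numpy as np

def shock_position(x, n_i, smooth_cells=16):
    """Position of the largest curvature |d^2 n_i / dx^2| of the smoothed ion density."""
    kernel = np.ones(smooth_cells) / smooth_cells
    n_smooth = np.convolve(n_i, kernel, mode="same")
    curvature = np.gradient(np.gradient(n_smooth, x), x)
    return x[np.argmax(np.abs(curvature))]

def shock_speed(times, positions, window=5):
    """Finite-difference speed of the tracked position over a sliding window."""
    return (positions[window:] - positions[:-window]) / (times[window:] - times[:-window])
```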
The shock moving through the perturbation layer with the mobile ion density 0.7n_i0 falls steadily behind the unperturbed shock until t≈ 8000 when Fig. <ref>(b) shows that it crosses the upper boundary x=20.8 of the perturbed interval. The shock, which moves through the plasma with the mobile ion density 0.4n_i0, leaves the perturbation layer last. Hence it is slowed down until t≈ 10^4. Figure <ref>(b) obtains the shock speed from the change of its position over a time interval ≈ 640. The speeds of the shocks remain above 1.3 v_fms for all times. Given that the FMS wave with frequencies just below ω_lh is dispersive and that its phase speed is well below v_fms at wave numbers k≈ 1 in Fig. <ref>, the Mach numbers are higher.
§ TWO-DIMENSIONAL SIMULATIONS
§.§ A comparison of the boundary oscillations in the 2D simulations
In what follows, the term "simulation j" with 1 ≤ j ≤ 3 refers to the 2D simulation with the length L_y,j along y. We express the number density of mobile ions n_i(x,y) in units of n_i0 and the in-plane magnetic field is B(x,y)=(B_x^2+B_y^2)^1/2/B_0. The magnetic B_z component remains at noise levels. The perturbation layer covers again 8.9 ≤ x ≤ 20.8. Within the perturbation layer of simulation j, the density of mobile ions varies as n_i(x,y) = 0.7+0.3sin(2π y / L_y,j) for 0 ≤ y ≤ L_y,j. We take the shock that moves in the direction of decreasing x<0 as the unperturbed reference shock.
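A sketch of the mobile-ion density used in the 2D runs is given below, in units of n_i0; the grid dimensions match simulation 1 and the ambient value outside the perturbation layer is unity. The dense driver cloud is not included here.

```python
import numpy as np

def mobile_ion_density(x, y, L_y, x_lo=8.9, x_hi=20.8):
    """n_i / n_i0 on a 2D grid: 0.7 + 0.3 sin(2 pi y / L_y) inside the
    perturbation layer x_lo <= x <= x_hi, and 1 elsewhere."""
    X, Y = np.meshgrid(x, y, indexing="ij")
    n = np.ones_like(X)
    inside = (X >= x_lo) & (X <= x_hi)
    n[inside] = 0.7 + 0.3 * np.sin(2 * np.pi * Y[inside] / L_y)
    return n

x = np.linspace(-80.0, 80.0, 8000)   # simulation 1: L_x = 160
y = np.linspace(0.0, 12.0, 600)      # simulation 1: L_y,1 = 12
n_i = mobile_ion_density(x, y, L_y=12.0)
```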
Figure <ref> shows n_i(x,y) and B(x,y) at the time t=7000, when the perturbed shocks in the three simulations have left the perturbation layer and entered the spatially uniform ambient plasma. Figures <ref>(a, c, e) show in each simulation j a localized peak of the ion density at x≈ 22. All shocks lag behind the reference shock. The shape of the density peak varies between the simulations. With respect to the upstream region, the shock front is concave near y=10 in simulation 3 and follows the isocontour of the magnetic amplitude. It is almost planar at y≈ 5 in simulation 2 and convex at y≈ 3 in simulation 1. We also observe a structure with a reduced ion density, which is a signature of an ion phase space vortex like the ones shown in Fig. <ref>, at x≈ 16 and a non-planar front of the dense blast shell for x≈ 14.
Figures <ref>(b, d, f) show that the distributions of B(x,y) react to the density modulations of the shock fronts. The varying density of mobile ions in the perturbation layer affected the balance between the thermal- and magnetic pressures downstream of the shock and the ram pressure of the upstream ions. The magnetic field expanded in the upstream direction in intervals with a low density of mobile ions. We observe bent magnetic field lines at the shock and around the ion phase space vortex. The curves x_c,j(y)=22.5-A_jsin(2π y/L_y,j) with A_j/L_y,j= 0.021 follow the modulation of the front of B(x,y).
The supplementary movie 2 (simulation 1), movie 3 (simulation 2), and movie 4 (simulation 3) show the evolution of the shocks in the perturbation layer. It demonstrates that the density and magnetic field perturbations grow, saturate, and remain stationary until they leave it. This differs from what we observed in Fig. <ref>(a) where the distance between the reference shock and the perturbed shock steadily increased until the shock left the perturbation layer.
The density oscillation along y and around x≈ 22 in Fig. <ref>(a, c, e) induces an electric field that points from the dense to the dilute plasma and accelerates ions. It acts as a standing surface wave with a wavenumber k_y,j=2π/L_y,j. Electrons can move freely along the magnetic field and the perturbation could thus oscillate in the ion-acoustic mode with the frequency ω_cs,j = c_s k_y,j. If it does, it will oscillate with ω_cs,1≈ 0.36ω_lh, ω_cs,2=0.24ω_lh, and ω_cs,3=0.12ω_lh in simulations 1 to 3. The density perturbation along y is also coupled to the shock with its average normal along x. The upstream plasma that crosses the shock is compressed by a FMS mode with a frequency close to ω_lh. The large difference between the ion-acoustic frequency and ω_lh allows us to distinguish between both.
For every data time step, we determine the maximum value of the ion density along x as a function of y, which gives us a density distribution n_i,max(y,t). We also determine, for each y, the location where B(x,y)=1.67 is first reached when coming from the upstream region, which gives us the evolution in time of a curve x_B(y,t) that is similar to the solid curve in Figs. <ref>(b, d, f). The imaginary part of the fundamental mode of the spatial Fourier transform of x_B(y,t) and n_i,max(y,t) gives us the amplitudes x_B(k_y,j,t) and n_i,max(k_y,j,t). Figure <ref> plots x_B(k_y,j,t)/L_y,j and n_i,max(k_y,j,t)/L_y,j.
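In practice, this diagnostic amounts to extracting the first Fourier coefficient of each profile along y. A minimal sketch of such a routine is given below; it is our own illustration of the procedure, not the authors' analysis script:

```python
import numpy as np

def fundamental_sine_amplitude(f_of_y):
    """Amplitude of the fundamental sin(2*pi*y/L_y) component of a
    periodic profile f(y) sampled on a uniform grid covering one period.

    This mirrors the diagnostic in the text: the imaginary part of the
    k_y = 2*pi/L_y Fourier coefficient isolates the sine-like boundary
    perturbation that was seeded in the perturbation layer.
    """
    N = f_of_y.size
    F = np.fft.rfft(f_of_y) / N
    # Index 1 is the fundamental mode; the factor 2 accounts for the
    # conjugate mode (the sign depends on the FFT convention).
    return 2.0 * F[1].imag

# Applied at every output time to x_B(y, t) and n_i,max(y, t), this yields
# the curves x_B(k_y,j, t) and n_i,max(k_y,j, t) plotted in the figure.
```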
In simulation 1 x_B(k_y,j,t) and n_i,max(k_y,j,t) are strongly damped. Both curves reach extrema at around t=1.1× 10^4 and converge to a steady state after that. Weaker damping in simulation 2 yields pronounced extrema at t≈ 10^4 and weaker ones at t≈ 1.3 × 10^4. Both curves in simulation 3 oscillate in antiphase with the period ≈ 6000. The oscillation frequency is lowest in simulation 1 and it is similar in simulations 2 and 3, which is not what we expect from ion-acoustic oscillations with ω_cs,1=2ω_cs,2 and ω_cs,1=3ω_cs,3. Simulation 3 gives us the oscillation frequency 0.75ω_lh or 6.25ω_cs,3.
Lower-hybrid waves have been analyzed in uniformly magnetized plasma <cit.>. They trap electrons magnetically, and the trapped electrons confine the ions electrically. Lower-hybrid waves require that the trapped electrons do not move far along the magnetic field during 2πω_lh^-1. Their wavevector, which is aligned with the density gradient, therefore has to be almost perpendicular to the magnetic field. The perturbation in our 2D simulation rotates the magnetic field direction and the density gradient by different amounts, so the two do not remain mutually orthogonal. This rotation sets an upper limit on the amplitude of the spatial oscillations of the magnetic field and, hence, on the perturbation amplitude in our 2D simulations.
The angle θ between the wavevector of an undamped lower-hybrid mode and the magnetic field is limited to the range |90^∘ - θ | ≲θ_max. In a plasma with equal electron and ion temperatures, the angular range can be estimated from the condition cos^2θ≲ m_e/m_i, which yields θ_max≈ 0.4^∘ for nitrogen ions.
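This estimate uses only the electron-to-ion mass ratio. As a quick numerical check (assuming fully ionized nitrogen-14 and neglecting the electron contribution to the ion mass), the limiting deviation from 90^∘ can be evaluated directly:

```python
import numpy as np

m_e = 9.109e-31      # electron mass [kg]
m_p = 1.673e-27      # proton mass [kg]
m_i = 14.0 * m_p     # nitrogen-14 ion mass (assumed)

# cos^2(theta) <= m_e/m_i  gives  |90 deg - theta| <= arcsin(sqrt(m_e/m_i))
theta_max = np.degrees(np.arcsin(np.sqrt(m_e / m_i)))
print(f"theta_max = {theta_max:.2f} degrees")   # ~0.36 degrees, i.e. about 0.4
```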
The perturbed shock propagates into an upstream plasma, which is spatially homogeneous and therefore does not favor a specific propagation direction of the boundary oscillation. Hence, the surface wave and the mode that compresses the upstream plasma constitute a standing wave, which is composed of modes that propagate obliquely to the shock normal and in opposite directions. We can estimate the propagation angle θ_j of these modes relative to the background magnetic field as follows. According to the ion velocity distribution in the interval 37 ≤ x ≤ 41 in Fig. <ref>(e), the wavenumber of the lower-hybrid mode, which sustains the shock and forms the component of the wave that points along its normal, is k_x,s≈π. The wavenumber of the surface wave that moves along the shock boundary is k_y,j=2π/L_y,j. The propagation angles θ_j = arctan(k_x,s/k_y,j) are thus θ_1 = 80.5^∘, θ_2 = 85.2^∘, and θ_3 = 86.8^∘.
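The arithmetic behind these angles is simple to reproduce. The sketch below assumes the box lengths L_y,j = 12, 24, and 36 electron skin depths quoted in the discussion section; with those values it recovers the angles stated above:

```python
import numpy as np

k_xs = np.pi                         # wavenumber of the shock-sustaining mode
L_y = np.array([12.0, 24.0, 36.0])   # assumed box lengths along y (skin depths)
k_y = 2.0 * np.pi / L_y              # surface-wave wavenumbers

theta = np.degrees(np.arctan(k_xs / k_y))
print(theta)   # -> [80.5, 85.2, 86.8] degrees
```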
Shock modes do not have to be undamped as they are continuously fed with energy from the inflowing upstream plasma. The wave vector of the shock mode may however be rotated into a direction with less damping. Another aspect is that lower-hybrid waves with frequencies ω≈ω_lh and with θ = 90^∘ are dispersive <cit.>. If their phase velocity also changes with θ, we obtain an undamped surface wave only if the spread in wavenumbers along k_x and k_y is small. The higher resolution of k_y by simulation 3 allows the surface wave to involve wave modes with similar frequencies. Given that the shock fronts involve more than one wave mode, different spectral resolutions may also explain why their shapes differ in Figs. <ref>(a, c, e).
In addition to the requirement that lower-hybrid waves can only mediate a shock if they propagate almost perpendicularly to the background magnetic field, magnetic tension (𝐁·∇ ) 𝐁/μ_0 is likely to contribute to the saturation of the perturbation. In simulation 3 and at the time depicted in Fig. <ref>(f), its magnitude is comparable to that of the magnetic pressure gradient force density ∇𝐁^2/2μ_0 at x≈ 22 and y≈ 9 and about 20% of the thermal pressure gradient force density at the front of the shock in Fig. <ref>(e) at the same position (not shown). As we will see, the magnetic tension builds up over a much wider x-interval than the other forces and its impact on the saturation of the shock deformation in the perturbation layer is thus difficult to quantify.
§.§ Damping of the shock oscillations in simulation 1 and 2
We examine the distributions of the ion density and magnetic field amplitude at two times. Figure <ref> shows them for simulations 1 and 2 at the time t=10^4, when the curves for simulation 2 went through their extrema in Fig. <ref>.
The perturbation of the shock front in simulation 1, which is shown in Figs. <ref>(a, b), has damped out. The front is planar, and the density of the overshoot and the magnetic amplitude do not vary with y. The front is located one electron skin depth behind that of the reference shock, and P_bx, which quantifies the deformation of the magnetic field in the simulation plane, is at noise levels. The perturbation of the shock front in simulation 2 in Fig. <ref>(c, d) has the opposite phase to the initial one; it is oscillating. In simulation 2, P_bx evidences a deformation of the magnetic field, which results in magnetic tension, in an interval of width 3 centered on the shock.
Figure <ref> shows that at t=t_sim, the shock perturbation has also damped out in simulation 2. The density distribution of simulation 1 in Fig. <ref>(a) shows another density maximum at x≈ 40. Its separation from the leading density maximum at x≈ 42 reveals that the wavelength of the lower-hybrid mode, which mediates the shock, is 2 and k_x,s=π as we observed in the 1D simulations.
The magnetic field near the shocks in both simulations is aligned with y at this time. The lag between the perturbed and unperturbed shocks in Figs. <ref>(c, f) is unchanged compared to that in Figs. <ref>(c, f).
§.§ Weakly damped oscillations in simulation 3
Figure <ref> shows the ion density and magnetic field distributions at the times t=1.1 × 10^4 and 1.66 × 10^4, when the curves n_i(k_y,3,t) and x_B(k_y,3,t) go through extrema in Fig. <ref>.
The oscillations of the shock fronts are in phase at the times t=1.1 × 10^4 and t=1.66 × 10^4 in simulation 3. Figure <ref>(a) reveals a density maximum at x≈ 33 and y>18, which is concave with respect to the upstream region. A weak density enhancement near y≈ 8 bulges out into the upstream region. The position of the shock front is thus also a function of y and the variation is sinusoidal. The density oscillation is still present in Fig. <ref>(d). The front is almost flat, which indicates that some of the waves that sustained the shock have subsided. The in-plane magnetic field B(x,y) in Figs. <ref>(b, e) is deformed near the shock front. It expands upstream in intervals, where the shock density is low. The supplementary movie 4 confirms that the shock front performs sinusoidal oscillations around its mean position along x, which are synchronized with those in the ion density along the shock front. The ion density oscillations are correlated with those of B(x,y).
The curves in Fig. <ref>(c, f) show that the average ion density downstream of the shock is about 2; this shock does not compress the plasma much because the shock speed in the downstream frame is not small compared to the relative speed between the downstream and upstream plasma. According to P_bx, deformations of B(x,y) are strongest near the shock front. They extend upstream and downstream and decrease exponentially with distance from the maximum. Note that the curve has its maximum behind the front because the strong magnetic field does not fill out the simulation box for all values of y at larger x. The peaks in the ion density have a much smaller spread along x than their magnetic counterparts; charge density oscillations are shielded on Debye length scales and magnetic ones on skin depth scales.
Figure <ref> shows the evolution in time of the y-averaged density and field distributions in simulation 3. We have transformed these distributions into a reference frame that moves with the speed v_s=1.6v_fms in the direction of the perturbed shock. Figure <ref>(a) shows that the shock is located at x̃=x-v_st≈ 3.5. The position of its front oscillates in time.
Its evolution is determined by a simultaneous oscillation of the shock front position, which varies with y and t, and of the density along the front. A strong in-plane electric field marks the front of the shock in Fig. <ref>(b) where the ion density gradient is largest. According to Fig. <ref>(c), the magnetic field is amplified by the shock crossing to more than twice its upstream value. The contribution of B_x to the magnetic pressure shown in Fig. <ref>(d) shows oscillations, which extend far into the upstream and downstream regions. The first and strongest maximum at early times is associated with the forced deformation of the shock as it propagated across the perturbation layer. After the shock left the perturbation layer, the shock performed free oscillations. They are coherent along the normal direction of the shock. Typical amplitudes of B_x are weak compared to B_y and we can thus interpret them as perturbations of an almost uniform guiding magnetic field that is aligned with y.
The supplementary movie 5 animates the ion phase space density distribution in time for the interval x>0 and Fig. <ref> shows its final frame.
The shock reforms at the time t≈ 1500 and remains stable thereafter. It is stationary in its comoving frame, which is typical for subcritical shocks. Some ions behind the shock are accelerated to high energies, forming phase space vortices with an axis that is approximately aligned with the y-axis. One such vortex survives at x≈ 36 in Fig. <ref>. A phase space vortex is sustained by the ambipolar electric field that is associated with a localized ion density depletion. The shock is located at x≈ 53 at this time, and it accelerates a small fraction of the upstream ions to a speed ≈ 3v_fms. The density of the shock-reflected ion beam is affected by the shock's density oscillation, as can be seen from several patches with a reduced density. Since the upstream plasma outside of the perturbation layer has a uniform density, oscillations in the density of the shock-reflected ion beam must lead to a change in the number of ions that cross the shock and enter its downstream region.
§ DISCUSSION
We used PIC simulations in one and two spatial dimensions to study the stability of shocks in collisionless plasma. A pair of shocks was driven by the thermal pressure jump between a thin, dense, and planar plasma cloud in the center of the simulation box and a surrounding dilute ambient plasma. One shock propagated through a spatially uniform ambient plasma. The other shock traversed a perturbation layer, in which we varied the number density of mobile ions after the shock had fully formed. We explored the reaction of subcritical fast magnetosonic (FMS) shocks to the perturbation.
In the 1D simulations, a lower density of mobile ions upstream of the shock increased the speed of its downstream plasma and reduced the shock speed in the downstream frame of reference. For our initial conditions, the net effect was that the shock was slowed down in intervals with a lower number density of mobile ions. It regained its initial speed once it left the perturbation layer. The spatial lag between the fastest and the slowest shock was about 2 electron skin depths, which was comparable to the wavelength of the wave that mediated the shock.
In the 2D simulations, a variation of the mobile ion density along the shock front in the perturbation layer led to a spatial displacement of the shock front that was only a small fraction of the one we observed in the 1D PIC simulations. The amplitude of the spatial displacement was also proportional to the wavelength of the modulation; different parts of the shock front could thus not move independently. The crossing of the perturbation layer led to a modulation of the ion density of the shock front and to an expansion of the shocked magnetic field in the upstream direction. This expansion was more pronounced in regions with a low number density of mobile ions. Once a shock left the perturbation layer and entered the uniform plasma, the ion density distribution, the position of the shock front, and the direction of the magnetic field continued to change.
We explored effects caused by the size of the simulation box along the shock front on the shock's evolution. The smallest box resolved a width of 12 electron skin depths, which is 6 times the wavelength of the wave that sustained the shock. The intermediate one doubled that width and the largest box tripled it. In the smallest simulation box, the shock perturbation damped out on a time scale less than an oscillation period of the mode that sustained the shock. In the intermediate simulation box, we observed a damped oscillation. Even weaker damping was observed in the largest box.
The frequency, with which the shock perturbation oscillated, was a large fraction of the lower-hybrid frequency. This frequency matches that of the FMS waves that mediated the shock in a simulation with similar initial conditions <cit.>. The ion density oscillations along the shock boundary and orthogonal to it constituted a standing wave, which is composed of modes that propagate in opposite directions and have a wavevector that is oblique to the shock normal. The limited thickness of the shock transition layer implies that the shock oscillation is not monochromatic. Fast magnetosonic waves close to their resonance frequency are known as lower-hybrid waves and they are dispersive. This means that their phase velocity changes with the wavevector. The damping of the shock may thus be caused by different frequencies of the lower-hybrid modes that constitute the shock. The larger the simulation box along the shock boundary, the better the shock is resolved in wavenumber space and the lower the frequency mismatch of the shock modes becomes. This may explain why the damping of the shock perturbation became weaker with the increasing size of the simulation box.
All simulations have shown that the shocks remain stable during their traversal of the perturbation layer and in the uniform plasma. The boundary oscillations are thus damped. We could, however, not track the shock long enough in the largest 2D simulation to determine its damping rate and, more specifically, if the amplitude decreases in time t with t^-3/2 as derived for some (magneto)hydrodynamic shocks. The boundary oscillations resulted in perturbations of the background magnetic field orthogonal to its direction. The background magnetic field was deformed in an interval that extended several electron skin depths upstream and downstream of the density overshoot of the shock in Figs. <ref>(c, f) and in Fig. <ref>(d). Such perturbations can trigger the growth of magnetowaves propagating in the Alfvén mode branch or in its high-frequency extension known as the Whistler wave branch <cit.>, which extends up to wave propagation angles that are almost perpendicular to the background magnetic field <cit.>. The damping of the shock boundary oscillations in the largest 2D simulation may be weak enough to lead to the growth of waves, which propagate along the background magnetic field. The box size and the simulation time we could resolve with our simulation were however not sufficiently large to resolve the Alfvén waves, which were observed in hybrid simulations <cit.>, in PIC simulations with different initial and boundary conditions <cit.>, and by satellites <cit.>.
We could also not unambiguously determine the mechanism that lets the shock oscillation saturate in the 2D perturbation layer. The observation that lower-hybrid waves can only mediate a quasi-perpendicular shock, and that the perturbation oscillates at the lower-hybrid frequency, suggests that the amplitude of the shock boundary oscillation is limited by the stability properties of lower-hybrid waves. Another saturation mechanism could be magnetic tension, which was small compared to the thermal pressure gradient force but comparable in magnitude to the magnetic pressure gradient force. We leave these studies to future work.
It would be interesting to investigate shock oscillations in the laboratory. A wide range of shock studies in collisionless plasma exists but, to the best of our knowledge, none has examined oscillations of the shock boundary. One caveat to such studies is that the wavelength of these oscillations is long and their amplitude small. We selected the magnetic field amplitude 0.85 T, the electron temperature 1000 keV, the density 10^15cm^-3, and fully ionized nitrogen ions because these values are realistic for experiments in which a laser-generated blast shell expands into an ambient plasma. An example is the shock formation study in Ref. <cit.>. However, the wavelength of the perturbation in simulation 3 would amount to 5 mm, which far exceeds the spatial scales that were resolved by that study.
§ ACKNOWLEDGEMENTS
The simulations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the NSC and on the centers of the Grand Equipement National de Calcul Intensif (GENCI) under grant number A0090406960. MED acknowledges financial support from a visiting fellowship of the Centre de Recherche Astrophysique de Lyon. AB and FCC acknowledge support by grant PID2021-125550OB-I00 from the Spanish Ministerio de Economía y Competitividad.
§ DATA AVAILABILITY STATEMENT
All data that support the findings of this study are included within the article (and any supplementary files).
§ CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
§ REFERENCES
Romagnani2008 Romagnani L, Bulanov S V, Borghesi M, Audebert P, Gauthier J C, Loewenbrueck K, Mackinnon A J, Patel P, Pretzler G, Toncian T and Willi O 2008 Observation of collisionless shocks in laser-plasma experiments Phys. Rev. Lett. 101 025004
Kuramitsu2011 Kuramitsu Y, Sakawa Y, Morita T, Gregory C D, Waugh J N, Dono S, Aoki H, Tanji H, Koenig M, Woolsey N and Takabe H 2011 Phys. Rev. Lett. 106 175002
Ahmed2013 Ahmed H, Dieckmann M E, Romagnani L, Doria D, Sarri G, Cerchez M, Ianni E, Kourakis I, Giesecke A L, Notley M, Prasad R, Quinn K, Willi O and Borghesi M 2013 Time-Resolved Characterization of the Formation of a Collisionless Shock Phys. Rev. Lett. 110 205001
Schaeffer2017 Schaeffer D B, Fox W, Habersberger D, Fiksel G, Bhattacharjee A, Barnak D H, Hu S X and Germaschewski K 2017 Generation and Evolution of High-Mach-Number Laser-Driven Magnetized Collisionless Shocks in the Laboratory Phys. Rev. Lett. 119 025001
Fazzini2022 Fazzini A, Yao, W, Burdonov K, Béard J, Chen S N, Ciardi A, d'Humières E, Diab R, Filippov E D, Kisyov S, Lelasseux V, Miceli M, Moreno Q, Orlando S, Pikuz S, Ribeyre X, Starodubtsev M, Zemskov R, and Fuchs J 2022 Particle energization in colliding subcritical collisionless shocks
investigated in the laboratory Astron. Astrophys 665 A87
Johlander2016 Johlander A, Schwartz S J, Vaivads A, Khotyaintsev Yu V, Gingell I, Peng I B, Markidis S, Lindqvist P-A, Ergun R E, Marklund G T, Plaschke F, Magnes W, Strangeway R J, Russel C T, Wei H, Torbert R B, Paterson W R, Gershman D J, Dorelli J C, Avanov L A, Lavraud B, Saito Y, Giles B L, Pollock C J and Burch J L 2016 Rippled Quasiperpendicular Shock Observed by the Magnetospheric Multiscale Spacecraft Phys. Rev. Lett. 117 165101
Winske1988 Winske D and Quest K B 1988 Magnetic field and density fluctuations at perpendicular supercritical collisionless shocks J. Geophys. Res. 93 9681-9693
Lembege1992 Lembege B and Savoini P 1992 Nonstationarity of a two‐dimensional quasiperpendicular
supercritical collisionless shock by self‐reformation Phys Fluids B 4 3533-3548
Lowe2003 Lowe R E and Burgess D 2003 The properties and causes of rippling in quasi-perpendicular collisionless shock front Ann. Geophys. 21 671
Chapman2005 Chapman S C, Lee R E and Dendy R O 2005 Perpendicular shock reformation and ion acceleration Space Sci. Rev. 121 5-19
Burgess2007 Burgess D and Scholer M 2007 Shock front instability associated with reflected ions at the perpendicular shock Phys. Plasmas 14 012108
Yang2012 Yang Z W, Lembege B and Lu Q M 2012 Impact of the rippling of a perpendicular shock front on ion dynamics J Geophys Res 117 A07222
Clark2014 Clark S E, Everson E T, Schaeffer D B, Bondarenko A S, Constantin C G, Niemann C and Winske D 2014 Enhanced collisionless shock formation in a magnetized plasma containing a density gradient Phys. Rev. E 90 041101
Dieckmann2014 Dieckmann M E, Sarri G, Doria D, Ahmed H and Borghesi M 2014 Evolution of slow electrostatic shock into a plasma shock mediated by electrostatic turbulence New J. Phys. 16 073001
Burgess2016 Burgess D, Hellinger P, Gingell I and Tavnicek P M 2016 Microstructure in two- and three-dimensional hybrid simulations of perpendicular collisionless shocks J Plasma Phys 82 905820401
Marcowith2016 Marcowith A, Bret A, Bykov A, Dieckman M E, Drury L O, Lembege B, Lemoine M, Morlino G, Murphy G, Pelletier G, Plotnikov I, Reville B, Riquelme M, Sironi L and Novo A S 2016 The microphysics of collisionless shock waves Rep. Prog. Phys. 79 046901
Umeda2017 Umeda T and Daicho Y 2017 Periodic self-reformation of rippled perpendicular collisionless shocks in two dimensions Ann Geophysicae 36 1047-1055
Dieckmann2017 Dieckmann M E, Folini D, Walder R, Romagnani L, d'Humieres E, Bret A, Karlsson T and Ynnerman A 2017 Emergence of MHD structures in a collisionless PIC simulation plasma Phys. Plasmas 24 094502
Gueroult2017 Gueroult R, Ohsawa Y and Fisch N J 2017 Role of Magnetosonic Solitons in Perpendicular Collisionless Shock Reformation Phys. Rev. Lett. 118 125101
Dieckmann2018a Dieckmann M E, Moreno Q, Doria D, Romagnani L, Sarri G, Folini D, Walder R, Bret A, d'Humieres E and Borghesi M 2018 Expansion of a radially symmetric blast shell into a uniformly magnetized plasma Phys. Plasmas 25 052108
Kobzar2021 Kobzar O, Niemiec J, Amano T, Hoshino M, Matsukity S, Matsumoto Y and Pohl M 2021 Electron Acceleration at Rippled Low-mach-number Shocks in High-beta Collisionless Cosmic Plasmas Astrophys J 919 97
Marshall1955 Marshall W 1955 The structure of magneto-hydrodynamic shock waves Proc. R. Soc. A 233 367
Edmiston1984 Edmiston J P and Kennel C F 1984 A parametric survey of the 1st critical Mach number for a fast MHD shock J. Plasma Phys. 32 429-441
Gedalin2023 Gedalin M, Dimmock A P, Russel C T, Pogorelov N V and Roytershteyn V 2023 Role of the overshoot in the shock self-organization J Plasma Phys 89 905890201
Cramer1995 Cramer N F 1995 The theory of Alfvén surface waves Phys. Scr. 1995 185
Joarder2006 Joarder P S and Nakariakov V M 2006 Hydromagnetic surface waves on a tangential discontinuity Geophys. Astrophys. Fluid Dyn. 100 59-83
Lysak2008 Lysak R L 2008 On the dispersion relation for the kinetic Alfvén wave in an inhomogeneous plasma Phys. Plasmas 15 062901
Forslund1970 Forslund D W, Morse R L and Nielson C W 1970 Electron Cyclotron Drift Instability Phys. Rev. Lett. 25 1266-1270
Davidson1977 Davidson R C, Gladd N T, Wu C S and Huba J D 1977 Effects of finite plasma beta on the lower‐hybrid‐drift instability Phys. Plasmas 20 301
Daughton2003 Daughton W 2003 Electromagnetic properties of the lower-hybrid drift instability in a thin current sheet Phys. Plasmas 10 3103-3119
McClements1997 McClements K G, Dendy R O, Bingham R, Kirk J G and Drury L O 1997 Acceleration of cosmic ray electrons by ion-excited waves at quasi-perpendicular shocks Mon. Not. R. Astron. Soc. 291 241-249
Gekelman2011 Gekelman W, Vincena S, Van Compernolle B, Morales G J, Maggs J E, Pribyl P and Carter T A 2011 The many faces of shear Alfvén waves Phys. Plasmas 18 055501
Roberts1945 Roberts A E 1945 See National Technical Information Service Document PB2004-100597 [A. E. Roberts, Los Alamos Scientific Laboratory Report No. LA-299 1945 (unpublished)]. Copies may be ordered from National Technical Information Service, Springfield, VA 22161.
Freeman1955 Freeman N C 1955 A theory of the stability of plane shock waves Proc. R. Soc. Lond. A. Math. Phys. Sci. 228 341
Zaidel1960 Zaidel P M 1960 Shock wave from a slightly curved piston J. Appl. Math. Mech. 24 316
DYakov1954 D’yakov S P 1954 Shock wave stability Zh. Eksp. Teor. Fiz. 27 288
Kontorovich1957 Kontorovich V M 1957 On the shock waves stability Zh. Eksp. Teor. Fiz. 33 1525
Wetta2018 Wetta N, Pain J.-C, and Heuzé O 2018 D’yakov-Kontorovitch instability of shock waves in hot plasmas Phys. Rev. E 98, 033205
Huete2020 Huete C, Cobos-Campos F, Abdikamalov E, and Bouquet S 2020 Acoustic stability of nonadiabatic high-energy-density shocks Phys. Rev. Fluids 5 113403
Esirkepov2001 Esirkepov T Z 2001 Exact charge conservation scheme for Particle-in-Cell simulation with an arbitrary form-factor Comput. Phys. Commun. 135 144-153
Arber2015 Arber T D, Bennet K, Brady C S, Lawrence-Douglas A, Ramsay M G, Sircombe N J, Gillies P, Evans R G, Schmitz H, Bell A R and Ridgers C P 2015 Contemporary particle-in-cell approach to laser-plasma modelling Plasma Phys. Control. Fusion 57 113001
Gardner1964 Gardner C, and Krusal M 1964 Stability of plane magnetohydrodynamic shocks Phys. Fluids 7 700
Graham2019 Graham D B, Khotyaintsev Yu V, Norgren A, Vaivads A, André M, Drake J F, Egedal J, Zhou M, Le Contel O, Webster J M, Lavraud B, Kacem V, Génot V, Jacquey C, Rager A C, Gershman D J, Burch J L, and Ergun R E 2019 Universality of Lower Hybrid Waves at Earth's Magnetopause J. Geophys. Res. 124, 8727
Eliasson2006 Eliasson B and Shukla P K 2006 Formation and dynamics of coherent structures involving phase-space vortices in plasmas Phys. Rep. 422 225-290
Dieckmann2004 Dieckmann M E, Ynnerman A, Chapman S C, Rowlands G and Andersson N 2004 Simulating thermal noise Phys. Scripta 69 456-460
Verdon2009a Verdon A L, Cairns I H, Melrose D B and Robinson P A 2009 Properties of lower hybrid waves IAU Symp. 257 569-573
Verdon2009b Verdon A L, Cairns I H, Melrose D B and Robinson P A 2009 Warm electromagnetic lower hybrid wave dispersion relation Phys. Plasmas 16 052105
Artemyev2016 Artemyev A, Agapitov O, Mourenas D, Krasnoselskikh V, Shastun V and Mozer F 2016 Oblique Whistler-Mode Waves in the Earth’s Inner Magnetosphere: Energy Distribution, Origins, and Role in Radiation Belt Dynamics Space Sci Rev 200 261-355
|
http://arxiv.org/abs/2307.06233v1 | 20230712152604 | On the Importance of Denoising when Learning to Compress Images | [
"Benoit Brummer",
"Christophe De Vleeschouwer"
] | eess.IV | [
"eess.IV",
"cs.CV",
"68T07 (Primary), 68P30 (Secondary)",
"I.4.2; I.4.4"
] |
On the Importance of Denoising when Learning to Compress Images
Benoit Brummer
intoPIX, University of Louvain
Mont-Saint-Guibert, Belgium
[email protected]
Christophe De Vleeschouwer
University of Louvain
Louvain-la-Neuve, Belgium
[email protected]
August 12, 2023
=======================================================================================================================================================================================================================================
Image noise is ubiquitous in photography. However, image noise is not compressible nor desirable, thus attempting to convey the noise in compressed image bitstreams yields sub-par results in both rate and distortion. We propose to explicitly learn the image denoising task when training a codec. Therefore, we leverage the Natural Image Noise Dataset, which offers a wide variety of scenes captured with various ISO numbers, leading to different noise levels, including insignificant ones. Given this training set, we supervise the codec with noisy-clean image pairs, and show that a single model trained based on a mixture of images with variable noise levels appears to yield best-in-class results with both noisy and clean images, achieving better rate-distortion than a compression-only model or even than a pair of denoising-then-compression models with almost one order of magnitude fewer GMac operations.
§ INTRODUCTION
Image sensors capture noise along with useful image information. This noise increases with the camera's ISO sensitivity setting, but noise is virtually always present to some extent and it is both incompressible and undesirable. Lossy image compressors inherently perform some image denoising because removing random noise is often the most effective way to reduce entropy in a signal, but without proper training (or algorithm design) the resulting image size is still inflated and the results look sub-par, as shown in <ref> (and also attested by Figure S1 in Supplementary Material, and by our experiments in <ref>). This increase in bitrate is readily observable in both conventional and learned codecs. A learned lossy image compression scheme is trained by forwarding the image through an autoencoder (AE) and backpropagating from the loss, whose components are the bitrate and the distortion <cit.>. The bitrate is computed from an optimized, i.e. trained, cumulative distribution function, and the distortion quantifies the difference between the output of the autoencoder and the input image, typically by computing the mean square error (MSE).
Any image compression scheme can attain better rate-distortion by having the noise removed first.
An image denoiser can typically be trained to reconstruct clean images from noisy inputs, using a dataset of paired images where static scenes are captured using a progressively faster shutter speed <cit.>.
In this work, we consider joint compression and denoising. Adding a denoising functionality essentially comes down to feeding the network with a potentially noisy image and comparing its output with a clean image that may have a better quality than what was initially input to the network. The goal is to generate an image that is of higher quality than the one used as input, while decreasing the necessary bitrate close to that of a clean image. Meanwhile, there is no added complexity because the inference process and the network architecture remain unchanged. The network is trained with both noisy images and some clean images as input such that it removes noise while retaining the ability to compress clean input images efficiently.
Our experiments analyze the impact of image noise on the rate-distortion of different standard and learned compression methods, the benefit of performing denoising prior to compression, and the benefit of denoising while compressing. Our original supervision strategy, introduced to promote the reconstruction of clean images when the learned codec is fed with noisy ones, appears to be effective.
The resulting joint denoising and compression models perform properly on clean images as well as noisy ones, effectively replacing a standard compression models for general-purpose image compression and substantially improving the rate-distortion on noisy images. As illustrated in the second line of <ref> (comparison between second and third columns), it is shown to significantly improve rate-distortion performance (using non-noisy images as ground-truth) compared to relying on the implicit noise removal induced by the conventional adoption of a perception-based loss function during training. It also reaches slightly better rate-distortion than a computationally heavy two-step procedure, involving one AE for denoising, followed by another AE-based network for compression.
This paper is organized as follows: Section 2 summarizes the work on which this paper builds. The main concepts behind our joint denoising and compression supervision are introduced in Section 3. The implementation details are given in Section 4, followed by the results, and Section 5 summarizes the impact of the present work.
§ BACKGROUND
Learned Lossy Image Compression is typically based on the seminal work of Johannes Ballé et al. <cit.>; a convolutional autoencoder <cit.> with generalized divisive normalization (GDN) <cit.>, and an entropy model which is jointly optimized to capture the latent distribution. This model has been extended with a
parametrized hyperprior <cit.> or with a competition between multiple priors <cit.>, which allows for manipulating an image-dependent latent distribution. Our experiments build onto the initial architecture from <cit.> completed with multiple sets of latent distributions learned in <cit.>.
Image Noise occurs as the sensitivity of an image sensor is increased to make up for non-ideal lighting conditions. When insufficient light is provided or the dynamic range is too wide, the ISO and/or shutter speed settings are increased accordingly. Pixels take on random, less accurate values, and fewer detail is visible as a result. Different image restoration techniques have been developed to tackle image denoising, including Wavelet <cit.> and non-local means based methods <cit.>, BM3D <cit.>, and recent deep-learning based methods <cit.>. Image noise does not reflect the physical components of the observed scene. It is a consequence of the imperfect acquisition process, and thereby should be ignored when possible. Hence, targeting the reconstruction of a denoised image is the proper way to proceed to get a faithful representation of reality (even if it implies to not perfectly render the captured signal).
The Natural Image Noise Dataset (NIND) <cit.> and Smartphone Image Denoising Dataset (SIDD) <cit.> provide sets of clean–noisy image pairs which are appropriate to train a denoising neural network. NIND is made of multiple pictures of many static scenes captured on a tripod to ensure spatial consistency; the clean ground-truth images are taken in ideal conditions with a camera's base ISO sensitivity to capture as much light as is necessary and to obtain the best possible quality, and matching versions of the same scene are captured with a variety of increasing shutter speed and ISO settings, which result in increased noise and lower image quality. These noisy images are typically fed as the input of the denoising neural network while training it to reconstruct the scene as if it were taken in ideal conditions.
Denoising and Compression have been studied as a combined problem in the wavelet domain <cit.>,
and more recently the idea of learning denoising and compression jointly with neural networks was approached by Testolina et al. <cit.> in the context of the JPEG AI codec. The decoder in <cit.> is extended such that it consists of twice as many layers, and Poissonian-Gaussian noise is applied to the input training data. This approach delegates the denoising task to the decoder, in line with the JPEG AI requirements of using a universal encoder and specialized decoders. However, as shown in our experimental section, this architecture results in no appreciable bitrate reduction because it is only the encoder that can ensure that incompressible noise does not reach the bitstream. Moreover, training a denoiser with synthetic noise tends to produce a poor model on real data <cit.>. Testolina et al. introduce a promising joint denoising and compression (JDC) scheme but the resulting rate-distortion of their model falls short of that obtained using our proposed supervision based on pairwise naturally noisy / clean images.
§ JOINTLY LEARNED DENOISING AND COMPRESSION
An effective denoising network can be trained to reconstruct clean images given noisy images as input, using a dataset of paired noisy–clean images such as NIND <cit.> and SIDD <cit.>. We propose to adopt a similar principle to train an autoencoder originally designed for image compression.
A joint denoising and compression autoencoder <cit.> is trained to generate a clean image from either a matching noisy image in a paired dataset or from the same clean image. The aim is to obtain a decoded image whose quality is potentially higher than that of the input image, while saving the space that would otherwise be wasted in encoding noise. Different methods are proposed to train such a joint denoising and compression model using Natural Image Noise Removal (NINR) supervision. They are described in Section <ref> and <ref> illustrates the general training process.
§.§ Our Proposed NIN Supervision Strategies
Four different strategies are envisioned and compared to implement our novel Natural Image Noise Removal (NINR) supervision paradigm. They are listed in <ref> and described below.
Noisy Pairs (JDC-N) The simplest joint denoising and compression implementation consists of training with all noisy–clean image pairs available in the dataset(s).
Clean and Noisy Pairs (JDC-CN) This method considers some clean–clean image pairs, in addition to the noisy–clean image pairs, to ensure that the network's performance does not degrade when the input images contain no unwanted noise. The dataset of clean images can be selected as a set of images which has been assessed and promoted by human reviewers, such as the Wikimedia Commons Featured Pictures <cit.>, then further refined by eliminating images whose metadata indicates a high ISO value in order to ensure the absence of noise.
Clean and Low-noise Pairs (JDC-Cn) To specialize the model to the most frequent input image noise levels, we have also considered placing a threshold on the training data noise level. Our experiments reveal that it is beneficial to filter out the most noisy input training images because the overall rate-distortion degrades when the network is trained to perform more extreme denoising. Such extreme denoising would require added complexity on the encoder and, although possible, extreme denoising is outside the scope of a combined denoiser whose aim is to improve rate-distortion by removing noise that is inherently present in most photographs rather than learning to see in the dark, as proposed in <cit.>. The paired image noise dataset is analyzed prior to training such that the multi-scale structural similarity (MS-SSIM) <cit.> score between each noisy crop and its clean ground-truth is stored in a file, and the training dataset can be initialized such that all training crops exceed a set quality threshold. The effect of different noise thresholds is analyzed in the [sssec:ablation]ablation study.
Building Pairs from a Universal Denoiser (JDC-UD) A fourth training method consists of running a pre-trained blind denoising model <cit.> on all the training data to generate the ground-truth images, and computing the training loss between the input images and the denoised images. This method effectively performs knowledge distillation <cit.> from a powerful universal denoising network to the joint denoising and compression network. All input images are considered noisy and the training dataset is virtually limitless because the ground-truth images are generated (in advance), thus entire image datasets are used without filtering.
§ EXPERIMENTS
§.§ Practical Implementation Details
These experiments are based on the PyTorch implementation of the autoencoder base codec introduced in <cit.>. Source code is provided as Supplementary Material and available on <https://github.com/trougnouf/compression>. The training loss of the compression autoencoder is computed as Loss=bitrate(x̂)+λ×MSE(x̂, x), where x̂ is the decoded image and x is the clean ground-truth which, as explained in Section 3, may differ from the input image, and λ balances the rate/distortion trade-off of the model. The combined denoising and compression autoencoder <cit.> is trained with batches of four noisy images from NIND <cit.> and one clean image from the Wikimedia Commons Featured Pictures <cit.> dataset whose ISO value does not exceed 200, with a crop size of 256 as is typically done in the learned compression literature <cit.>. The pre-trained “universal denoiser” used to train the JDC-UD model is the U-Net-like blind denoising model published with NIND, which was updated such that its training includes clean–clean image pairs. The CLIC professional test set <cit.> is used to assess the models on other clean images.
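To make the supervision strategy concrete, the following sketch outlines the shape of one training step under the proposed scheme. It is a minimal illustration rather than the authors' code: the model interface (returning the decoded image together with an estimated bitrate) and all names are assumptions, and in practice the rate term comes from the learned entropy model over the quantized latents.

```python
import torch

def jdc_training_step(model, optimizer, noisy_batch, clean_batch, lam=4096.0):
    """One joint denoising-and-compression step (illustrative sketch).

    noisy_batch : input crops (noisy, or clean when training JDC-C* variants)
    clean_batch : the corresponding clean ground-truth crops
    lam         : rate-distortion trade-off (the paper's lambda)

    `model` is assumed to return the decoded image together with the
    estimated bitrate of its quantized latent representation.
    """
    optimizer.zero_grad()
    decoded, est_bitrate = model(noisy_batch)
    distortion = torch.nn.functional.mse_loss(decoded, clean_batch)
    loss = est_bitrate + lam * distortion   # Loss = bitrate + lambda * MSE
    loss.backward()
    optimizer.step()
    return loss.item()
```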
The JDC model defined in Testolina et al. (2021) <cit.> is trained entirely as described, with Poissonian-Gaussian artificial noise <cit.> (with noise parameters a=0.2^2, b=0.04^2), the encoder from Ballé et al. (2018) <cit.>, and their proposed decoder, which has twice as many layers as the one recommended by Ballé et al.
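For reference, Poissonian-Gaussian noise of this kind is commonly approximated as signal-dependent Gaussian noise with variance a·x + b. The sketch below shows one possible way to apply it to an image scaled to [0, 1]; the clipping step and the function interface are our assumptions rather than details taken from the cited works.

```python
import numpy as np

def add_poissonian_gaussian_noise(img, a=0.2**2, b=0.04**2, rng=None):
    """Apply signal-dependent noise z = x + sqrt(a*x + b) * n, n ~ N(0, 1).

    `img` is a float array scaled to [0, 1]; `a` scales the Poissonian
    (shot-noise-like) part and `b` the Gaussian (read-noise-like) part.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(img.shape) * np.sqrt(a * img + b)
    return np.clip(img + noise, 0.0, 1.0)
```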
Most models are trained for six million steps with λ=4096 to yield the highest bitrate; the λ value is then halved and training continues for three million steps for each λ value, all the way down to λ=256, as in <cit.>. For both the JDC-Cn.8-Tdec method, which uses the decoder defined by Testolina et al., and the JDC-N model trained with only noisy–clean image pairs, an additional model was trained with λ=8192 in order to reach the other methods' highest bitrate. Likewise, the complete method defined by Testolina et al. <cit.> was trained with up to λ=16384.
The standard-codec comparisons are made by encoding images using GraphicsMagick 1.3.37 (JPEG), the JPEG XL encoder v0.6.1, and the BPG Image Encoder version 0.9.8. The "standard autoencoder" is the one defined in <cit.>.
§.§ Results
§.§.§ On the Importance of Denoising.
The first experiment measures the impact of denoising prior to compression with different compression methods. <ref> plots the rate-distortion curves obtained without specific denoising of the input images, for a variety of codecs. We observe that for all codecs compression is an effective denoising method at the lowest bitrate. However, at reasonable bitrates, all conventional codecs (learned or not) tend to reproduce the noise. This is in contrast with our proposed joint denoising and compression paradigm, which continuously increases quality when the bitrate increases.
Denoising before compression might be considered to solve the quality issue when using conventional codecs. As shown in <ref>, this bridges (most of) the quality gap compared to our proposed JDC method, but at the cost of a significantly increased complexity (see Section <ref>).
§.§.§ Our Joint Denoising and Compression Scheme
<ref> also introduces the proposed joint denoising and compression models, JDC-CN (no noise threshold) and JDC-Cn.8 (training noise limited to MS-SSIM≥ 0.8). These models are trained like the dedicated denoising model in that the input training batch is made of four noisy images and one clean image, and the model is tasked with reconstructing the clean ground-truths. This single JDC model generally achieves better rate-distortion than a duo of denoising then compression neural networks (except at high bitrate), while using significantly less computational overhead.
The best results shown are obtained with the “JDC-Cn.8” model which is trained with paired images whose input noise is limited to MS-SSIM≥ 0.8 as well as with unpaired clean images to promote generalization. T is the method trained with artificial noise described by Testolina et al. <cit.>, which performs worse than a duo of models as is also shown in their results.
§.§.§ Computational Complexity
Computational cost is measured in terms of billions of multiply-accumulate operations (GMac) and runtime on an AMD Threadripper 3960X CPU. The dedicated denoising U-Net performs 812 GMac per megapixel (MP) in 65.8 sec./MP, whereas the JDC model's compression encoder performs 92.8 GMac/MP <cit.> in 2.9 sec./MP. The dual-model approach (denoising then compression) thus performs a total of 904.8 GMac/MP, whereas a single joint denoising and compression model operates with 10.3% of that computational complexity.
§.§.§ Handling Clean Images
JDC-C models are trained with both noisy–clean and clean–clean paired images in order to better generalize and maintain good rate-distortion on clean images. <ref> and <ref> show the behavior of different JDC training strategies when compressing clean images. JDC models trained with some clean input images or with the JDC-UD knowledge distillation technique yield a rate-distortion similar to that of the model trained for compression only, even when no minimum training noise threshold is set; thus, incorporating clean images in the training data (JDC-CN) restores a good rate-distortion. Limiting the input noise to MS-SSIM≥ 0.8 (JDC-Cn.8) further improves rate-distortion, such that a JDC model is slightly better than a standard model at low bitrates and slightly worse at high bitrates due to the perception-distortion tradeoff <cit.>, where only reconstruction fidelity matters. The JDC-N model trained with only noisy–clean image pairs performs significantly worse on clean images, and the model trained with artificial noise ("T") performs worst.
<ref> shows a common use-case where the amount of noise is low (MS-SSIM∈[0.95, 1)). Prior denoising still improves rate-distortion, and joint denoising and compression methods yield the most significant rate-distortion benefits. All compression methods benefit from prior or joint denoising even when the level of noise is minor. Traditional compression schemes benefit the most from prior denoising, and joint denoising outperforms prior denoising in learned methods.
§.§.§ Ablation Study
The effect of different training noise thresholds is analyzed when compressing noisy images. In Figure <ref> the test noise is limited to MS-SSIM∈[0.7, 1) which is qualitatively fairly noisy, as shown in <ref>. None of the methods perform significantly better or worse than the denoise and compress duo of models. It is worth noting that the three worst JDC training schemes are the knowledge distillation JDC-UD model, the model trained with a quality threshold of MS-SSIM≥ 0.9, and the decoder defined by Testolina et al. which contains twice as many layers. The JDC-Cn models trained with an MS-SSIM threshold of 0.6 and 0.8 yield the best rate-distortion. A visualization of the different denoising and compression methods at low bitrates is shown as <ref>.
In <ref>, the testing noise is increased to MS-SSIM∈[0.5, 1), showing how the models behave under extreme input noise. The results are largely the same; the model trained with MS-SSIM≥ 0.9 struggles even more due to the increased noise and its performance is close to that of the JDC-UD method, the model trained with MS-SSIM≥ 0.8 does not perform as well whereas the model trained with MS-SSIM≥ 0.6 is still competitive, and it remains beneficial to train with clean image pairs as well.
§ CONCLUSION
Denoising images improves rate-distortion whenever there is noise present, regardless of the compression method. Denoising can be performed prior to compression using a dedicated denoiser (as is typically done in professional image development workflow) with no adaptation to the compression scheme. A joint model that is trained to perform denoising and compression simultaneously yields further improvements in rate-distortion.
As a result, a joint denoising and compression model performs 8.9 times fewer GMac operations than a U-Net denoiser followed by a compression encoder. Since the JDC model only differs from standard learned compression models by the adopted supervision strategy, it can be implemented using any of the compression architectures available in the literature (such as <cit.>). Our proposed Natural Image Noise Removal supervision strategy thus provides a fundamental and generic contribution that is expected to become popular in future works related to learned compression.
In practice, joint denoising and compression models may be trained using a dataset of noisy–clean image pairs with natural noise, such as NIND <cit.> and SIDD <cit.>. Performance is improved by setting a quality threshold on the training images, such as MS-SSIM≥ 0.8 or MS-SSIM≥ 0.6 depending on the maximum expected noise. The rate-distortion curve is preserved on clean images and improved in any case
by incorporating clean-clean image pairs in the training data.
An alternative method consists of performing knowledge distillation <cit.> by using the output of a dedicated denoiser as ground-truth images during training. This has the benefit of allowing a virtually limitless training dataset because a paired dataset is no longer required, but it requires pre-processing of the training images and results in a slightly worse rate-distortion.
§ ACKNOWLEDGEMENTS
This research has been funded by the Walloon Region. Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI) funded by the Fond de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11 and by the Walloon Region.
|
http://arxiv.org/abs/2307.04171v1 | 20230709133401 | Effect of coat-protein concentration on the self-assembly of bacteriophage MS2 capsids around RNA | [
"LaNell A. Williams",
"Andreas Neophytou",
"Rees F. Garmann",
"Dwaipayan Chakrabarti",
"Vinothan N. Manoharan"
] | physics.bio-ph | [
"physics.bio-ph"
] |
August 12, 2023
Effect of coat-protein concentration on the
self-assembly of bacteriophage MS2 capsids around RNA
LaNell A. Williams,^a
Andreas Neophytou,^b
Rees F. Garmann,^c,d,e
Dwaipayan Chakrabarti,^b
Vinothan N. Manoharan,^∗^c,a
Self-assembly is a vital part of the life cycle of certain
icosahedral RNA viruses. Furthermore, the assembly process can be
harnessed to make icosahedral virus-like particles (VLPs) from coat
protein and RNA in vitro. Although much previous work has
explored the effects of RNA-protein interactions on the assembly
products, relatively little research has explored the effects of
coat-protein concentration. We mix coat protein and RNA from
bacteriophage MS2, and we use a combination of gel electrophoresis,
dynamic light scattering, and transmission electron microscopy to
investigate the assembly products. We show that with increasing
coat-protein concentration, the products transition from well-formed MS2
VLPs to “monster” structures consisting of multiple partial capsids to
RNA-protein condensates consisting of large networks of RNA and protein.
We argue that the variation in structure arises because the assembly
follows a nucleation-and-growth pathway in which the nucleation rate
depends sensitively on the coat-protein concentration. At high
coat-protein concentration, multiple nuclei can form on each RNA strand,
leading to malformed structures. Monte Carlo simulations with
coarse-grained models of capsomers and RNA validate this physical
picture. Our results provide insight into an important biophysical
process and could inform design rules for making VLPs for various
applications.
^a Department of Physics, Harvard University, Cambridge, MA
02138, USA.
^b School of Chemistry, University of
Birmingham, Edgbaston, Birmingham B15 2TT, UK.
^c Harvard John A. Paulson School of Engineering and Applied
Sciences, Harvard University, Cambridge, MA 02138, USA.
^d Department of Chemistry and Biochemistry,
San Diego State University, San Diego, CA 92182, USA
^e Viral Information Institute, San Diego
State University, San Diego, CA 92182 USA
^∗ [email protected]
§ INTRODUCTION
For positive-strand RNA viruses to replicate, coat proteins must
assemble around the viral RNA to form new virus
particles. <cit.> Certain features of this assembly
process can be replicated in vitro, in the absence of host-cell
factors. <cit.> For example,
virus-like particles (VLPs) can be assembled from solutions of the coat
protein and RNA of bacteriophage MS2. Wild-type MS2 particles have an
icosahedral capsid (triangulation number T=3, diameter about 30 nm)
containing one maturation protein and 178 coat proteins surrounding an
RNA strand with approximately 3600 nucleotides. By contrast, MS2 VLPs
that assemble in vitro lack the maturation protein required for
infectivity. Nonetheless, they can adopt the same structure and size as
wild-type MS2 virus particles. <cit.> This
result supports the premise that RNA virus assembly is driven by
free-energy minimization.
However, the assembly process itself and the conditions under which it
leads to well-formed structures are not yet well understood. In MS2,
most previous work on this question has focused on the role of specific
interactions between coat protein and the viral RNA.
Studies <cit.> on R17, a virus closely
related to MS2, have shown that the the overall yield of assembled VLPs
decreases if the RNA does not contain a sequence called the
translational operator that has a strong and specific affinity for coat
protein. <cit.> Nonetheless, assembly proceeds in the
absence of the operator, perhaps due to non-specific interactions
between the coat protein with the RNA <cit.>.
Therefore, specific RNA-protein interactions might affect the assembly
rate and yield but do not seem to be essential to the assembly process.
While these studies have established the relevance of RNA-protein
interactions to the assembly process, they did not directly reveal the
assembly pathway itself. More recent work involving interferometric
scattering microscopy, a technique that can image individual VLPs as
they form, shows that MS2 VLPs assemble by a nucleation-and-growth
pathway <cit.> at near-neutral pH, salt
concentrations on the order of 100 mM, and micromolar coat-protein
concentrations. In this pathway, a critical nucleus of proteins must
form on the RNA before the capsid can grow to completion. The size of
the critical nucleus, estimated to be less than six coat-protein dimers,
is associated with a free-energy barrier. Taken together with the
previous experiments on the role of the RNA
sequence <cit.>, these results show that
MS2 assembly is a heterogeneous nucleation process, in which the
nucleation rate is likely controlled by two factors: RNA-protein
interactions and the coat-protein concentration.
Arguably, the coat-protein concentration has a larger role in
controlling the morphology of the VLPs than does the nature of the
RNA-protein interactions (at least in in vitro experiments, where
the coat-protein concentration is typically constant). The
interferometric scattering microscopy
experiments <cit.> showed that very few VLPs are
formed at low (1 μM) concentration of MS2 coat-protein dimers, while
well-formed capsids form at higher concentrations, and so-called
“monster” capsids, consisting of multiple partially formed capsids on
a single strand of RNA, form at even higher concentrations
(several μM). These results suggest that the nucleation barrier,
which controls the nucleation rate, depends sensitively on the
coat-protein concentration. At low concentration, the nucleation rate is
too small for capsids to form within the experimental time frame; at
high concentration, the nucleation rate is so high that multiple nuclei
can form on a single RNA strand, resulting in monster capsids. However,
this study examined only a few protein concentrations, and the experiments were
performed at low RNA concentration relative to protein.
Here we use bulk assembly experiments to determine the assembly products
of MS2 coat protein and MS2 RNA as a function of coat-protein
concentration. We characterize the assembly products using three
techniques: gel electrophoresis, dynamic light scattering (DLS), and
transmission electron microscopy (TEM). In comparison to the previous
study,<cit.> in which protein was in large
excess relative to RNA, our study examines a much wider range of coat-protein
concentrations, including ones near the stoichiometric ratio of
coat protein to RNA. Furthermore, the three-pronged experimental
approach allows us to corroborate results and test hypotheses about how
the assembly products form. Gel electrophoresis and TEM provide
qualitative data that we use to determine the size and structure of the
assembly products, and DLS provides quantitative information about their
size distributions. With these methods, we show that as the coat-protein
concentration increases, the morphologies transition from well-formed
VLPs to monster capsids to RNA-protein condensates consisting of large
networks of RNA and protein. These results are summarized in
Fig. <ref> and discussed in more detail in
Section <ref>. With the insights provided by simulations of
coarse-grained models of capsomers and RNA, we explain these results in
terms of a nucleation-and-growth pathway for capsid assembly.
§ RESULTS
§.§ Overview of experimental approach
Briefly, our experimental procedure consists of combining 50 nM MS2 RNA
with purified MS2 coat-protein dimers at concentrations ranging from 2.5
to 30 μM (see Section <ref> for full details). For
reference, a full VLP has an icosahedral capsid with a triangulation
number of 3 (T=3), corresponding to 180 coat proteins or 90
coat-protein dimers. At 50 nM RNA concentration, a coat-protein dimer
concentration of 5 μM therefore corresponds approximately to the
stoichiometric ratio of coat proteins to RNA in a full VLP. We work with
dimer concentrations instead of monomer concentrations because MS2 coat
proteins are thought to be dimerized in
solution. <cit.> After mixing the RNA and coat
protein, we then wait 10 min to allow assembly to occur, after which we
add RNase to digest any excess MS2 RNA that is not encapsidated. We then
characterize the resulting assembly products with gel electrophoresis,
DLS, and TEM (see Section <ref>).
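As a quick sanity check on the stoichiometry quoted above, the arithmetic can be written out explicitly; the short sketch below uses only the numbers stated in the text (50 nM RNA, 90 coat-protein dimers per T=3 capsid).
# Back-of-the-envelope check of the stoichiometric ratio quoted in the text:
# one T=3 capsid contains 90 coat-protein dimers, and the RNA concentration is 50 nM.
rna_conc_nM = 50.0          # MS2 RNA concentration used in the assembly experiments
dimers_per_capsid = 90      # 180 coat proteins = 90 dimers per T=3 capsid
# Dimer concentration that supplies exactly one capsid's worth of protein per RNA
stoichiometric_dimer_conc_uM = rna_conc_nM * dimers_per_capsid / 1000.0  # nM -> uM
print(f"Stoichiometric dimer concentration: {stoichiometric_dimer_conc_uM:.1f} uM")
# -> 4.5 uM, i.e. approximately the 5 uM figure quoted in the text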
§.§ Results from gel electrophoresis
We first qualitatively characterize the size and composition of the
assembly products using agarose gel electrophoresis. We use both
ethidium stain to detect RNA and Coomassie stain to detect coat protein
in our samples. For comparison, we also characterize wild-type MS2, MS2
RNA, and digested MS2 RNA (see Section <ref>).
The most striking feature of the gel is a band that runs at the same
position as wild-type MS2 but with a brightness that increases from 2.5
to 7.5 μM coat-protein dimers and then suddenly decreases at
8.7 μM (see highlighted region in Fig. <ref>). We interpret
this increase and sudden decrease as follows. Near the stoichiometric
ratio (approximately 5 μM dimers to 50 nM RNA), well-formed VLPs
assemble, with more VLPs forming at higher protein concentration. Above
7.5 μM, the sharp decrease in brightness indicates that far fewer
well-formed MS2 VLPs assemble. Instead, as indicated by the spreading of
the band toward the upper part of the gel, the assembly products at
dimer concentrations greater than 7.5 μM are larger than the
wild-type particles. These assembly products appear in both gels in
Fig. <ref>, indicating that they contain both RNA and protein.
We also see that at coat-protein dimer concentrations higher than
7.5 μM, the intensity of the diffuse band increases with increasing
concentration (Fig. <ref>). The increase in brightness and
change in the center position of this band suggest that the amount of
large assembly products increases at the expense of the wild-type-sized
products. At 15 μM, the diffuse band no longer overlaps with the
band corresponding to wild-type-size VLPs. For dimer concentrations
beyond 15 μM, some of the assembly products are so large that they
are trapped near the top of the agarose gel.
The transition from a bright to a diffuse band might represent a
transition from well-formed VLPs to either malformed structures or
aggregates of capsids. The gels by themselves cannot confirm
either hypothesis, since they reveal only that the assembly products all
contain RNA and that they increase in size with increasing coat-protein
concentration. We therefore turn to dynamic light scattering and
transmission electron microscopy experiments, as described below.
§.§ Results from dynamic light scattering
To quantify the sizes of the assembly products, we use DLS with
numerical inversion methods. These methods yield the size distributions
of assembly products in both number and volume bases (see
Section <ref>).
At coat-protein dimer concentrations 7.5 μM and below, we observe in
both the number and volume distribution a peak at or near the size of
wild-type MS2 particles (see shaded bands in Fig. <ref>; we
expect some variation in the location of this peak because the inversion
of the autocorrelation function is sensitive to noise). This peak is
accompanied by peaks at larger sizes, unlike the size distribution for
wild-type MS2, which consists of only one peak. At coat-protein dimer
concentrations above 7.5 μM, the peak corresponding to size of
wild-type MS2 particles decreases until it disappears (in the
volume-basis distributions) at 12.5 μM. At concentrations of 15
and 20 μM, we observe a single peak corresponding to much larger
assembly products. Overall, we observe that the average size of the
assembly products increases with increasing protein concentration
(Fig. <ref>).
The DLS data support our interpretation of the gel-electrophoresis data.
Specifically, both the DLS and gel data show that the proportion of VLPs
with sizes corresponding to the wild-type size decreases with
concentration above 7.5 μM, whereas only larger products form at
high concentration. The DLS data additionally show that the size of
these larger products is on the order of several hundred nanometers.
However, the DLS data also show peaks corresponding to particles larger
than wild-type at concentrations less than 10 μM. We do not see
evidence of such particles in the gel data. These peaks may correspond
to weakly-bound clusters of well-formed MS2 VLPs that are observable in
the DLS experiments but fall apart during gel electrophoresis (see
Fig. <ref>). Because DLS does not provide any structural
information, we turn to TEM to test this hypothesis and characterize the
structures of the assembly products.
§.§ Transmission electron microscopy (TEM) experiments
TEM images of negatively stained samples show that most of the assembly
products at dimer concentrations 7.5 μM and below are well-formed
MS2 VLPs (Figs. <ref> and <ref>), with
some malformed VLPs and clusters of MS2 VLPs, consistent with the larger
sizes present in the DLS-derived size distributions. At a concentration
of 10 μM, we observe malformed particles that consist of partially
formed capsids. These structures are similar to the so-called
“monster” particles observed in turnip-crinkle-virus
assemblies <cit.> and, more recently, in MS2
assembly experiments. <cit.> At concentrations
above 15 μM we observe what appear to be large aggregates of
partially formed capsids (Figs. <ref>
and <ref>). These structures are
micrometer-sized, comparable to the sizes seen in the DLS distributions
(Fig. <ref>).
§ DISCUSSION
Our measurements show that coat-protein concentration plays an important
role in the morphology of the assembly products of MS2 RNA and coat
protein. At low coat-protein dimer concentrations (less than
7.5 μM), gel electrophoresis, DLS, and TEM all point to the
formation of MS2 VLPs that are of the same size as wild-type MS2. These
structures appear to be well-formed, consistent with previous
studies. <cit.> At higher concentrations
(between 7.5 and 10 μM), we observe monster particles consisting of
a few partial capsids. At even higher concentration (12.5 μM),
results from gel electrophoresis, DLS, and TEM point to the formation of
large structures several hundred nanometers in size and containing many
partial capsids.
Whereas the observation of well-formed VLPs and even monsters is
consistent with previous studies on MS2, the observation of large
structures at high protein concentrations has not, to our knowledge,
been studied in detail. Large structures have been observed in the
assembly of viral coat proteins around functionalized gold
nanoparticles, but these structures are found at low protein
concentrations. <cit.> In other viruses, large
aggregates have been observed under conditions of strong
interactions. <cit.> Here,
however, the formation of the large structures occurs at the same buffer
conditions (apart from coat-protein concentration) as those used to
assemble well-formed VLPs.
The large structures are interesting not only because they contain many
partially formed capsids, but also because they contain RNA, as shown by
our gel electrophoresis measurements. Because the structures contain
both RNA and protein, we term them “condensates.” Below, we consider
several hypotheses that might explain the formation of the condensates,
with the aim of understanding what they reveal about the assembly
pathway of the virus.
One hypothesis is that the condensates arise primarily by aggregation of
coat proteins. However, gel electrophoresis, DLS, and TEM experiments
show no evidence of coat-protein aggregation in the absence of RNA, even
at 15 μM dimer concentration. In addition, gel electrophoresis data
at high coat-protein concentrations show that the condensates contain both
RNA and coat protein. While such structures might arise if the
aggregation of the coat proteins were rapid, trapping the RNA inside,
the absence of aggregation of coat protein at high concentrations is
evidence against this hypothesis.
Another hypothesis is that the RNA-protein condensates arise from an
en masse pathway, <cit.> in which the
interactions between the coat proteins and RNA are strong compared to
the inter-protein interactions. In this scenario, coat proteins would
first decorate the RNA, potentially leading to a heteroaggregate of RNA
and protein. This scenario would account for the presence of RNA in the
condensates. However, it is at odds with the observation of a
nucleation-and-growth pathway <cit.> at lower
concentrations. If nucleation and growth happens at low concentrations,
we expect that increasing the protein concentration should not cause a
transition to an en masse pathway but should instead primarily
change the nucleation rate.
We therefore consider the hypothesis that a nucleation-and-growth
pathway is operative at all coat-protein concentrations, and that this
pathway drives condensate formation at high concentrations. At low
concentrations, where the assembly products are well-formed VLPs, our
study provides no direct evidence for this pathway, but as noted above,
previous direct imaging measurements have shown that the assembly is
nucleated. <cit.> The nucleation-and-growth
pathway does, however, account for the monster particles seen at
intermediate protein concentrations. These structures, which consist of
multiple partial capsids, can form when more than one nucleation event
happens on a single RNA strand; indeed, we expect that the probability
of multiple nucleation events should increase with the coat-protein
concentration.
The question then is how the condensates form with the RNA trapped
inside. To understand whether and how a nucleated pathway might lead to
such condensates, we turn to simulations. We perform coarse-grained
patchy-particle simulations in which the capsomers are represented as
patchy hard disks, and the RNA is represented as a free polymer with a
length approximately 14 times the diameter of a fully formed capsid (see
Fig. <ref> and Section <ref>), such that each
polymer can be encapsidated by 12 capsomers. Although the experimental
system is more complex – specifically, an MS2 VLP consists of 90
coat-protein dimers, yielding a T=3 structure, and MS2 RNA can adopt
intricate secondary and tertiary structures – the simulation is
designed to test the hypothesis that nucleation and growth can lead to a
condensate. To this end, we tune the interactions so that the assembly
is nucleated, as seen in Fig. <ref>.
Whereas at low capsomer concentrations the simulations show the assembly
of well-formed capsids containing polymer, at high capsomer
concentrations they show the assembly of large networks of polymers and
partial capsids, just as in the experiments. Interestingly, the
simulations show that these networks consist of multiple polymer strands
that are bridged by networks of partial capsids
(Fig. <ref>). A partial capsid attached to one polymer
molecule can connect, through other capsomers, to a partial capsid
attached to a different polymer molecule. This observation provides a
plausible explanation for why the condensates seen in the experiments
can grow to be so large even in the absence of significant coat-protein
aggregation: the coat proteins may be able to bridge partial capsids on
different RNA molecules.
§ CONCLUSIONS
Our experiments and simulations show that all the morphologies we
observe as a function of coat-protein concentration – well-formed
capsids, monster capsids, and RNA-protein condensates – can be
understood as outcomes of a nucleation-and-growth process
(Fig. <ref>). Other hypotheses, including coat-protein
aggregation and en masse assembly, do not account for all of
our results.
In a nucleation-and-growth assembly pathway, the primary effect of
increasing the coat-protein concentration is to increase the nucleation
rate. If the growth rate depends more weakly on concentration than does
the nucleation rate, we can explain the formation of all the structures
we observe as follows. If the timescale of nucleation is short compared
to the time for a nucleus to grow into a full capsid, multiple nuclei
can form on the same RNA strand. When these nuclei grow, they do not in
general form a closed capsid, but instead form partial capsids. At
moderate concentrations, these partial capsids remain disconnected as
they grow, leading to monster capsids. At high concentrations, coat
proteins can bridge partial capsids on different RNA molecules, leading
to the formation of large RNA-protein condensates.
There remain a few questions to be resolved in future studies. One
question is how the RNA is spatially distributed in the condensates, and
specifically whether the bridging mechanism observed in the simulations
is operative in the experiments. Another question is what happens at
concentrations between those at which well-formed capsids form and
monster capsids form. DLS and TEM experiments suggest that at these
intermediate protein concentrations, small clusters of well-formed
capsids are present. The driving force for the formation of these
clusters is not clear, but they might arise when a single RNA molecule
spawns multiple nuclei that each form a full (or nearly full) capsid. In
this situation, the RNA would connect the capsids into a “multiplet”
structure. <cit.> It is still not clear why the DLS
measurements show evidence for small clusters of well-formed capsids but
the gel data do not. Fluorescent microscopy experiments could help
resolve this question and the aforementioned ones as well.
Our work might also inform models of the assembly pathway, particularly
those based on the law of mass action, <cit.> in which the
concentration of coat proteins plays a critical role. Further
experiments that quantify how the nucleation rate depends on the
coat-protein concentration would help connect these models to the
morphological observations we present here. From a more practical
perspective, our work helps establish constraints on concentration for
the production of MS2 VLPs. Such VLPs are used to encapsulate materials
for drug delivery <cit.>
and to display epitopes for vaccines. <cit.>
§ METHODS AND MATERIALS
All materials were used as received. Buffers were prepared as follows:
* Assembly buffer: 42 mM Tris, pH 7.5; 84 mM NaCl; 3 mM acetic acid, 1 mM EDTA
* TNE buffer: 50 mM Tris, pH 7.5; 100 mM NaCl, 1 mM EDTA
* TE buffer: 10 mM Tris, pH 7.5; 1 mM EDTA
* TAE buffer: 40 mM Tris-acetic acid, pH 8.3; 1 mM EDTA
§.§ Virus growth, cultivation, and storage
We purify wild-type bacteriophage MS2 as described by Strauss and
Sinsheimer. <cit.> In brief, we grow MS2 virus
particles by infecting E. coli strain C3000 in minimal LB
Buffer, and we remove E. coli cell debris by centrifugation
at 16700g for 30 min. We then use chloroform (warning: hazardous; use
in fume hood) extraction to purify the solute containing the virus. We
extract the purified virus particles by density gradient centrifugation
in a cesium chloride gradient. We store the purified virus at
4 °C at a concentration of 10^11 plaque-forming units
(pfu) in Tris-NaCl-EDTA or TNE buffer (50 mM Tris, 100 mM NaCl, 5 mM
EDTA) at pH 7.5. We determine the concentration of virus by
UV-spectrophotometry (NanoDrop 1000, Thermo Scientific) using an
extinction coefficient of 8.03 mL/mg at 260 nm.
§.§ Coat-protein purification and storage
We purify MS2 coat-protein dimers following the method of Sugiyama,
Herbert, and Hartman. <cit.> Wild-type
bacteriophage MS2 is suspended in glacial acetic acid (warning:
hazardous; use in fume hood with appropriate personal protective
equipment) for 30 min to denature the capsid, separate it into protein
dimers, and precipitate the RNA. We then centrifuge the sample at
10000g and collect the supernatant, which contains coat-protein
dimers. We filter out the glacial acetic acid with 20 mM acetic acid
buffer through 3-kDa-MWCO sterile centrifugal filters (Millipore Sigma,
UFC500324) five times. This process removes the glacial acetic acid to
prevent further denaturing of the coat-protein dimers. We then determine
the concentration of our coat-protein dimers by measuring the absorbance
with the Nanodrop Spectrophotometer (Thermo Fisher) at 280 nm. We store
the MS2 coat protein at 4 °C in a 20 mM acetic acid buffer.
We measure the absorbance at 260 nm to detect residual RNA. In our
experiments, we use only purified protein with an absorbance ratio
(protein:RNA) above 1.5 to avoid RNA contamination.
§.§ RNA purification and storage
We purify wild-type MS2 RNA using a protocol involving a Qiagen RNeasy
Purification Kit Mini (Qiagen, 7400450). We take 100 μL of MS2
stored in TNE buffer and mix with 350 μL of buffer RLT (a lysis
buffer) to remove the coat-protein shell. We add 250 μL of ethanol
to our sample and mix to precipitate the RNA. We then transfer our
sample to a 2 mL RNeasy Mini spin column (provided by the Qiagen
Purification Kit) that is placed in a collection tube. We then
centrifuge at 10000g for 15 s and discard the flow-through. We add
500 μL of buffer RPE (to remove traces of salts) to the spin column
and centrifuge for 15 s at 10000g. We discard the flow-through. We
then add 500 μL of buffer RPE once more to the spin column and
centrifuge for 2 min at 10000g. We place the spin column upside down
into a fresh 1.5 mL collection tube (provided in the purification
kit) to collect the RNA trapped in the spin column. We add 50 μL of
TE buffer to the spin column and centrifuge at 10000g for 1 min to
collect the RNA. We measure the RNA concentration using a Nanodrop
spectrophotometer by measuring the absorbance at 260 nm and using an
extinction coefficient of 25.1 mL/mg. We store the purified MS2 RNA at
-80°C in Tris-EDTA (TE) buffer at neutral pH (7.5).
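For illustration, the conversion from a UV absorbance reading to a concentration follows the Beer-Lambert relation c = A/(ε l). The sketch below is a minimal example; the 1 cm path length is an assumption (NanoDrop readings are typically reported normalized to a 10 mm path), and the absorbance value is a hypothetical placeholder rather than one of our measurements.
# Minimal Beer-Lambert sketch: concentration from absorbance at 260 nm.
# c = A / (epsilon * l); epsilon = 25.1 mL/(mg*cm) for MS2 RNA (from the text),
# l = 1 cm path length (assumed; NanoDrop readings are typically normalized to 10 mm).
def concentration_mg_per_mL(absorbance, extinction_mL_per_mg_cm, path_cm=1.0):
    """Return concentration in mg/mL from a UV absorbance reading."""
    return absorbance / (extinction_mL_per_mg_cm * path_cm)

a260 = 2.0  # hypothetical absorbance reading
print(concentration_mg_per_mL(a260, 25.1))  # ~0.08 mg/mL of RNA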
§.§ RNA and coat-protein bulk assembly experiments
For assembly experiments, we mix wild-type MS2 RNA genome at a
concentration of 50 nM with varying concentrations of MS2 coat-protein
dimers ranging from 2.5 μM to 30 μM. We leave the mixtures at
room temperature (21 °C) for 10 min. Afterward, we add 10 ng
of RNase A to the sample and wait 30 min. We then characterize the
assembled virus-like particles using gel electrophoresis, dynamic light
scattering (DLS), and transmission electron microscopy (TEM).
§.§ Gel electrophoresis and analysis
For gel electrophoresis experiments, we mix 15 μL of sample with
4 μL of glycerol and load into a 1% agarose gel in assembly buffer
consisting of 5 parts Tris-NaCl-EDTA (TNE) buffer (50 mM Tris, 100 mM
NaCl, 10 mM EDTA, pH 7.5) to 1 part 20 mM acetic acid buffer. We use
Ethidium Bromide (EtBr; warning: hazardous; use in fume hood with
appropriate personal protective equipment) to stain the RNA and to
detect the presence of MS2 RNA. We use Coomassie Blue R-250 to detect
the presence of MS2 coat protein. The combination of these staining
methods allows us to confirm the presence of both MS2 RNA and MS2 coat
protein within the resulting assemblies. We place three control samples
in lanes 2 through 4 that include MS2 RNA at 50 nM concentration (lane
2), wild-type MS2 at 50 nM concentration (lane 3), and 50 nM
concentration of digested MS2 RNA genome (lane 4) resulting from the
addition of RNase A. These controls allow us to compare the sizes of our
assembly products to systems of known sizes. We can also determine
whether the samples consist of MS2 VLPs formed during assembly or excess
strands of MS2 RNA. We place our assembly products in lanes 6 through
19. These samples are loaded and run at 21 °C at 100 V for
40 min and visualized using a Biosystems UV Imager (Azure, AZ1280).
§.§ Dynamic light scattering (DLS) and analysis
We use dynamic light scattering (Malvern ZetaSizer Nano ZS by Malvern
Panalytical) to determine the size distribution of particles that
assemble at 50 nM MS2 RNA concentration and coat-protein dimer
concentrations of 2.5, 5, 7.5, 10, 12.5, 15, and 20 μM. In each case
the samples are treated with RNase as described previously. We also
characterize the wild-type virus for comparison. We determine the size
distributions using the regularization inversion method provided by the
instrument software. <cit.>
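The regularized inversion itself is performed by the instrument software, but the final conversion from a diffusion coefficient to a hydrodynamic size follows the standard Stokes-Einstein relation. The sketch below illustrates only that conversion step; the decay rate, laser wavelength, detection angle, and solvent parameters are assumed placeholder values, not settings or results from our measurements.
import numpy as np

# Minimal Stokes-Einstein sketch: hydrodynamic diameter from a DLS decay rate.
# Gamma = D * q^2, and d_h = k_B T / (3 * pi * eta * D).
# All numerical inputs below are illustrative placeholders.
kB = 1.380649e-23            # J/K
T = 294.15                   # K (21 C)
eta = 1.0e-3                 # Pa*s, approximate viscosity of the aqueous buffer (assumed)
n = 1.33                     # refractive index of water (assumed)
wavelength = 633e-9          # m, typical He-Ne laser (assumed)
theta = np.deg2rad(173.0)    # backscatter detection angle (assumed)

q = 4.0 * np.pi * n * np.sin(theta / 2.0) / wavelength   # scattering vector magnitude
gamma = 2.0e3                # 1/s, hypothetical decay rate from the autocorrelation fit
D = gamma / q**2                              # translational diffusion coefficient, m^2/s
d_h = kB * T / (3.0 * np.pi * eta * D)        # hydrodynamic diameter, m
print(f"hydrodynamic diameter ~ {d_h * 1e9:.0f} nm")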
§.§ Transmission electron microscopy (TEM) and analysis
For transmission electron microscopy, we negatively stain samples that
have been assembled in bulk at coat-protein dimer concentrations of 5,
7.5, 10, 15, and 20 μM and treated with RNase A. We stain with 2%
aqueous uranyl acetate (warning: hazardous; use with appropriate
personal protective equipment) on 200 mesh carbon-coated copper TEM
grids (Polyscience, TEM-FCF200CU), then image with a Hitachi 7800 TEM
located at the Center for Nanoscale Systems at the Science and
Engineering Complex (CNS-SEC) at Harvard University. Images are taken at
20, 50, and 100 kV.
As a control, we mix 15 μM MS2 coat-protein dimers in assembly
buffer. This control is done to ensure that capsid-like or VLP-like
structures do not form in the absence of MS2 RNA.
§.§ Coarse-grained model for capsid assembly
We developed a patchy particle model for the capsomers interacting with
a polymer chain, which was used to model the RNA, to investigate their
assembly. A capsid is constructed from 12 subunits, each having C_5v
symmetry, where the center of each subunit sits on the vertex of an
icosahedron.<cit.>
§.§.§ Capsomer-Capsomer Interactions.
We coarse-grain the capsomeric building blocks as oblate hard
spherocylinders (OHSCs) decorated with five identical circular patches
conforming to C_5v symmetry. See Fig. <ref>
for a schematic illustration of the model capsomer. For hard oblate
spherocylinders, which were previously used as a model system to
investigate the phase behavior of discotic liquid
crystals, <cit.> the surface is defined by the points
at a distance L/2 from an infinitely thin disc of diameter
σ, giving the particle a total diameter D = σ + L and
thickness L. Note that an OHSC particle, comprising a flat cylindrical
core and a toroidal rim, has a uniaxial symmetry, whose orientation can
be described by a unit vector normal to the central disc,
𝐞̂. The aspect ratio of the OHSC particle is then given
by L^*=L/D. The pair interaction between two OHSC particles i and
j, with positions of the center of mass 𝐫_i and
𝐫_j and orientations 𝐞̂_i and
𝐞̂_j, respectively, is infinite if the shortest
distance between their central discs is less than L, and zero
otherwise:
v^ohsc_ij(𝐫_ij,𝐞̂_i,𝐞̂_j) =
∞ if d_ij<L
0 otherwise,
where 𝐫_ij = 𝐫_i - 𝐫_j and d_ij
is the shortest distance between the central discs for particles i and
j. We compute this shortest distance using the algorithm outlined in
Ref. .
We model the interactions between the circular patches by adapting the
Kern-Frenkel potential, <cit.> where the interactions between a
pair of circular patches are described by a square-well attraction
modulated by an angular factor corresponding to the relative
orientations between the patches. The angular factor is unity only when
the patches are oriented such that the vector connecting the centers of
the two particles passes through both the patches on their surfaces, and
zero otherwise. The width of the square well, δ_cap,
determines the range of the attraction between the patches relative to
the particle diameter. The depth of the square well,
ε_cap, governs the strength of the attractions. The
size of the patches is characterized by a half-angle θ. An
additional parameter φ defines the inclination of the plane that
contains the centers of the patches to the plane of the central
cylindrical core.
The total pair potential defining capsomer-capsomer interactions is then
v^cap_ij(𝐫_ij,𝐞̂_i,𝐞̂_j)
=
v^ohsc_ij(𝐫_ij,𝐞̂_i,𝐞̂_j)
+
∑^5_α,β v^sw,cap_αβ(𝐫_αβ)f(𝐫_αβ,𝐧̂_i,α,𝐧̂_j,β),
where r_ij=|𝐫_ij| is the center-to-center distance
between particles i and j, 𝐧̂_i,α is a unit
vector defining the orientation of patch α on particle i
(similarly, 𝐧̂_j,β is a unit vector corresponding to
patch β on particle j), and 𝐫_αβ is the
separation vector between the centers of patches α and β.
The term v^sw,cap_αβ is a square-well potential:
v^sw,cap_αβ(r_αβ) =
-ε_cap if r_αβ≤(1+δ_cap)σ
0 otherwise,
and
f(𝐫_αβ,𝐧̂_i,α,𝐧̂_j,β)
is the angular modulation factor,
f(𝐫_αβ,𝐧̂_i,α, 𝐧̂_j,β) =
1 if 𝐧̂_i,α·𝐫̂_αβ>cosθ
and 𝐧̂_j,β·𝐫̂_βα>cosθ
0 otherwise.
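To make the patch-patch term concrete, the sketch below evaluates the square-well plus angular-modulation contribution for one pair of capsomers, following the equations above with patch-centre separation vectors. It is a minimal illustration with our own function and variable names: the hard-core disc-disc overlap test is assumed to have been performed separately, and the patch positions and orientation vectors are taken as inputs (their construction is specified in the next paragraph).
import numpy as np

def patch_pair_energy(patch_pos_i, patch_dir_i, patch_pos_j, patch_dir_j,
                      eps_cap=1.0, delta_cap=0.2, sigma=1.0, theta_deg=25.0):
    """Sum of square-well patch-patch attractions between two capsomers.

    patch_pos_* : (5, 3) arrays of patch-centre positions in the lab frame
    patch_dir_* : (5, 3) arrays of unit vectors giving each patch's orientation
    The hard-core OHSC overlap test is assumed to have been passed already.
    """
    cos_theta = np.cos(np.deg2rad(theta_deg))
    cutoff = (1.0 + delta_cap) * sigma
    energy = 0.0
    for a in range(5):
        for b in range(5):
            r_ab = patch_pos_j[b] - patch_pos_i[a]   # vector from patch a (on i) to patch b (on j)
            d = np.linalg.norm(r_ab)
            if d > cutoff:
                continue                             # outside the square well
            r_hat = r_ab / d
            # angular modulation: each patch must point toward the other (Kern-Frenkel criterion)
            if (patch_dir_i[a] @ r_hat > cos_theta) and (patch_dir_j[b] @ (-r_hat) > cos_theta):
                energy -= eps_cap
    return energy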
The reference orientation of particle i is such that the normal to the
flat face of the oblate spherocylinder is aligned with the z-axis of
the global coordinate frame. We then define the reference position of
the first patch on particle i as 𝐩_i,1=(σ/2,0,0) and
the position of each other patch as a rotation about the z-axis of the
local coordinate frame of the particle such that
𝐩_i,n=𝐑_ψ·𝐩_1, where
𝐑_ψ is a rotation matrix defining a clockwise rotation
of angle ψ=(n-1)2π/5 about 𝐞̂_i with n=2,3,4,5.
The orientation of patch α on particle i is then
𝐧̂_i,α=sin(φ)𝐞̂_i+(2cos(φ)/σ)𝐩_α,
where φ is the angle between 𝐧̂_i,α and
the plane containing the flat face of the oblate spherocylinder.
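The patch construction just described can be transcribed directly; the sketch below builds the five patch positions and orientation vectors in the particle's reference frame and rotates them into the lab frame given a rotation matrix. The helper names and the use of NumPy are our own choices, not part of the original model.
import numpy as np

def reference_patch_frame(sigma=1.0, phi_deg=25.0):
    """Patch positions and orientations for a capsomer in its reference frame,
    where the disc normal e_hat is the +z axis (following the construction above)."""
    phi = np.deg2rad(phi_deg)
    e_hat = np.array([0.0, 0.0, 1.0])
    positions, directions = [], []
    p1 = np.array([sigma / 2.0, 0.0, 0.0])      # first patch on the rim
    for n in range(5):
        psi = n * 2.0 * np.pi / 5.0             # C_5v symmetry: rotate about the disc normal
        c, s = np.cos(psi), np.sin(psi)
        Rz = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])  # clockwise rotation about z
        p_n = Rz @ p1
        n_hat = np.sin(phi) * e_hat + (2.0 * np.cos(phi) / sigma) * p_n
        positions.append(p_n)
        # n_hat is already unit length by construction; normalize for numerical safety
        directions.append(n_hat / np.linalg.norm(n_hat))
    return np.array(positions), np.array(directions)

def to_lab_frame(R, center, positions, directions):
    """Rotate the reference-frame patch geometry by rotation matrix R and translate to `center`."""
    return center + positions @ R.T, directions @ R.T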
§.§.§ Polymer-Polymer Interactions.
Each RNA molecule is modeled as a flexible self-avoiding polymer – that
is, as a chain of hard-spheres, where neighboring beads in the chain are
connected by a harmonic spring: <cit.>
v_poly(r_ij) = κ(r_ij-σ_bl_b)^2,
where r_ij is the distance between beads i and j (where
j=i-1,i+1), κ sets the strength of the harmonic spring,
σ_b is the hard-sphere diameter of the beads in the polymer
chain, and l_b is a dimensionless parameter setting the equilibrium
bond length between neighboring beads.
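A direct transcription of the bond term (together with the hard-sphere overlap check between non-bonded beads) might look as follows; the default parameter values follow the Monte Carlo section, with l_b read as the dimensionless factor 1.05, and the function names are our own.
import numpy as np

def bond_energy(r_i, r_j, kappa=100.0, sigma_b=0.2, l_b=1.05):
    """Harmonic bond between neighbouring polymer beads: v = kappa * (r - sigma_b * l_b)^2."""
    r = np.linalg.norm(r_i - r_j)
    return kappa * (r - sigma_b * l_b) ** 2

def beads_overlap(r_i, r_j, sigma_b=0.2):
    """Hard-sphere excluded volume between (non-bonded) beads of diameter sigma_b."""
    return np.linalg.norm(r_i - r_j) < sigma_b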
§.§.§ Capsomer-Polymer Interactions.
We allow for interaction between the capsomers and the polymer
via an attractive patch on the surface of the capsomer. The
orientation of the patch is aligned with that of the oblate
spherocylinder. The beads of the polymer and the capsomer then interact
via an attractive square-well interaction, plus a hard-core
repulsion between their respective cores. The pair interaction when
particle i is a capsomer and particle j is a bead of a
polymer chain is
v^cap-pol_ij(𝐫_ij,𝐞̂_i) = v^hc_ij(𝐫_ij,
𝐞̂_i) + v^sw,cap-pol_ij(𝐫_ij)g(𝐫_ij,𝐞̂_i),
where v^hc_ij is the hard-core interaction
v^hc_ij(𝐫_ij,𝐞̂_i) =
∞ if d_ij<(L+σ_b)/2
0 otherwise,
where d_ij is the shortest distance between the capsomer and polymer
bead. We compute this distance by first computing the projection of the
polymer bead onto the plane spanned by the cylindrical core of the
capsomer:
𝐫^proj,i_j=𝐫_ij-(𝐫_ij·𝐞̂_i)𝐞̂_i. Then if
r^proj,i_j≤σ/2, the bead lies over the cylindrical
core of the capsomer, so the shortest distance vector between the two
particles is
𝐝_ij=𝐫_ij-𝐫^proj,i_j.
Otherwise, the closest point of the capsomer to the bead lies on its
edge. The shortest distance vector between the two particles is then
𝐝_ij=𝐫_ij-(σ/2)𝐫̂^proj,i_j.
The term v^sw,cap-pol_ij is the square-well interaction
between the patch on the face of the capsomer and the polymer bead:
v^sw,cap-pol_ij(𝐝_ij) =
-ε_cap-pol if d_ij≤(1+δ_cap-pol)σ
0 otherwise,
and g(𝐫_ij,𝐞̂_i) is the angular modulation
factor for the attractive capsomer-polymer interaction:
g(𝐫_ij,𝐞̂_i) =
1 if cos^-1(𝐫_ij·𝐞̂_i/r_ij) < π/2
0 otherwise.
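The geometry in this term reduces to a point-to-disc distance computation. The sketch below implements the projection step and the resulting hard-core plus square-well energy for a single capsomer-bead pair; it is a minimal illustration with our own names and sign conventions (the sign of the r_ij · ê test simply selects which flat face carries the attractive patch), and the default parameters are the values listed in the Monte Carlo section.
import numpy as np

def capsomer_bead_energy(r_cap, e_hat, r_bead,
                         sigma=1.0, L=0.5, sigma_b=0.2,
                         eps_cp=0.2, delta_cp=0.3):
    """Capsomer-polymer pair energy: hard core + one-sided square-well patch on the disc face."""
    r_ij = r_bead - r_cap                         # capsomer centre -> bead
    # project the bead onto the plane of the capsomer's central disc
    r_proj = r_ij - (r_ij @ e_hat) * e_hat
    if np.linalg.norm(r_proj) <= sigma / 2.0:     # bead lies above/below the flat core
        d_vec = r_ij - r_proj
    else:                                         # closest point of the capsomer is on its rim
        d_vec = r_ij - (sigma / 2.0) * r_proj / np.linalg.norm(r_proj)
    d = np.linalg.norm(d_vec)

    if d < (L + sigma_b) / 2.0:                   # hard-core overlap
        return np.inf
    # angular factor: the attractive patch covers only the face pointed to by e_hat
    faces_patch = (r_ij @ e_hat) / np.linalg.norm(r_ij) > 0.0   # angle to e_hat < pi/2
    if faces_patch and d <= (1.0 + delta_cp) * sigma:
        return -eps_cp
    return 0.0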
§.§ Monte Carlo simulations
We carry out two sets of Monte Carlo simulations in the NVT
ensemble using the model outlined above. For both simulations, we set
the volume to be V=600000σ^3, the reduced temperature to be
k_BT/ε_cap=0.12 (where k_B is
the Boltzmann constant, which is taken to be equal to one), and the
number of polymer chains N_poly=30, with each polymer chain
consisting of l_poly=150 beads. In one simulation, there are
N_cap=360 capsomers, and in the other, there are
N_cap=1500 capsomers. The total number of particles is then
N=N_polyl_poly+N_cap.
We take σ to be the unit of length and ε_cap
to be the unit of energy. We then choose the parameters defining the
system to be L=0.5σ, δ_cap=0.2,
θ=25^∘, φ=25^∘,
κ=100ε_cap, σ_b=0.2σ,
l_b=1.05σ_b,
ε_cap-pol=0.2ε_cap, and
δ_cap-pol=0.3σ. The geometry of the patches on the
capsomers is chosen to ensure that the particles can stabilize a
capsid-like structure where 12 subunits are fully connected and sit on
the vertices of an icosahedron. The choice of the aspect ratio of the
OHSC particles ensures that the cavity of a properly formed capsid can
accommodate cargo of a reasonable size. In turn, the length of each
polymer chain is chosen to be as long as possible with the constraint
that it still fit inside a capsid made of 12 capsomers. Additionally,
the strength of the polymer-capsomer interactions is chosen to ensure
that capsid growth proceeds through a nucleated pathway.
We carry out all Monte Carlo simulations with systems contained in a
cubic box under periodic boundary conditions, using the minimum image
convention. Each capsomer is treated as a rigid body for which the
orientational degrees of freedom are represented by quaternions. The
potential energy is calculated using a spherical cutoff of 1.7σ,
and a cell list is used for efficiency. Each Monte Carlo cycle consists
of N translational or rotational single-particle or cluster moves,
chosen at random with equal probabilities.
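Schematically, one such cycle can be sketched as follows. This is an outline only: the energy function, cell lists, cluster moves, and proper quaternion-based rotations are stubbed out or simplified (orientations are represented here just by the disc normal), so it indicates the structure of the move/accept loop rather than the production code used for the simulations.
import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(delta_E, kT=0.12):
    """Standard Metropolis criterion; kT is in units of eps_cap, as in the text."""
    return delta_E <= 0.0 or rng.random() < np.exp(-delta_E / kT)

def mc_cycle(positions, orientations, energy_fn, n_particles,
             max_trans=0.1, max_rot=0.1, kT=0.12):
    """One MC cycle = N single-particle trial moves (translation or rotation),
    chosen at random.  Cluster moves and cell lists are omitted in this sketch."""
    for _ in range(n_particles):
        i = rng.integers(n_particles)
        old_pos, old_ori = positions[i].copy(), orientations[i].copy()
        e_old = energy_fn(i, positions, orientations)   # energy_fn is a user-supplied stub
        if rng.random() < 0.5:                          # translation move
            positions[i] += max_trans * rng.uniform(-1.0, 1.0, size=3)
        else:                                           # small random reorientation of the axis
            orientations[i] += max_rot * rng.normal(size=3)
            orientations[i] /= np.linalg.norm(orientations[i])
        e_new = energy_fn(i, positions, orientations)
        if not metropolis_accept(e_new - e_old, kT):
            positions[i], orientations[i] = old_pos, old_ori   # reject: restore old state
    return positions, orientations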
§ AUTHOR CONTRIBUTIONS
All authors conceived the research goals and aims. LAW and AN curated
data and code. LAW and AN analyzed the results, with assistance from VNM
and DC. RFG, DC, and VNM obtained funding. LAW performed the experiments
and AN performed the simulations. All authors developed methods. RFG,
DC, and VNM administered the project. AN developed the computer software
for the simulations. RFG, DC, and VNM supervised the project. LAW and AN
performed validation studies. LAW and AN prepared visualizations, edited
by VNM. LAW, AN, and VNM prepared the original draft. All authors
reviewed and edited the submitted work.
§ CONFLICTS OF INTEREST
There are no conflicts to declare.
§ ACKNOWLEDGEMENTS
We thank Amy Barker and Peter Stockley at the University of Leeds for
initial stocks of MS2 and E. coli cells. We thank Tim Chiang,
Amelia Paine, Aaron Goldfain, and Danai Montalvan for helpful scientific
discussions. This research was partially supported by a National Science
Foundation (NSF) Graduate Research Fellowship under grant number
DGE-1745303, by NSF through the Harvard University Materials Research
Science and Engineering Center under NSF grant number DMR-2011754, by
the National Institute of General Medical Sciences of the National
Institutes of Health under grant numbers K99GM127751 and R00GM127751, by
the NSF-Simons Center for Mathematical and Statistical Analysis of
Biology at Harvard University under NSF grant number 1764269, and by the
Harvard Quantitative Biology Initiative. AN, VNM, and DC gratefully
acknowledge support from the Institute of Advanced Studies of the
University of Birmingham and the Turing Scheme. This work was performed
in part at the Harvard University Center for Nanoscale Systems (CNS), a
member of the National Nanotechnology Coordinated Infrastructure Network
(NNCI), which is supported by the National Science Foundation under NSF
grant number ECCS-2025158. The work was also performed in part at the
Harvard University Bauer Core Facility. Any opinion, findings, and
conclusions or recommendations expressed in this material are those of
the authors and do not necessarily reflect the views of the National
Science Foundation.
rsc
|
http://arxiv.org/abs/2307.04122v1 | 20230709082919 | Enhancing Low-Light Images Using Infrared-Encoded Images | [
"Shulin Tian",
"Yufei Wang",
"Renjie Wan",
"Wenhan Yang",
"Alex C. Kot",
"Bihan Wen"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
The low-light image enhancement task is essential yet challenging, as it is intrinsically ill-posed.
Previous works mainly focus on low-light images captured in the visible spectrum using pixel-wise losses, which limits the capacity to recover brightness, contrast, and texture details due to the small number of incoming photons.
In this work, we propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter, which allows for the capture of more photons and results in improved signal-to-noise ratio due to the inclusion of information from the IR spectrum. To verify the proposed strategy, we collect a paired dataset of low-light images captured without the IR cut-off filter, with corresponding long-exposure reference images with an external filter.
The experimental results on the proposed dataset demonstrate the effectiveness of the proposed method, showing better performance quantitatively and qualitatively. The dataset and code are publicly available at
Low-light enhancement, infrared photography, computational photography
§ INTRODUCTION
Due to the small number of photons captured by the camera, the images captured under low-light environments usually suffer from poor visibility, intense noise, and artifacts. To enhance the visibility of the images captured in low-light environments, previous works mainly focus on modelling the mapping
relationship between low-light images and corresponding normally-exposed images.
Specifically, current deep learning based methods have the following paradigms: learning an end-to-end model using paired datasets in <cit.>; GAN-based networks in <cit.>; encoder-decoder based models in <cit.>. However, the aforementioned methods are all based on existing visible information of the corrupted inputs on RGB space,
i.e., even if they can achieve pleasant perceptual quality, they cannot perform reliably due to the lack of incident photons <cit.>.
Besides, there are various limitations of the current mainstream methods, e.g.,
end-to-end training using pixel reconstruction loss leads to a regression-to-mean problem; GAN-based training requires careful hyper-parameter tuning and lacks enough supervision for noise removal.
Recently, infrared-light-based methods have attracted great attention in low-level computer vision tasks
as they introduce extra information from infrared spectroscopy.
Several works have previously explored the use of infrared light in computational photography. Specifically, Zhuo et al. <cit.> propose to use additional Near-Infrared (NIR) flash images instead of normal flash images to restore the details of noisy input images, which requires the user to take two photos of the same scene in a static environment and thus easily causes misalignment of the inputs;
Zhang et al. <cit.> propose a dual-camera system to capture a NIR image and a normal visible image of the same scene concurrently, while increasing the cost of devices during the acquisition of data.
In this paper, we propose a novel prototype that utilizes information from the infrared spectrum without the need for additional devices. Most solid-state (CCD/CMOS) based digital cameras are equipped with IR cutoff filters to avoid color distortion caused by the high sensitivity to IR light. Conversely, we remove the IR cutoff filter so that the CMOS can receive more incident photons located on the infrared spectrum, resulting in increased brightness, higher signal-noise ratio, and improved details as shown in Fig. <ref>. A paired dataset, namely IR-dataset, of IR-RGB images captured under low-light environments and their reference normally-exposed RGB images, is collected under different scenes. We further propose a novel flow-based model that can enhance visibility by modelling the distribution of normally-exposed images and address color distortion caused by the lack of IR cutoff filter through our proposed color alignment loss (CAL).
In summary, the contributions of our work are threefold:
*
We collect a paired dataset under a novel prototype, i.e., IR-RGB images captured under low-light environments and their normally-exposed reference RGB images, which supports future studies.
* We propose a flow-based model with our proposed color alignment loss, which can effectively address the color distortion caused by removing the IR-cut filter.
* We conduct extensive experiments on our collected datasets that demonstrate removing the IR-cut filter can lead to better-quality restored images in low-light environments. Besides, our proposed framework achieves superior performance compared with SOTA methods.
§ METHODOLOGY
§.§ Dataset Collection
The dataset is collected by a modified Nikon D3300 camera, in which the internal IR cut-off filter is removed.
The paired images are captured using a stable tripod and remote control to minimize misalignment. The low-light images are captured using the aforementioned device without IR cut-off filter. To capture the normally-exposed reference images in the visible light spectrum, an external IR filter, which has the same cut-off wavelength as the internal one, is carefully put in front of the lens to ensure that no camera shift occurs during the long exposure. To better explore the effectiveness of removing the IR cut-off filter in a low-light environment, we also collect a set of low-light images in the visible light spectrum (e.g., the example in Fig. <ref>).
We divide our dataset into a training set and an evaluation set. Specifically, the training set includes 236 pairs of low-light images without cut-off filter and their corresponding reference images (472 images in total). The evaluation set has 80 pairs of low-light images with and without the cut-off filter and their corresponding reference images.
§.§ Preliminary
Previously, the mainstream of deep learning based models is mainly based on pixel reconstruction loss. However, due to the limited capacity to distinguish the unwanted artifacts with the real distribution of normally-exposed images, they may lead to unpleasant visual quality with blurry outputs <cit.>.
Inspired by the extraordinary performance of flow-based models <cit.>, we found that learning conditional probability distribution can handle the aforementioned problem by including possibilities of various distributions of natural images. Specifically, the recent state-of-the-art LLFlow model <cit.> has shown great performance in using normalizing flow conditioned on corrupted inputs to capture the conditional distribution of normally exposed images. In this work, we inherited the
core idea of conditional flow with the likelihood estimation proposed in <cit.> as the backbone of our method.
The conditional probability density function of normally exposed images can be modified as follows:
f_cond(y|x) = f_z(Θ(y;x))|∂Θ/∂ y(y;x)|,
where Θ(·) is the invertible network with N invertible layers {θ ^1, θ ^2, …, θ ^N}, and the latent representation z=Θ(y;x) is mapped from the corrupted input x and the normally exposed image y. By characterizing the model with maximum likelihood estimation, the model can be optimized with the negative log-likelihood loss function:
ℒ_nll (x, y) = -log f_cond(y|x)
= -log f_z(Θ (y; x))
- ∑_n=0^N-1log |∂θ^n/∂ z^n(z^n; g^n(x_l))|,
where g(·) is the encoder that outputs conditional features of the layers θ ^i from the invertible network.
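To make the objective concrete, the sketch below implements the negative log-likelihood for a toy conditional flow with a single affine coupling layer conditioned on encoder features. It is a minimal PyTorch illustration of the equations above, not the LLFlow architecture; the layer sizes, module names, and tensor shapes are arbitrary choices.
import math
import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Toy invertible layer: the first half of y passes through unchanged; the second
    half is scaled and shifted with parameters predicted from (first half, conditioning)."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim // 2 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),          # predicts log-scale and shift for the second half
        )

    def forward(self, y, cond):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(torch.cat([y1, cond], dim=-1)).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)            # bounded scales for numerical stability
        z2 = y2 * torch.exp(log_s) + t
        z = torch.cat([y1, z2], dim=-1)
        log_det = log_s.sum(dim=-1)          # log |det dz/dy| of this layer
        return z, log_det

def nll_loss(y, cond, flow):
    """-log f(y|x) = -log N(z; 0, I) - log |det dz/dy|, averaged over the batch."""
    z, log_det = flow(y, cond)
    log_pz = -0.5 * (z ** 2).sum(dim=-1) - 0.5 * z.shape[-1] * math.log(2 * math.pi)
    return -(log_pz + log_det).mean()

# usage with hypothetical shapes: y is a flattened image patch, cond is an encoder feature
flow = ConditionalAffineCoupling(dim=8, cond_dim=4)
y, cond = torch.randn(16, 8), torch.randn(16, 4)
print(nll_loss(y, cond, flow).item())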
§.§ Color Alignment Loss
Although the benchmark methods perform well on the visible light spectrum, their performance degrades severely in some extreme cases due to the additional infrared light when they are directly applied to the collected dataset. To further alleviate the color distortion caused by removing the IR filter, and inspired by studies of histogram-matching techniques <cit.> used in remote sensing, we propose to minimize the divergence between the color distributions of the generated and reference images. Specifically, by representing the color information using differentiable histograms over the RGB color channels, we emphasize the color distributions of the generated and reference images rather than local details. To measure the differences between these distributions, we propose using the Wasserstein distance, which provides a more stable gradient compared with the commonly used KL divergence. The details are as follows:
§.§.§ Differentiable Histogram
Since the low-light images are taken without the existence of an IR cut-off filter, they admit more red light, which leads to color bias in the red channel. To suppress the color distortion, we propose to minimize the divergence of the channel-wise differentiable histogram between the generated and reference images.
Assume that x∈ℝ^C × H × W is an image where C, H and W refer to its number of channels, height, and width respectively.
To calculate its channel-wise histogram bounded by an arbitrary range [a;b], we fit the histogram with R uniformly spaced nodes t_r ∈{t_1 = a, t_2, …, t_R = b}, where the step size is Δ = (b-a)/(R-1). By matching the pixel values of each channel of the image to the histogram nodes, the value h_r of the histogram H at each node can then be calculated as:
h_r = ∑_i,j1/(1+δ(p_i,j-t_r)^2), r = 1,2,…, R
where δ is a constant scaling factor and the sum runs over the pixels p_i,j of the channel. After collating and normalizing h_r, we obtain the final one-dimensional histogram H(x) with size R for each channel.
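A direct implementation of this soft histogram might look as follows (a PyTorch sketch; the kernel and normalization follow the description above, while the bin count, value range, and scaling constant δ are placeholder choices).
import torch

def differentiable_histogram(x, bins=64, vmin=0.0, vmax=1.0, delta_scale=100.0):
    """Soft per-channel histogram of an image tensor x with shape (C, H, W).

    Each pixel contributes 1 / (1 + delta * (p - t_r)^2) to every node t_r,
    so the result is differentiable with respect to the pixel values."""
    C = x.shape[0]
    nodes = torch.linspace(vmin, vmax, bins, device=x.device)          # nodes t_1..t_R
    p = x.reshape(C, -1, 1)                                            # (C, H*W, 1)
    weights = 1.0 / (1.0 + delta_scale * (p - nodes) ** 2)             # (C, H*W, R)
    hist = weights.sum(dim=1)                                          # (C, R)
    return hist / hist.sum(dim=-1, keepdim=True)                       # normalize per channel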
§.§.§ Wasserstein Metric
Inspired by Wasserstein distance (W-distance) to measure the distance between distributions on a given metric space <cit.>, we propose to optimize the histograms of images using W-distance as follows
W_p (H_ŷ, H_y) = inf_ŷ∼ H_ŷ, y∼ H_y(𝔼||ŷ-y||^p)^1/p,
where H_ŷ and H_y denote differentiable histograms of the restored image ŷ and ground-truth image y respectively through Eq. (<ref>).
An explicit formula can be obtained since the dimension of the variable is 1 as follows,
W_p (H_ŷ, H_y) = ||F_ŷ^-1 - F_y^-1||_p
= (∫_a^b |F_ŷ^-1(α) - F_y^-1(α)|^p dα)^1/p,
where F_y and F_ŷ are the cumulative distribution of H_y and H_ŷ respectively. It could be further simplified when p=1 and the variable is discrete:
ℒ_CA = W_1(H_ŷ, H_y)
= ∑_ℝ |F_ŷ(t)-F_y(t)|d t.
The negative log-likelihood and the color alignment loss jointly define the total loss as follows
ℒ = ℒ_nll + λ·ℒ_CA,
where λ is a weighting constant to adjust the scales of color alignment loss for specific settings.
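Putting the pieces together, the color alignment loss reduces to an L1 distance between per-channel cumulative histograms. The sketch below builds on the differentiable_histogram helper above; the function names and the way the channel average is taken are our own choices.
import torch

def color_alignment_loss(pred, target, bins=64):
    """W1 distance between per-channel soft histograms of the restored and reference images."""
    h_pred = differentiable_histogram(pred, bins=bins)     # (C, R), normalized per channel
    h_target = differentiable_histogram(target, bins=bins)
    cdf_pred = torch.cumsum(h_pred, dim=-1)
    cdf_target = torch.cumsum(h_target, dim=-1)
    return (cdf_pred - cdf_target).abs().sum(dim=-1).mean()   # average over channels

def total_loss(nll, pred, target, lam=0.01):
    """L = L_nll + lambda * L_CA, with lambda = 0.01 as in the experimental settings."""
    return nll + lam * color_alignment_loss(pred, target)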
§ EXPERIMENTS
§.§ Experimental settings.
All the captured images are resized to the resolution of 400×600 for training and testing.
For our model, the weighting factor λ of CAL is set to 0.01 to cast the loss component value onto a similar numerical scale during training;
to simplify the task, we bound the range of the channel-wise histogram values to [0.0;1.0], and the bin size is set to 64 per channel.
§.§ Evaluations results.
To evaluate the performance of different methods on the proposed dataset, we retrain all the methods using the same training data, i.e., the training set of our proposed dataset. For a fair comparison, we explore training hyper-parameters of competitors in a wide range and report the best performance we obtained. We report the experimental results in Table <ref> and visual comparison in Fig. <ref>.
Based on our evaluation and analysis of the experimental results in the table, Retinex-theory-based methods exhibit limited generalization ability and unpleasant outputs, e.g., RetinexNet <cit.>, KinD <cit.>, KinD++ <cit.>. We conjecture the reason is that these methods assume the existence of an invariant reflectance map shared by the low-light inputs and the ground-truth images and require a shared network to extract both illumination and reflectance maps from them, which is not feasible in our setting. Besides, our method achieves the best performance among all competitors in terms of both fidelity and perceptual quality.
§.§ Ablation Study
1) Effectiveness of removing IR cut-off filter. To further verify the effect of removing the internal IR cut-off filter, we compare both quantitative and visual results that were restored from standard RGB space and IR-RGB space separately. For the models evaluated on the visible light spectrum, we utilize the pretrained/released models from SOTA methods trained on a large-scale dataset so that they have good generalization ability to different scenarios.
As shown in Table <ref>, the quantitative results our model obtains from IR-light-encoded images are much higher than those restored directly from the standard visible light spectrum. Besides, for the same method, especially for methods trained in a fully supervised manner, there exists an obvious performance gap when converting the input space from the IR-visible spectrum to the visible spectrum only, which demonstrates that removing the IR cut-off filter may lead to a higher signal-to-noise ratio in extremely dark environments.
Besides, as shown in Fig. <ref>, the reconstructed image with IR light performs better in recovering local features and details of the image.
2) The effectiveness of color alignment loss. To validate the assumption that the color alignment loss improves imaging quality, we compare the visual quality with and without it. As shown in Fig. <ref>, the result with CAL shows better perceptual quality with more accurate colors and higher contrast. In contrast, the result without CAL exhibits obvious color distortion and blurry edges.
§ CONCLUSION
In this paper, we present a novel strategy for tackling low-light image enhancement that introduces more incoming photons from the IR spectrum. The proposed prototype leads to a higher signal-to-noise ratio in extremely dark environments. Based on the proposed prototype, a paired dataset is collected under different scenarios.
Experimental results on the proposed dataset show that our method achieves the best performance in both quantitative results and perceptual quality. Our prototype sheds light on potential new designs for digital cameras that exploit spectroscopic information from the infrared spectrum, providing better image quality and more practical solutions for consumers.
IEEEbib
|
http://arxiv.org/abs/2307.04280v1 | 20230709231208 | Shaping the Emerging Norms of Using Large Language Models in Social Computing Research | [
"Hong Shen",
"Tianshi Li",
"Toby Jia-Jun Li",
"Joon Sung Park",
"Diyi Yang"
] | cs.HC | [
"cs.HC"
] |
Large Language Models in Social Computing Research]Shaping the Emerging Norms of Using Large Language Models in Social Computing Research
[email protected]
Carnegie Mellon University
Pittsburgh
PA
United States
[email protected]
Carnegie Mellon University
Pittsburgh
PA
United States
[email protected]
University of Notre Dame
Notre Dame
IN
United States
[email protected]
Stanford University
Stanford
CA
United States
[email protected]
Stanford University
Stanford
CA
United States
The emergence of Large Language Models (LLMs) has brought both excitement and concerns to social computing research. On the one hand, LLMs offer unprecedented capabilities in analyzing vast amounts of textual data and generating human-like responses, enabling researchers to delve into complex social phenomena. On the other hand, concerns are emerging regarding the validity, privacy, and ethics of the research when LLMs are involved. This SIG aims at offering an open space for social computing researchers who are interested in understanding the impacts of LLMs to discuss their current practices, perspectives, and challenges when engaging with LLMs in their everyday work and to collectively shape the emerging norms of using LLMs in social computing research.
<ccs2012>
<concept>
<concept_id>10003120.10003130.10003131</concept_id>
<concept_desc>Human-centered computing Collaborative and social computing theory, concepts and paradigms</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Human-centered computing Collaborative and social computing theory, concepts and paradigms
§ BACKGROUND
The development of large language models (LLMs) such as ChatGPT has brought both excitement and concerns to the field of social computing. On the one hand, LLMs present important opportunities that leverage the vast amount of human behavioral data captured in the model <cit.> to analyze and augment interactions in social computing systems. For instance, LLM-powered tools have been shown to assist researchers in analyzing textual data more efficiently <cit.>, replicate social science studies <cit.>, and enable new ways to prototype emergent social dynamics in social computing systems where it is infeasible or dangerous to conduct in-the-wild studies <cit.>.
On the other hand, concerns about applying LLMs to social computing research <cit.> that parallel previous discussions on privacy and consent in computational social science research <cit.> have also arisen. In particular, we focus on three key themes: validity, privacy and ethics. Can we ensure the validity of findings generated by a non-deterministic black-box model whose output may depend on the nuances captured in the input prompts? Can we protect the privacy of the subjects whose data maybe captured in the models' training data? And can we put in place proper guardrails that will encourage ethical application of LLMs in social computing research in the face of their risks, including but not limited to their potential misuse of LLMs for manipulation or deception?
The primary goal of this Special Interest Group (SIG) is to provide an inclusive platform for social computing researchers who wish to explore the implications of LLMs in their day-to-day research activities. In particular, we aim to explore the following questions:
* How to better design online studies to effectively prevent LLM-based spam? For example, how to prevent, recognize, and filter out spammers who use LLMs to fill out online surveys?
* How to utilize LLMs to analyze human-generated data and effectively evaluate their performance? For example, how to construct ground truth when using LLMs in analyzing qualitative data (e.g., open-ended survey responses)? What are the evaluation metrics should we use (e.g., shall we calculate inter-coder reliability)?
* How to accurately document the utilization of LLMs in research while simultaneously acknowledging its inherent limitations and biases in an effective manner?
* How to preserve the privacy of the study data when we use LLMs in data analysis? For example, how to effectively remove Personal Identifiable Information (PII) from survey responses, interview transcripts, and data scraped from the web?
* How to address ethical concerns related with using LLMs in social computing? For example, how to craft informed consent to inform study participants the potential of using LLMs in the study design? How to ethically use dataset containing real-world LLMs usage data in research?
* How to mitigate the equity concerns associated with the substantial cost, computational resources, and technical expertise required for employing LLMs, considering the unequal access to these resources among different research teams?
In an attempt to ground the above questions in a more concrete manner, below we discuss the impacts of LLMs in social computing research on the following four aspects: Data collection, data generation, data analysis as well as system deployment and evaluation.
§ THE IMPACT OF LLMS ON DATA COLLECTION
Social computing researchers are already experiencing the impacts of LLMs in their work. One particular area is during the data collection stage. On the one hand, the rapidly advancing capabilities of LLMs to mimic human behaviors have presented numerous promising opportunities. For instance, researchers can leverage LLMs to generate hypothetical scenarios (e.g., vignettes) to collect data from human participants. Additionally, LLM-based agents can also be introduced into multiplayer games, opening up new avenues for studying human behavior in interactive settings <cit.>.
However, these capabilities also give rise to a variety of concerns. For example, researchers have shown that chatbots powered by LLMs can effectively mimic survey respondents with diverse backgrounds <cit.>. This poses a challenge for researchers engaged in online studies, as it becomes increasingly difficult to differentiate AI-based spammers/bots from genuine human participants. Traditional methods used to identify and filter out spammers may no longer be effective in this context. Moreover, the introduction of LLMs in different parts of the research design, including using LLMs to analyze human-generated data and/or to simulate human behavior, will also likely require an update to the consent process. What are the best practices for crafting informed consent with human participants when LLMs are involved?
§ THE USE OF LLMS IN GENERATING SYNTHETIC DATA
LLMs capture the human behaviors that are represented in their training data <cit.>, and as such, these models can replicate these behaviors when prompted. Recent studies have shown that the human behavior generated by these models is qualitatively believable <cit.> and, at times, accurate enough to replicate some social science studies <cit.> and surveys <cit.>. This capacity for the model to generate human behavior offers opportunities to enable new ways of studying and augmenting social computing systems. For instance, these models can allow the designers of a social system to prototype the social dynamics that only emerge at scale, to iterate without exposing the users to potentially flawed system design. They can also bring about new ways of conducting computational social science by replicating results that were only achievable via crowd participants or empirical studies. However, the applications of LLMs inherit the imperfections and biases of the underlying model. Their output might depend on the subtle nuances of a prompt, while their biases might misrepresent certain populations. We posit that our community will need to continuously validate and benchmark the use of LLMs in social computing while emphasizing the importance of directly connecting with human stakeholders.
§ THE USE OF LLMS IN ANALYZING DATA
At the data analysis stage of social computing research, we are witnessing emerging use of LLMs (and recently, large pre-trained ones) in both qualitative and quantitative methods.
In qualitative research, academic tools such as PaTAT <cit.> and CollabCoder <cit.> as well as commercial products such as the AI Coding feature in ATLAS.ti[<https://atlasti.com/ai-coding>] utilize LLMs to aid in the qualitative coding process for textual data. Generally, these tools employ LLMs to analyze data, propose new codes for inductive coding procedures, understand the semantics of codes as users create and assign them, and suggest codes for data items. They also assist with the sensemaking process by summarizing, synthesizing, aggregating, or visualizing data.
For quantitative methods, AI and LLM-enabled assistants and “pair programmers” have been developed to recommend analyses and statistical procedures based on data characteristics, conduct data queries, and implement data analysis code in response to user prompts in exploratory data science scenarios <cit.>. Commercial products such as Tableau AI[<https://www.tableau.com/solutions/ai-analytics>] suggest metrics based on the data domain, generate insights, and allow users to ask questions about the underlying data using natural language.
Despite their potential in improving the efficiency of the analysis process, facilitating additional insight discovery, and reducing learning curves, the use of LLMs also presents new challenges and raises concerns regarding their application in social computing research. For instance, LLMs have been found to display biases and stereotypes in their outputs <cit.>, which could influence the data analysis process, especially in domains where understanding the socio-cultural context is essential for data interpretation. Moreover, the collaboration between human researchers and LLM-enabled tools in analysis tasks poses challenges in ensuring user autonomy, preventing over-reliance, and promoting effective human learning about data and patterns; this challenge can be amplified by the lack of interpretability of LLMs <cit.>. The application of LLMs in data analysis introduces additional data privacy challenges since many LLMs lack transparency regarding their usage of user data. This raises questions about creating informed consent protocols to notify participants about the use of LLMs in analyzing their data. Furthermore, the community needs new guidelines and norms regarding evaluation metrics when LLMs are used alongside human coders in data analysis.
§ THE METHODS FOR DEPLOYING/STUDYING LLM-ENABLED SOCIO-TECHNICAL SYSTEMS
The success of LLMs is going to have a profound impact on socio-technical systems across the many domains in which humans interact with technology through natural language.
Over the past few months, news articles have covered systems built with LLMs used for mental health support[<https://gizmodo.com/mental-health-therapy-app-ai-koko-chatgpt-rob-morris-1849965534>], education[<https://fortune.com/2023/02/22/chatgpt-ai-openai-educatoin-tutor-teaching-school/>], legal services[<https://gizmodo.com/donotpay-speeding-ticket-chatgpt-1849960272>], job searching advice[<https://www.forbes.com/sites/jackkelly/2023/04/03/how-to-leverage-ai-and-use-chatgpt-in-your-job-search-according-to-rsum-writers-and-career-coaches/?sh=728117a5ac5a>], and many other purposes.
While they demonstrate exciting opportunities for advancing these fields, concerns and backlash have been raised by the public.
For example, in January 2023, a company called Koko that offers mental health services tested responses generated by GPT-3 on thousands of its users.
A co-founder tweeted about their experiments and it soon sparked a heated discussion around the ethics of this research.
People questioned their informed consent process, the legitimacy of testing an unproven technology on real users, and even the appropriateness of involving AI in such a process at all.
Although the co-founder later clarified that Koko users knew the messages were co-written by a bot, it did not resolve all these concerns.
Researching systems built with LLMs is a delicate process.
However, the lack of clear guidelines for conducting research in this field will affect both researchers who design, develop, and deploy socio-technical systems powered by LLMs, and researchers who conduct empirical studies to investigate how real-world users interact with these systems.
In this SIG, we aim to take the first step towards the development of the guidelines.
The questions that need in-depth discussions include but are not limited to the following.
For researchers who aim to deploy a novel LLM-enabled system, how should they determine whether AI-based intervention is appropriate for the selected use case?
How should they disclose the use of LLMs to their users?
Deploying certain services (e.g., mental health support) may inevitably lead the users to expose sensitive information about themselves and other people (i.e., interdependent privacy <cit.>).
How should researchers process traces from the study that may involve such sensitive information?
Relatedly, the natural language interfaces give users a great amount of flexibility, which means the LLM-enabled services may not have definite use scenarios (e.g., ChatGPT) or the users may use them in unexpected ways.
Hence, researchers who want to study the use of LLM-enabled systems are facing challenges in handling unexpected privacy harms (e.g., economical, reputational, psychological harms <cit.>), which may affect the choice of tools for analysis (e.g., local vs. cloud-based tools).
§ CONCLUSION
The rapid development of Large Language Models (LLMs) has already had a significant impact on various aspects of social computing research, encompassing areas such as data collection, data generation, and data analysis, as well as system deployment and evaluation. However, alongside the excitement surrounding these advancements, concerns have emerged regarding issues of validity, privacy, and ethics. This Special Interest Group (SIG) aims to provide a much-needed space for researchers who are interested in comprehending the impacts of LLMs on their work. It offers an opportunity for the members of the community to openly discuss their current practices, perspectives, and challenges when engaging with LLMs in their day-to-day activities and to collectively shape the emerging norms of LLM-impacted social computing research.
|
http://arxiv.org/abs/2307.04044v1 | 20230708204524 | When greediness and self-confidence meet in a social dilemma | [
"Chaoqian Wang",
"Wenqiang Zhu",
"Attila Szolnoki"
] | physics.soc-ph | [
"physics.soc-ph",
"cond-mat.stat-mech",
"cs.GT",
"nlin.CG"
] |
When greediness and self-confidence meet in a social dilemma
Chaoqian Wang [1] (Conceptualization; Methodology; Writing), [email protected]
Wenqiang Zhu [2] (Methodology; Validation)
Attila Szolnoki [3] (Corresponding author; Conceptualization; Validation; Writing), [email protected]
[1] Department of Computational and Data Sciences, George Mason University, Fairfax, VA 22030, USA
[2] Institute of Artificial Intelligence, Beihang University, Beijing 100191, China
[3] Institute of Technical Physics and Materials Science, Centre for Energy Research, P.O. Box 49, H-1525 Budapest, Hungary
A greedy personality is usually accompanied by arrogance and confidence. This work investigates the cooperation success condition in the context of biased payoff allocation and self-confidence. The first component allows the organizer in a spatial public goods game to receive a different proportion of goods than other participants. The second aspect influences the micro-level dynamics of strategy updates, wherein players can maintain their strategy with a certain weight. Analytical results are obtained on square lattices under the weak selection limit. If the organizer attempts to monopolize the public goods, cooperation becomes more attainable. If the confidence increases, cooperation is inhibited. Consequently, these elements have conflicting effects on cooperation, and their simultaneous presence can result in a non-monotonic change of the critical synergy factor. Our theoretical findings, which are validated through Monte Carlo simulations, underscore the subtle implications of a common trait that may manifest as greediness or self-confidence under different circumstances.
* Examining biased allocation and self-confidence in spatial public goods game
* Calculating cooperation success conditions in weak selection limit
* Conflicting effects yield a non-monotonic critical synergy factor
* Analytical results validated via Monte Carlo simulations
Keywords: Public goods game; Weak selection; Biased allocation; Self-confidence; Evolutionary game theory
§ INTRODUCTION
The dynamism of various facets of reciprocity, be they direct, indirect, or network reciprocity, has been unequivocally demonstrated to wield significant influence over system behaviors, particularly when there is a need to sustain costly cooperation among self-interested, or more crudely put, selfish agents <cit.>. These mechanisms, chiefly concerned with pairwise interactions among players, have been observed to incorporate higher-order interactions <cit.>. The public goods game (PGG) is an illustrative example of such complex interactions, involving simultaneous decision-making processes through multi-body or group interactions <cit.>. Players may opt to contribute or abstain from contributing to a common pool, reaping the benefits of the overall contributions regardless of their individual decisions. In a spatial population, where players engage in limited yet enduring interactions with others, reciprocity manifests on an additional level <cit.>. Here, the intricate web of relations among agents means a player is not limited to a single game, but finds themselves immersed in several others. A pragmatic approach for a player would be to partake in the group where they serve as the central agent, encircled by proximate neighbors. Concurrently, said player also engages in games instigated by their neighbors. Consequently, a player positioned on a node of degree k finds themselves partaking in G=k+1 PGGs. This setup could potentially underpin a reciprocal mutual aid system which promotes a degree of cooperation.
Assuming the most rudimentary scenario where players consistently maintain their strategies across all the games they participate in and disregard strategy diversity <cit.>, there still exists considerable flexibility in the implementation of a realistic model. To elaborate, not all groups carry equal significance for a player, who may be more incentivized to invest effort in a venture they have personally initiated. Such dedication could be recognized and appreciated by others. This could be simply expressed by allocating enhanced contributions in a biased manner. Specifically, a 0≤ w_L ≤ 1 fraction of the total income is allotted to the central player while the remaining 1-w_L is distributed among the participating neighbors. The w_L=1/G scenario represents the traditional PGG model, where the income is equally distributed among all participants. The w_L=0 limit corresponds to the situation where the central player allocates all income to the neighbors. While this may initially seem irrational, there have been empirical studies indicating the existence of similar practices in certain tribes where partners generally offer a larger share to an associate in an ultimatum game, signaling their honest intentions <cit.>. The other extreme case, w_L=1, denotes that the central player retains all the benefits. Interestingly, even this seemingly greedy scenario can reflect a cooperative intent and represent a form of mutual aid <cit.>. One can contemplate a barn constructed by an entire Amish community, yet later solely utilized by a single farmer. This study aims to explore the potential ramifications when players exhibit a specific w_L value.
The unequal distribution of collective benefits has previously been the subject of extensive investigation <cit.>. For instance, how income is allocated remains a central issue in the ultimatum game <cit.>. For the current study, however, the diverse allocation within a group comprising several participants is of greater relevance. In certain scenarios, the individual portion accrued by a participant can be strongly contingent on their investment capability <cit.>. Additionally, the heterogeneous interaction topology is a critical aspect where income allocation is proportional to an agent's weight (degree) in the graph <cit.>. In more sophisticated model configurations, players possess an extra skill and keep track of their previous round earnings <cit.>. Yet, our current model is straightforward, emphasizing the fundamental element of biased allocation. For example, it can be applied to regular graphs where players have equal-sized neighborhoods, thus participating in an equal number of joint groups. Moreover, we presuppose homogeneous players who behave similarly and apply a pre-established allocation policy in each case. This characteristic could prove to be crucial, as it has been widely observed that a heterogeneous population, wherein players are unequal, could serve as a mechanism that encourages cooperation <cit.>.
Players may differ in their views about their groups, and their approach to strategies can also be distinct. For example, they may show reluctance to alter their existing strategies, a phenomenon explained from various perspectives. This could be a result of a specific cost related to change <cit.>, or it could be interpreted as a form of self-confidence <cit.>. This strategy change inertia or updating passivity has been identified as a separate mechanism that significantly influences the evolutionary process <cit.>. To quantitatively track this effect, we introduce a 0≤ w_R ≤ 1 weight parameter, which determines the likelihood of retaining the original strategy during the elementary dynamical process. At w_R=0, this effect is completely absent, and we revert to the traditional death–birth rule <cit.>. In the opposite extreme, when w_R=1, there is no proper evaluation because all agents adamantly stick to their original strategy, despite the theoretical cooperation success condition equating to the birth-death rule as w_R→ 1 <cit.>. In between these extremes, at w_R=1/G where G denotes the group size, the strategy of the central player and the strategies of the neighbors carry equal weight and we revert to the imitation rule <cit.>.
This work simultaneously considers the aforementioned effects within the framework of the PGG, with players situated on a square lattice. It is important to note that the biased allocation, which can also be interpreted as autocratic behavior, and the indifference towards alternative players representing diverse strategies may stem from a shared trait. If an individual exhibits higher levels of autocracy and retains more public goods when organizing a group, they may also display traits of arrogance, meaning they hold themselves in high regard and are not prone to learning from others' strategies. Therefore, the weight factors representing these traits can be similar in size. Moreover, all the mentioned details of the proposed model are strategy-neutral, making it unclear whether they support cooperation or not. Specifically, we assume the analytically feasible weak selection limit, where payoff values only slightly alter the reproductive fitness of competing strategies.
Our main goal is to determine the critical synergy factor for the success of cooperation based on the control parameters and to uncover the consequences of their simultaneous presence. In the next section, we will define our model, and our primary findings will be presented in Section <ref>. Monte Carlo simulations were also conducted to validate and confirm our theoretical results. The comparisons will be presented in Section <ref>. Our primary conclusions are summarized in Section <ref>, where potential implications will also be discussed.
§ MODEL
In the study of spatial population dynamics, the model utilizes an L× L square lattice with periodic boundary conditions. Hence, the total population N=L^2. Each individual, referred to as an agent, inhabits a vertex on the lattice and forms a group of G=k+1 members, comprising of itself and k of its neighbors. Consequently, each agent partakes in 1+k groups, either organized by itself or by its neighbors. The group formed by agent i is represented by Ω_i. Consequently, the collection of agent i's neighbors can be expressed as Ω_i∖{i}. The common choice of group size is G=5 (k=4, von Neumann neighborhood) or G=9 (k=8, Moore neighborhood).
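For concreteness, the following short Python sketch (our own illustration, not the authors' code; the function name is ours) builds the von Neumann or Moore neighborhood of every site on an L × L lattice with periodic boundary conditions.

import itertools

def neighbors(L, moore=False):
    """Map each site of an L x L periodic lattice to its k neighbors."""
    if moore:
        offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    else:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return {(x, y): [((x + dx) % L, (y + dy) % L) for dx, dy in offsets]
            for x, y in itertools.product(range(L), range(L))}

# Example: G = k + 1 = 5 (von Neumann) on a 5 x 5 lattice, so N = 25 and each site has k = 4 neighbors.
nbr = neighbors(5)
assert all(len(v) == 4 for v in nbr.values())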
During each elementary Monte Carlo step, a random agent i is selected to update its strategy s_i based on the payoff acquired from participating in the public goods games. Specifically, agent i organizes a public goods game within its group Ω_i. Each participant j∈Ω_i contributes a cost c>0 to the group if cooperating (s_j=1) or contributes nothing if defecting (s_j=0). The combined investments of all participants ∑_j∈Ω_is_j c is amplified by a synergy factor r>1 to generate the public goods, which are then distributed among group members.
Distinct from the conventional public goods game where the goods are evenly distributed, this study extends this notion by allowing the potential for uneven distribution between the organizer and other players. Specifically, the organizer is allotted a portion w_L (0≤ w_L≤ 1), while the remaining players are evenly allocated the remaining proportion 1-w_L; that is, each of the other players receives (1-w_L)/k. Hence, as the organizer, agent i receives a payoff of w_L r∑_j∈Ω_is_j c-s_i c from group Ω_i. Correspondingly, agent i also participates in groups organized by its neighbors g∈Ω_i∖{i}, receiving a payoff in those groups as a standard player. The payoff of agent i is the average over the k+1 groups, calculated by:
π_i = [1/(k+1)] { ( w_L r ∑_j∈Ω_i s_j c - s_i c )
        + ∑_g∈Ω_i∖{i} ( [(1-w_L)/k] r ∑_j∈Ω_g s_j c - s_i c ) }.
As underscored, Eq. (<ref>) broadens the traditional public goods game by incorporating the self-allocation parameter w_L. At w_L=0, all public goods are allocated to the other players, while at w_L=1, all public goods are allocated to the organizer. At w_L=1/G, the public goods are distributed equally, reducing Eq. (<ref>) to the traditional public goods game scenario.
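To illustrate Eq. (<ref>), a minimal sketch of the payoff computation could look as follows (hypothetical helper names of ours; it assumes the neighbors map defined above and a strategy dictionary s with values 0 or 1).

def payoff(i, s, nbr, r, c, w_L):
    """Average payoff of agent i over the k + 1 groups it participates in."""
    k = len(nbr[i])
    # Group organized by i: the organizer keeps the share w_L of the public goods.
    pool = r * c * (s[i] + sum(s[j] for j in nbr[i]))
    total = w_L * pool - s[i] * c
    # Groups organized by each neighbor g: agent i receives the share (1 - w_L) / k.
    for g in nbr[i]:
        pool = r * c * (s[g] + sum(s[j] for j in nbr[g]))
        total += (1.0 - w_L) / k * pool - s[i] * c
    return total / (k + 1)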
In alignment with previous studies <cit.>, the payoff π_i is transformed to fitness F_i=exp(δπ_i), where δ→ 0^+ denotes the weak selection limit of the selection strength. Therefore, a strategy with a higher fitness has only a marginal advantage in reproducing more frequently. To calculate the strategy updating probability, we also compute the payoffs of agent i's neighbors and convert them to fitness in the same manner. Consequently, the strategy of agent i is replaced by the strategy of an agent j∈Ω_i with probability W(s_i ← s_j), which is defined by the generalized death–birth rule <cit.>,
W(s_i ← s_j)
=
[(1-w_R)/k] · F_j / ( w_R F_i + [(1-w_R)/k] · ∑_ℓ∈Ω_i∖{i} F_ℓ ),   if j ∈ Ω_i∖{i},
w_R F_i / ( w_R F_i + [(1-w_R)/k] · ∑_ℓ∈Ω_i∖{i} F_ℓ ),   if j = i.
In Eq. (<ref>), the probabilities are normalized, ∑_j∈Ω_i W(s_i ← s_j)=1. Eq. (<ref>) extends the traditional death–birth rule <cit.> by introducing a self-learning weight w_R, following a similar logic to self-allocation. Agent i learns the strategy of agent j with probability proportional to fitness within the group Ω_i, taking self-learning into consideration. The case j=i means that agent i does not learn the strategy from others. At w_R=0, Eq. (<ref>) reduces to the traditional death–birth rule, where the fitness of agent i is disregarded. At w_R=1/G, Eq. (<ref>) simplifies to the imitation rule, where the fitness of agent i is compared on an equal footing with all neighbors. An elementary Monte Carlo step concludes once the randomly selected agent i in the system updates its strategy. A full Monte Carlo step encompasses N elementary steps, ensuring that the strategy of each agent is updated on average once.
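A sketch of this elementary step, again only illustrative and reusing the hypothetical payoff helper above, could read:

import math, random

def update_one(s, nbr, r, c, w_L, w_R, delta):
    """One elementary step of the generalized death-birth rule."""
    i = random.choice(list(nbr))
    k = len(nbr[i])
    candidates = [i] + nbr[i]
    F = {j: math.exp(delta * payoff(j, s, nbr, r, c, w_L)) for j in candidates}
    # Weight w_R * F_i for keeping the own strategy, (1 - w_R) / k * F_j for copying neighbor j;
    # random.choices normalizes these weights, matching the rule above.
    weights = [w_R * F[i]] + [(1.0 - w_R) / k * F[j] for j in nbr[i]]
    source = random.choices(candidates, weights=weights, k=1)[0]
    s[i] = s[source]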
Our model's key parameters are the weight factors, w_L and w_R, which dictate the bias in allocation and the rate of self-learning, respectively. In Fig. <ref>, we unveil the comprehensive parameter plane, highlighting the important weight values. These values have particular implications. When w_L=1, the total earnings from the communal pool are allocated solely to the focal player. Conversely, when w_L=0, every participant benefits from the pool while the focal player gains nothing. The midway scenario of w_L=1/G recaptures the traditional public goods game (PGG) where all group members equally share the proceeds from the common pool. Shifting our attention to the other weight factor, w_R=0 signifies the classic death–birth dynamics, where the new strategy of the focal player is exclusively drawn from the strategies of the neighbors. When w_R=1/G, all strategies present in the group are potential candidates in equal measure, which aligns with the well-established imitation rule. Finally, in the limit where w_R → 1, players tenaciously cling to their current strategies, thereby causing the evolution to stagnate. On the parameter plane, we also demarcate with a dotted line the trajectory where both weight factors are simultaneously altered. This trajectory represents the typical system behavior when both the effects of biased allocation and self-confidence are operative in the extended model with equal weights.
In the ensuing section, we explore and analyze how the critical synergy factor for cooperation success evolves in the presence of these skewed allocations and self-confidence biases.
§ THEORETICAL ANALYSIS
We assume that the evolutionary process begins from a state with the presence of N_C cooperative players. In essence, the initial proportion of cooperation is N_C/N. When the selection strength, denoted as δ, equals zero, the system defaults to the dynamics of the voter model <cit.>. In this state, cooperation will ultimately dominate the entire population with a probability of ρ_C=N_C/N <cit.>. Consequently, under a minimal selection strength of δ→ 0^+, if ρ_C>N_C/N, selection leans towards cooperation, which implies that evolution promotes the success of cooperative behavior. Here, ρ_C can be gauged by the average final proportion of cooperation obtained from independent runs.
Our objective in Section <ref> is to pinpoint the condition that enables the success of cooperation, while Section <ref> focuses on exploring the inherent features of this condition.
§.§ The condition for cooperation success
To discern the requisite condition for cooperation success, we utilize the identity-by-descent (IBD) method <cit.>. Initially, we introduce n-step random walks. Fundamentally, this refers to moving to a random neighbor during each 1-step random walk. The quantity after completing n-step walks is represented as x^(n), where x could be π, F, and s. The x^(n) quantity is indistinguishable among various agents since the square lattice is a vertex-transitive graph, where an agent cannot identify its location by examining the network structure.
Based on the random walks' definition, we can rewrite the payoff calculation in Eq. (<ref>) to obtain an agent's expected payoff from n steps away, as described in Eq. (<ref>),
π^(n) = [1/(k+1)] { ( w_L r (k s^(n+1) + s^(n)) c - s^(n) c ) + k ( [(1-w_L)/k] r (k s^(n+2) + s^(n+1)) c - s^(n) c ) }
       = ( w_L r/(k+1) - 1 ) s^(n) c + [ (1+(k-1)w_L)/(k+1) ] r s^(n+1) c + [ k(1-w_L)/(k+1) ] r s^(n+2) c,
which will later be useful for calculation.
To simplify, we assume a single initial cooperative player 1 in our analysis, implying that N_C=1 and evolution favors cooperation if ρ_C>1/N. In this scenario, the condition for cooperation success under weak selection can be rewritten as per the equivalent form <cit.> as shown in Eq. (<ref>),
⟨∂/∂δ(ℬ_1-𝒟_1)⟩_[ δ=0; s_1=1 ]>0,
where ⟨·⟩_[ δ=0; s_1=1 ] represents the expected value under neutral drift (δ=0) and a single cooperator (s_1=1). ℬ_1 is the probability of agent 1 passing on its strategy to a neighbor. This occurs when a neighbor i∈Ω_1∖{1} of agent 1 is randomly selected with a 1/N probability to update the strategy and learns agent 1's strategy with a W(s_i ← s_1) probability. In the same vein, 𝒟_1 is the probability of agent 1's strategy being supplanted by a neighbor. This transpires when agent 1 is randomly selected with a 1/N probability to update its strategy and learns the strategy of a neighbor j∈Ω_1∖{1} with a W(s_1 ← s_j) probability. By applying Eq. (<ref>) and F_i=exp(δπ_i), we arrive at the equations summarized as follows:
ℬ_1 = ∑_i∈Ω_1∖{1} (1/N) W(s_i ← s_1)
     = ∑_i∈Ω_1∖{1} (1/N) · [(1-w_R)/k] exp(δπ_1) / ( w_R exp(δπ_i) + [(1-w_R)/k] ∑_ℓ∈Ω_i∖{i} exp(δπ_ℓ) ),
𝒟_1 = (1/N) ∑_j∈Ω_1∖{1} W(s_1 ← s_j)
     = (1/N) ∑_j∈Ω_1∖{1} [(1-w_R)/k] exp(δπ_j) / ( w_R exp(δπ_1) + [(1-w_R)/k] ∑_ℓ∈Ω_1∖{1} exp(δπ_ℓ) ).
In the further steps, we substitute Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>) and compute it, as shown in Eq. (<ref>).
⟨ ∂/∂δ (ℬ_1-𝒟_1) ⟩_[ δ=0; s_1=1 ] > 0
⇔ [(1-w_R)/(Nk)] ( k ⟨π_1⟩_[ δ=0; s_1=1 ] - w_R ⟨ ∑_i∈Ω_1∖{1} π_i ⟩_[ δ=0; s_1=1 ] - [(1-w_R)/k] ⟨ ∑_i∈Ω_1∖{1} ∑_ℓ∈Ω_i∖{i} π_ℓ ⟩_[ δ=0; s_1=1 ] )
   - [(1-w_R)/(Nk)] ( -k w_R ⟨π_1⟩_[ δ=0; s_1=1 ] + ⟨ ∑_j∈Ω_1∖{1} π_j ⟩_[ δ=0; s_1=1 ] - (1-w_R) ⟨ ∑_ℓ∈Ω_1∖{1} π_ℓ ⟩_[ δ=0; s_1=1 ] ) > 0
⇔ ⟨π_1⟩_[ δ=0; s_1=1 ] - [2w_R/(k(1+w_R))] ⟨ ∑_j∈Ω_1∖{1} π_j ⟩_[ δ=0; s_1=1 ] - [(1-w_R)/(k^2(1+w_R))] ⟨ ∑_i∈Ω_1∖{1} ∑_ℓ∈Ω_i∖{i} π_ℓ ⟩_[ δ=0; s_1=1 ] > 0
⇔ π^(0) - [2w_R/(1+w_R)] π^(1) - [(1-w_R)/(1+w_R)] π^(2) > 0.
Following the definition of random walks starting from agent 1, we used Eq. (<ref>) in the last step of Eq. (<ref>).
π^(0) = ⟨π_1⟩_[ δ=0; s_1=1 ],   π^(1) = (1/k) ⟨ ∑_j∈Ω_1∖{1} π_j ⟩_[ δ=0; s_1=1 ],   π^(2) = (1/k^2) ⟨ ∑_i∈Ω_1∖{1} ∑_ℓ∈Ω_i∖{i} π_ℓ ⟩_[ δ=0; s_1=1 ].
To transform the strategy quantity s^(n) into walk quantity p^(n), the probability that one returns to the starting vertex after n-step random walks, we use the substitution in Eq. (<ref>), as suggested by Allen and Nowak <cit.>:
s^(n) - s^(n+1) = (μ/2)(N p^(n) - 1) + 𝒪(μ^2),
where μ→ 0^+ is an auxiliary parameter, which will be eliminated later, and 𝒪(μ^2)=0. Based on Eq. (<ref>), we can then further develop Eq. (<ref>):
s^(n) - [2w_R/(1+w_R)] s^(n+1) - [(1-w_R)/(1+w_R)] s^(n+2)
   = ( s^(n) - s^(n+1) ) + [(1-w_R)/(1+w_R)] ( s^(n+1) - s^(n+2) )
   = (μ/2) ( N p^(n) + [(1-w_R)/(1+w_R)] N p^(n+1) - 2/(1+w_R) ) + 𝒪(μ^2).
Utilizing this, we can further calculate the condition for cooperation success as given by Eq. (<ref>). First, we use Eq. (<ref>) to replace the payoff quantity π^(n) with strategy quantity s^(n). Second, we use Eq. (<ref>) to replace the strategy quantity s^(n) with walk quantity p^(n). This logic leads us to Eq. (<ref>):
π^(0) - [2w_R/(1+w_R)] π^(1) - [(1-w_R)/(1+w_R)] π^(2) > 0
⇔ ( w_L r/(k+1) - 1 ) s^(0) c + [ (1+(k-1)w_L)/(k+1) ] r s^(1) c + [ k(1-w_L)/(k+1) ] r s^(2) c
   - [2w_R/(1+w_R)] { ( w_L r/(k+1) - 1 ) s^(1) c + [ (1+(k-1)w_L)/(k+1) ] r s^(2) c + [ k(1-w_L)/(k+1) ] r s^(3) c }
   - [(1-w_R)/(1+w_R)] { ( w_L r/(k+1) - 1 ) s^(2) c + [ (1+(k-1)w_L)/(k+1) ] r s^(3) c + [ k(1-w_L)/(k+1) ] r s^(4) c } > 0
⇔ ( w_L r/(k+1) - 1 ) ( s^(0) - [2w_R/(1+w_R)] s^(1) - [(1-w_R)/(1+w_R)] s^(2) )
   + [ (1+(k-1)w_L)/(k+1) ] r ( s^(1) - [2w_R/(1+w_R)] s^(2) - [(1-w_R)/(1+w_R)] s^(3) )
   + [ k(1-w_L)/(k+1) ] r ( s^(2) - [2w_R/(1+w_R)] s^(3) - [(1-w_R)/(1+w_R)] s^(4) ) > 0
⇔ ( w_L r/(k+1) - 1 ) ( N p^(0) + [(1-w_R)/(1+w_R)] N p^(1) - 2/(1+w_R) )
   + [ (1+(k-1)w_L)/(k+1) ] r ( N p^(1) + [(1-w_R)/(1+w_R)] N p^(2) - 2/(1+w_R) )
   + [ k(1-w_L)/(k+1) ] r ( N p^(2) + [(1-w_R)/(1+w_R)] N p^(3) - 2/(1+w_R) ) > 0.
The walk quantity p^(n) can be directly perceived by analyzing the topology of the network structure. One remains in the starting vertex if not walking, so p^(0)=1. A single step cannot encompass leaving and returning to the starting vertex, hence p^(1)=0. On a square lattice, the probability that one returns to the starting vertex after two steps is p^(2)=1/k. Finally, the value of p^(3) varies from case to case. In short, p^(3)=0 for von Neumann neighborhood and p^(3)=3/64 for Moore neighborhood (for more details, refer to Ref. <cit.>).
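These return probabilities are easy to check numerically, as long as the lattice is large enough that such short walks cannot wrap around the boundary; the brute-force sketch below (ours) enumerates all n-step walks and counts the fraction that end at the starting vertex.

from itertools import product

def return_probability(n, moore=False):
    """Probability that an n-step random walk returns to its starting vertex."""
    if moore:
        steps = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    else:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    paths = list(product(steps, repeat=n))
    returns = sum(1 for path in paths
                  if sum(dx for dx, _ in path) == 0 and sum(dy for _, dy in path) == 0)
    return returns / len(paths)

print(return_probability(2), return_probability(3))              # 0.25 and 0.0 (von Neumann)
print(return_probability(2, True), return_probability(3, True))  # 0.125 and 0.046875 = 3/64 (Moore)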
By applying the previously mentioned values of p^(0)=1, p^(1)=0, and p^(2)=1/k, but retaining p^(3), we can further calculate Eq. (<ref>) to reach the final result as shown in Eq. (<ref>):
π^(0) - [2w_R/(1+w_R)] π^(1) - [(1-w_R)/(1+w_R)] π^(2) > 0
⇔ ( w_L r/(k+1) - 1 ) ( N - 2/(1+w_R) )
   + [ (1+(k-1)w_L)/(k+1) ] r ( [(1-w_R)/(1+w_R)] N/k - 2/(1+w_R) )
   + [ k(1-w_L)/(k+1) ] r ( N/k + [(1-w_R)/(1+w_R)] N p^(3) - 2/(1+w_R) ) > 0
⇔ r > (N-2+N w_R)(G-1)G / [ N(G-1)^2 (1-w_L)(1-w_R) p^(3) + N(G-2)(w_L - w_L w_R + w_R) + (N+2-2G)G ] ≡ r^⋆.
This provides the condition r>r^⋆ for cooperation success. Notably, the critical synergy factor r^⋆ is only a function of the population N, group size G, higher-order network structure p^(3), self-allocation w_L, and updating inertia w_R.
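The closed-form threshold is straightforward to evaluate; the small helper below (our own naming) reproduces, for example, the von Neumann values quoted in the simulation section below.

def r_star(N, G, p3, w_L, w_R):
    """Critical synergy factor r* from the condition derived above."""
    num = (N - 2 + N * w_R) * (G - 1) * G
    den = (N * (G - 1) ** 2 * (1 - w_L) * (1 - w_R) * p3
           + N * (G - 2) * (w_L - w_L * w_R + w_R)
           + (N + 2 - 2 * G) * G)
    return num / den

# p3 = 0 for the von Neumann neighborhood; p3 = 3 / 64 for the Moore neighborhood.
for w in (0.0, 0.3, 0.6):
    print(round(r_star(25, 5, 0.0, w, w), 4))   # 5.4118, 4.9493, 5.1351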
Table <ref> summarizes the primary outcomes related to the critical synergy factor, r^⋆, along with their corresponding large population limits (N→ +∞), derived from taking specific parameters in Eq. (<ref>). Following the convention in much of the prior literature, we consider the death–birth rule (w_R=0) as the benchmark scenario. In this context, we present the reduced r^⋆ values corresponding to three distinct scenarios: equal allocation (w_L=1/G), allocation to other players (w_L=0), and allocation to the organizer (w_L=1). In addition, we explore a situation where the self-allocation and updating inertia are congruent (w_L=w_R≡ w), leading to consistency in the self-loops of allocation and updating. The trajectories of this case in the w_R-w_L parameter plane are visually represented in Fig. <ref> for an intuitive understanding.
Table <ref> offers additional insights into the main outcomes associated with the critical synergy factor, r^⋆, in relation to specific neighborhood types. We concentrate on two commonly used cases: von Neumann neighborhood and Moore neighborhood. The former, von Neumann neighborhood, lacks triangle motifs, resulting in p^(3)=0. Conversely, the latter, Moore neighborhood, is a rudimentary structure on a two-dimensional lattice that incorporates overlapping neighbors, yielding p^(3)=3/64 <cit.>.
§.§ The conflict between self-allocation and self-confidence
Utilizing the analytical expression of the critical synergy factor r^⋆, we can examine the combined impact of self-allocation w_L and self-confidence w_R on cooperation. From an intuitive perspective, a decrease in the r^⋆ value needed for cooperation success (i.e., r>r^⋆) fosters cooperation.
By referring to Eq. (<ref>), we can confirm that ∂ r^⋆/∂ w_L<0 holds for the specified neighborhood types. This indicates that an increase in self-allocation diminishes r^⋆ and thereby enhances cooperation. Fig. <ref>(a) portrays the critical synergy factor r^⋆ as a function of self-allocation w_L for von Neumann neighborhood under the condition of death–birth updating (w_R=0). Regardless of the population size, directing the public goods towards the organizer invariably stimulates cooperation.
Similarly, we find ∂ r^⋆/∂ w_R>0 for the designated neighborhood types. This suggests that an increase in self-confidence, or alternatively, an increase in updating inertia, acts to obstruct cooperation. This effect aligns with observations made in simpler models by prior studies <cit.>. With the von Neumann neighborhood and w_L=1/G, the critical synergy factor r^⋆ as a function of updating inertia is depicted in Fig. <ref>(b). Across varying population sizes, an increase in updating inertia consistently hampers cooperation.
The aforementioned observations create a fascinating dynamic when both effects coexist. Specifically, the divergent outcomes of biased allocation and self-confidence pose a question: how does the system respond when we enhance the weights of these factors simultaneously? Does it stimulate or inhibit cooperation? To explore this, we set w_L=w_R≡ w and illustrate the critical synergy factor r^⋆ as a function of w in Fig. <ref>(c). The figure reveals that an initial increase in the self-loop of allocation and strategy updating fosters cooperation, but once the weight surpasses a certain level, this effect reverses, ultimately discouraging cooperation. There exists an optimal self-loop weight w_0, which minimizes the r^⋆ value and is thus most beneficial for cooperation. We can derive the analytical expression for this optimal self-loop value by solving ∂ r^⋆/∂ w=0. The solution is given as:
w_0 = (1/N) ( -(N-2) + √(2) √( 2(N-1)^2 + N(N-G)(G-1) / [ (G-1)^2 p^(3) - G + 2 ] ) ),
which is a function of population size N, group size G, and the higher-order network structure p^(3). This weight level provides the most favorable condition for the evolution of cooperation.
By setting N→ +∞ in Eq. (<ref>), we obtain the large population limit of w_0 as:
w_0 = -1 + √(2) √( 2 + (G-1) / [ (G-1)^2 p^(3) - G + 2 ] ).
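As a sanity check (our sketch, reusing the r_star helper above), this closed form can be compared against a brute-force minimization of r^⋆ along the diagonal w_L = w_R = w for a large population.

import math

def w0_large_N(G, p3):
    """Large-population limit of the optimal self-loop weight w_0."""
    return -1 + math.sqrt(2) * math.sqrt(2 + (G - 1) / ((G - 1) ** 2 * p3 - G + 2))

N, G, p3 = 10 ** 6, 5, 0.0                                 # von Neumann neighborhood
grid = [i / 10000 for i in range(10000)]
w_numeric = min(grid, key=lambda w: r_star(N, G, p3, w, w))
print(round(w0_large_N(G, p3), 4), round(w_numeric, 4))    # both close to 0.1547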
To provide a broader perspective on the simultaneous influences of these factors, we introduce a heat map of the critical synergy factor r^⋆ across the complete w_R-w_L parameter plane in Fig. <ref>. The diagonal dotted line within the figure represents the trajectory discussed in Fig. <ref>(c). This plot reveals certain general characteristics regarding the collective impact of self-loop effects. Specifically, the immediate effect of biased payoff allocation on the critical synergy factor is more pronounced when w_R is small, whereas the w_R dependency of r^⋆ is moderate for large w_R values. The inverse is true when considering the w_R dependency of r^⋆, as it changes more dramatically when w_L is low, while the w_R dependency remains moderate for small w_L values.
When maintaining the aforementioned diagonal trajectory, we can identify some general trends regarding the w-dependence. Specifically, the r^⋆ value at w=0 is lower than the one at w=1, that is, r^⋆|_w=0 < r^⋆|_w=1. Applying w=1 and w=0 in Eq. (<ref>), we find r^⋆|_w=1=(N-1)G/(N-G) and r^⋆|_w=0=(N-2)(G-1)G/[N(G-1)^2 p^(3)+(N+2-2G)G], respectively. Since N(G-1)^2 p^(3)≥0, we have r^⋆|_w=0 ≤ (N-2)(G-1)/(N+2-2G), and a direct expansion gives (N-1)G(N+2-2G)-(N-2)(G-1)(N-G)=N(N-(G-1)^2-1), which is positive whenever N>(G-1)^2+1. Hence r^⋆|_w=0 < r^⋆|_w=1 for all population sizes considered in this work, and for the Moore neighborhood the positive p^(3) term lowers r^⋆|_w=0 even further. This indicates that, on a larger scale, when both self-loop effects are significant, the outcome is dominated by the impact of self-confidence, which hinders cooperation. This effect is more pronounced in a topology containing triangle motifs, such as the Moore neighborhood where each player forms a G=9-member group with overlapping neighbors. This case is discussed in more detail in Appendix <ref>.
§ NUMERICAL SIMULATION
To validate our theoretical analysis, we performed Monte Carlo simulations. Initially, each agent is randomly assigned either cooperation or defection, such that N_C≈ N/2. Consequently, as outlined at the beginning of Section <ref>, evolution favors cooperation if ρ_C>1/2. To compute the expected cooperation level ρ_C, we permit up to 40,000 full Monte Carlo steps per run (if all agents become either cooperators or defectors, that specific run may be terminated earlier), and record the cooperation proportion at the last step as the result of each run. The expected cooperation level ρ_C is then the average across multiple independent runs. Based on our empirical exploration, for N=25, ρ_C is the average over 1,000,000 runs; for N=400, ρ_C is the average over 10,000 runs; for N=10000, ρ_C is obtained from a single run.
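A compact, unoptimized version of this protocol (our illustration, reusing the hypothetical helpers neighbors, payoff and update_one sketched in the model section above) could look as follows.

import random

def run(L, r, c, w_L, w_R, delta, max_steps=40000, moore=False):
    """Return the final fraction of cooperators of a single run."""
    nbr = neighbors(L, moore)
    sites = list(nbr)
    s = {site: random.randint(0, 1) for site in sites}   # roughly N_C = N / 2 initially
    N = len(sites)
    for _ in range(max_steps):
        for _ in range(N):                               # one full Monte Carlo step
            update_one(s, nbr, r, c, w_L, w_R, delta)
        n_c = sum(s.values())
        if n_c == 0 or n_c == N:                         # absorbing state reached: stop early
            break
    return sum(s.values()) / N

# rho_C is then estimated by averaging run(...) over many independent runs,
# e.g. with c = 1 and a small selection strength such as delta = 0.01.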
Using the von Neumann neighborhood, Fig. <ref> illustrates the expected cooperation level ρ_C as a function of the synergy factor r at w=0, w=0.3, and w=0.6. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) gives r^⋆=5.4118, 4.9493, 5.1351 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we get r^⋆=4.0612, 4.0280, 4.2992. In Fig. <ref>(c), where N=10000, we obtain r^⋆=4.0024, 3.9835, 4.2571. As can be observed, the cooperation level ρ_C rises with an increase in the synergy factor r, and ρ_C>0.5 when r>r^⋆, thus affirming the theoretical analysis.
§ CONCLUSION
Collaborating on a project does not necessarily equate to equal benefits from the resulting income. For instance, an individual acting as the organizer of a group may allocate a different proportion of public goods to themselves than to other participants. If everyone follows the same protocol, allocating more public goods to the organizer boosts the gains in the game managed by oneself, but simultaneously leads to fewer gains in games organized by neighbors. Consequently, the impact of biased allocation on the level of cooperation is far from a simple question. Prior studies have demonstrated that this seemingly strategy-neutral mechanism actually promotes cooperation by preventing the diffusion of public goods <cit.>.
On the other hand, if an individual allocates more public goods to themselves as an organizer, this attitude might also imply that the individual is more authoritative and confident, and less inclined to change their current strategy. Past observations have revealed that this inertia in strategy updating inhibits cooperation by slowing the aggregation of cooperators <cit.>. Thus, it can be concluded that biased allocation and strategy updating inertia play opposing roles in the evolution of cooperation.
Assuming that the measure of biased allocation and updating inertia are interconnected, this study focuses on their simultaneous presence and explores how they jointly influence cooperation. We derive a theoretical solution on a two-dimensional square lattice and identify the critical synergy factor r^⋆ required for cooperation success. Consequently, cooperators are more likely to dominate when r>r^⋆. Our primary interest lies in how r^⋆ fluctuates on the plane of weight factors, which determine biased allocation and the extent of strategy updating inertia. Upon introducing the self-loop w of allocation and updating, it initially promotes and later, for larger w values, inhibits cooperation. In this scenario, we can identify an optimal self-loop value w_0 that is most conducive to cooperation. In other cases, where the network topology contains triangle motifs, the impact of strategy inertia is more potent, thus increasing the self-loop w tends to hamper cooperation.
Moreover, we theoretically demonstrate that the cooperation threshold at w=0 is smaller than at w=1 for the population sizes considered. This suggests that the inhibitory effect of self-confidence on cooperation generally outweighs the facilitative effect of self-allocation on cooperation when the allocation and updating self-loop w takes extreme values. These observations indicate that although biased allocation may appear to be an unfair protocol, its impact on cooperation is decidedly not detrimental. However, the self-confidence-driven strategy updating inertia is always harmful, and cannot be offset by the effect of allocation.
§ ACKNOWLEDGEMENT
A.S. was supported by the National Research, Development and Innovation Office (NKFIH) under Grant No. K142948.
§ MOORE NEIGHBORHOOD
Our primary results are summarized in Eq. (<ref>). It proposes that topology slightly influences the critical synergy factor r^⋆ through the parameter G. However, a more complex consequence is embodied in the value of p^(3). This factor creates a stark distinction between the von Neumann and Moore neighborhoods, regardless of using the same vertex-transitive square lattice. For the von Neumann neighborhood, the three-step quantity p^(3)=0, as there is no triangle motif. To explore the consequences of a non-zero p^(3), we examine the Moore neighborhood, the simplest two-dimensional lattice that contains higher-order structure where p^(3)=3/64 <cit.>.
The first two panels of Fig. <ref> confirm that the separate impacts of biased allocation and strategy updating inertia are similar to those observed for the von Neumann neighborhood. However, their combined influence on r^⋆ diverges from the previous observation, as the self-confidence-based inertia is significantly stronger in this context, making the increase of the mutual weight factor w detrimental to the success of cooperation.
This effect is generally valid and becomes evident when we compare the color-coded heat map of the critical synergy factor r^⋆ on the w_R-w_L parameter plane. The main difference between the last panels of Fig. <ref> and Fig. <ref> is the minimal change in the value of r^⋆ as we move horizontally on the parameter plane of Fig. <ref>(c). This suggests that changes in w_L have only a minimal impact on cooperation, because the value of w_R is the determining factor here.
Our final Fig. <ref> presents a comparison of the results from our analytical and numerical calculations. In Fig. <ref>(a), where N=25, substituting all parameter values into Eq. (<ref>) yields r^⋆=10.6154, 10.6087, 11.4000 for w=0, 0.3, and 0.6, respectively. Similarly, in Fig. <ref>(b), for N=400, we obtain r^⋆=6.1546, 6.8158 for w=0, 0.3. In Fig. <ref>(c), where N=10000, we calculate r^⋆=6.0060, 6.6725, 7.5061 for w=0, 0.3, and 0.6. As before, the simulations confirm our theoretical predictions well.
53
natexlab#1#1
[#1],#1
[Nowak(2006)]nowak_s06
authorM. A. Nowak,
titleFive rules for the evolution of cooperation,
journalScience volume314
(year2006) pages1560–1563.
[Perc et al.(2013)Perc, Gómez-Gardeñes, Szolnoki, and
Floría and Y. Moreno]perc_jrsi13
authorM. Perc, authorJ. Gómez-Gardeñes,
authorA. Szolnoki, authorL. M. Floría and Y.
Moreno,
titleEvolutionary dynamics of group interactions on
structured populations: a review,
journalJ. R. Soc. Interface volume10
(year2013) pages20120997.
[Sigmund(2010)]sigmund_10
authorK. Sigmund, titleThe Calculus of Selfishness,
publisherPrinceton University Press, addressPrinceton,
NJ, year2010.
[Wang et al.(2022)Wang, Dai, He, Yu, and Shen]wang_jw_pla22
authorJ. Wang, authorW. Dai, authorJ. He,
authorF. Yu, authorX. Shen,
titlePersistent imitation paves the way for cooperation in
public goods game,
journalPhys. Lett. A volume447
(year2022) pages128302.
[Xiao et al.(2022)Xiao, Zhang, Li, Dai, and Yang]xiao_sl_epjb22
authorS. Xiao, authorL. Zhang, authorH. Li,
authorQ. Dai, authorJ. Yang,
titleEnvironment-driven migration enhances cooperation in
evolutionary public goods games,
journalEur. Phys. J. B volume95
(year2022) pages67.
[Wang and Szolnoki(2022)]wang2022reversed
authorC. Wang, authorA. Szolnoki,
titleA reversed form of public goods game: equivalence and
difference,
journalNew J. Phys. volume24
(year2022) pages123030.
[Hua and Liu(2023)]hua_sj_csf3
authorS. Hua, authorL. Liu,
titleFacilitating the evolution of cooperation through
altruistic punishment with adaptive feedback,
journalChaos, Solit. and Fract. volume173
(year2023) pages113669.
[Szolnoki et al.(2009)Szolnoki, Perc, and Szabó]szolnoki_pre09c
authorA. Szolnoki, authorM. Perc,
authorG. Szabó,
titleTopology-independent impact of noise on cooperation
in spatial public goods games,
journalPhys. Rev. E volume80
(year2009) pages056109.
[Yu et al.(2022)Yu, Wang, and He]yu_fy_csf22
authorF. Yu, authorJ. Wang, authorJ. He,
titleInequal dependence on members stabilizes cooperation
in spatial public goods game,
journalChaos, Solit. and Fract. volume165
(year2022) pages112755.
[Wang et al.(2021)Wang, Pan, Ju, and He]wang2021public
authorC. Wang, authorQ. Pan, authorX. Ju,
authorM. He,
titlePublic goods game with the interdependence of
different cooperative strategies,
journalChaos. Solit. and Fract. volume146
(year2021) pages110871.
[Wang and Huang(2022)]wang2022between
authorC. Wang, authorC. Huang,
titleBetween local and global strategy updating in public
goods game,
journalPhysica A volume606
(year2022) pages128097.
[Wang and Sun(2023a)]wang2023public
authorC. Wang, authorC. Sun,
titlePublic goods game across multilayer populations with
different densities,
journalChaos. Solit. and Fract. volume168
(year2023a) pages113154.
[Wang and Sun(2023b)]wang_cq_c23
authorC. Wang, authorC. Sun,
titleZealous cooperation does not always promote
cooperation in public goods games,
journalChaos volume33
(year2023b) pages063111.
[Xie et al.(2023)Xie, Liu, Wang, and Jiang]xie_k_csf23
authorK. Xie, authorX. Liu, authorH. Wang,
authorY. Jiang,
titleMulti-heterogeneity public goods evolutionary game on
lattice,
journalChaos. Solit. and Fract. volume172
(year2023) pages113562.
[Ding et al.(2023)Ding, Wang, Zhao, Gu, and Wang]ding_r_csf23
authorR. Ding, authorX. Wang,
authorJ. Zhao, authorC. Gu,
authorT. Wang,
titleThe evolution of cooperation in spatial public goods
games under a risk-transfer mechanism,
journalChaos, Solitons and Fractals volume169
(year2023) pages113236.
[Zhang et al.(2010)Zhang, Zhang, Xie, and Wang]zhang_cy_epl10
authorC. Zhang, authorJ. Zhang,
authorG. Xie, authorL. Wang,
titleDiversity of game strategies promotes the evolution
of cooperation in public goods games,
journalEPL volume90 (year2010)
pages68005.
[Henrich et al.(2001)Henrich, Boyd, Bowles, Camerer, Fehr, Gintis, and
McElreath]henrich_aer01
authorJ. Henrich, authorR. Boyd,
authorS. Bowles, authorC. Camerer,
authorE. Fehr, authorH. Gintis,
authorR. McElreath,
titleIn search of homo economicus: behavioral experiments
in 15 small-scale societies,
journalAm. Econ. Rev. volume91
(year2001) pages73–78.
[Nowak et al.(1995)Nowak, May, and Sigmund]nowak_sa95
authorM. A. Nowak, authorR. M. May,
authorK. Sigmund,
titleArithmetics of mutual help,
journalScientific American volume272
(year1995) pages76–81.
[Allen et al.(2013)Allen, Gore, and Nowak]allen2013spatial
authorB. Allen, authorJ. Gore, authorM. A.
Nowak,
titleSpatial dilemmas of diffusible public goods,
journalElife volume2 (year2013)
pagese01169.
[Su et al.(2018)Su, Wang, and Stanley]su2018understanding
authorQ. Su, authorL. Wang, authorH. E.
Stanley,
titleUnderstanding spatial public goods games on
three-layer networks,
journalNew J. Phys. volume20
(year2018) pages103030.
[Zhang et al.(2012)Zhang, Shi, Liu, and Wang]zhang_hf_pa12
authorH. Zhang, authorD. Shi, authorR. Liu,
authorB. Wang,
titleDynamic allocation of investments promotes
cooperation in spatial public goods game,
journalPhysica A volume391
(year2012) pages2617–2622.
[Cong et al.(2016)Cong, Li, Wang, and Zhao]cong_r_epl16
authorR. Cong, authorK. Li, authorL. Wang,
authorQ. Zhao,
titleCooperation induced by wise incentive allocation in
spontaneous institution,
journalEPL volume115 (year2016)
pages38002.
[Szolnoki and Chen(2020)]szolnoki_amc20
authorA. Szolnoki, authorX. Chen,
titleBlocking defector invasion by focusing on the most
successful partner,
journalAppl. Math. Comput. volume385
(year2020) pages125430.
[Wang et al.(2018)Wang, He, and Chen]wang_q_amc18
authorQ. Wang, authorN. He, authorX. Chen,
titleReplicator dynamics for public goods game with
resource allocation in large populations,
journalAppl. Math. Comput. volume328
(year2018) pages162–170.
[Bin and Yue(2023)]bin_l_amc23
authorL. Bin, authorW. Yue,
titleCo-evolution of reputation-based preference selection
and resource allocation with multigame on interdependent networks,
journalAppl. Math. Comput. volume456
(year2023) pages128128.
[Güth et al.(1982)Güth, Schmittberger, and
Schwarze]guth_jebo82
authorW. Güth, authorR. Schmittberger,
authorB. Schwarze,
titleAn experimental analysis of ultimatum bargaining,
journalJ. Econ. Behav. Org. volume3
(year1982) pages367–388.
[Sigmund et al.(2002)Sigmund, Fehr, and Nowak]sigmund_sa02
authorK. Sigmund, authorE. Fehr, authorM. A.
Nowak,
titleThe economics of fair play,
journalSci. Am. volume286
(year2002) pages82–87.
[Szolnoki et al.(2012)Szolnoki, Perc, and Szabó]szolnoki_prl12
authorA. Szolnoki, authorM. Perc,
authorG. Szabó,
titleDefense mechanisms of empathetic players in the
spatial ultimatum game,
journalPhys. Rev. Lett. volume109
(year2012) pages078701.
[Wang et al.(2014)Wang, Chen, and Wang]wang_xf_srep14
authorX. Wang, authorX. Chen,
authorL. Wang,
titleRandom allocation of pies promotes the evolution of
fairness in the ultimatum game,
journalSci. Rep. volume4
(year2014) pages4534.
[Chen et al.(2015)Chen, Wu, Li, Wu, and Wang]chen_w_epl15
authorW. Chen, authorT. Wu, authorZ. Li,
authorN. Wu, authorL. Wang,
titleHeterogenous allocation of chips promotes fairness in
the ultimatum game,
journalEPL volume109 (year2015)
pages68006.
[Szolnoki et al.(2012)Szolnoki, Perc, and Szabó]szolnoki_epl12
authorA. Szolnoki, authorM. Perc,
authorG. Szabó,
titleAccuracy in strategy imitations promotes the
evolution of fairness in the spatial ultimatum game,
journalEPL volume100 (year2012)
pages28005.
[Fan et al.(2017)Fan, Zhang, Luo, and Zhang]fan_rg_pa17
authorR. Fan, authorY. Zhang, authorM. Luo,
authorH. Zhang,
titlePromotion of cooperation induced by heterogeneity of
both investment and payoff allocation in spatial public goods game,
journalPhysica A volume465
(year2017) pages454–463.
[Peng et al.(2010)Peng, Yang, Wang, Chen, and Wang]peng_d_epjb10
authorD. Peng, authorH.-X. Yang, authorW.-X.
Wang, authorG. R. Chen, authorB.-H. Wang,
titlePromotion of cooperation induced by nonuniform payoff
allocation in spatial public goods game,
journalEur. Phys. J. B volume73
(year2010) pages455–459.
[Meloni et al.(2017)Meloni, Xia, and Moreno]meloni_rsos17
authorS. Meloni, authorC.-Y. Xia,
authorY. Moreno,
titleHeterogeneous resource allocation can change social
hierarchy in public goods games,
journalR. Soc. open sci. volume4
(year2017) pages170092.
[Perc and Szolnoki(2008)]perc_pre08
authorM. Perc, authorA. Szolnoki,
titleSocial diversity and promotion of cooperation in the
spatial prisoner's dilemma game,
journalPhys. Rev. E volume77
(year2008) pages011904.
[Santos et al.(2008)Santos, Santos, and Pacheco]santos_n08
authorF. C. Santos, authorM. D. Santos,
authorJ. M. Pacheco,
titleSocial diversity promotes the emergence of
cooperation in public goods games,
journalNature volume454
(year2008) pages213–216.
[Szabó and Hauert(2002)]szabo_prl02
authorG. Szabó, authorC. Hauert,
titlePhase transitions and volunteering in spatial public
goods games,
journalPhys. Rev. Lett. volume89
(year2002) pages118101.
[Li et al.(2016)Li, Szolnoki, Cong, and Wang]li_k_srep16
authorK. Li, authorA. Szolnoki,
authorR. Cong, authorL. Wang,
titleThe coevolution of overconfidence and bluffing in the
resource competition game,
journalSci. Rep. volume6
(year2016) pages21104.
[Szolnoki and Chen(2018)]szolnoki_pre18
authorA. Szolnoki, authorX. Chen,
titleReciprocity-based cooperative phalanx maintained by
overconfident players,
journalPhys. Rev. E volume98
(year2018) pages022309.
[Wang and Szolnoki(2023)]wang2023evolution
authorC. Wang, authorA. Szolnoki,
titleEvolution of cooperation under a generalized
death-birth process,
journalPhys. Rev. E volume107
(year2023) pages024303.
[Szolnoki et al.(2009)Szolnoki, Perc, Szabó, and
Stark]szolnoki_pre09
authorA. Szolnoki, authorM. Perc,
authorG. Szabó, authorH.-U. Stark,
titleImpact of aging on the evolution of cooperation in
the spatial prisoner's dilemma game,
journalPhys. Rev. E volume80
(year2009) pages021901.
[Liu et al.(2010)Liu, Rong, Jia, and Wang]liu_rr_epl10
authorR.-R. Liu, authorZ. Rong, authorC.-X.
Jia, authorB.-H. Wang,
titleEffects of diverse inertia on scale-free-networked
prisoner's dilemma games,
journalEPL volume91 (year2010)
pages20002.
[Zhang et al.(2011)Zhang, Fu, Wu, Xie, and Wang]zhang_yl_pre11
authorY. Zhang, authorF. Fu, authorT. Wu,
authorG. Xie, authorL. Wang,
titleInertia in strategy switching transforms the strategy
evolution,
journalPhys. Rev. E volume84
(year2011) pages066103.
[Wang and Szolnoki(2023)]wang2023inertia
authorC. Wang, authorA. Szolnoki,
titleInertia in spatial public goods games under weak
selection,
journalAppl. Math. Comput. volume449
(year2023) pages127941.
[Wang et al.(2023)Wang, Zhu, and Szolnoki]wang2023conflict
authorC. Wang, authorW. Zhu,
authorA. Szolnoki,
titleThe conflict between self-interaction and updating
passivity in the evolution of cooperation,
journalChaos, Solit. and Fract. volume173
(year2023) pages113667.
[Ohtsuki and Nowak(2006)]ohtsuki_jtb06
authorH. Ohtsuki, authorM. A. Nowak,
titleThe replicator equation on graphs,
journalJ. Theor. Biol. volume243
(year2006) pages86–97.
[Nowak et al.(2004)Nowak, Sasaki, Taylor, and Fudenberg]nowak_n04b
authorM. A. Nowak, authorA. Sasaki,
authorC. Taylor, authorD. Fudenberg,
titleEmergence of cooperation and evolutionary stability
in finite populations,
journalNature volume428
(year2004) pages646–650.
[McAvoy et al.(2020)McAvoy, Allen, and Nowak]mcavoy2020social
authorA. McAvoy, authorB. Allen, authorM. A.
Nowak,
titleSocial goods dilemmas in heterogeneous societies,
journalNat. Human Behav. volume4
(year2020) pages819–831.
[Clifford and Sudbury(1973)]clifford1973model
authorP. Clifford, authorA. Sudbury,
titleA model for spatial conflict,
journalBiometrika volume60
(year1973) pages581–588.
[Cox and Griffeath(1983)]cox1983occupation
authorJ. T. Cox, authorD. Griffeath,
titleOccupation time limit theorems for the voter model,
journalAnnals Prob. (year1983)
pages876–893.
[Cox and Griffeath(1986)]cox1986diffusive
authorJ. T. Cox, authorD. Griffeath,
titleDiffusive clustering in the two dimensional voter
model,
journalAnnals Prob. (year1986)
pages347–370.
[Allen and Nowak(2014)]allen2014games
authorB. Allen, authorM. A. Nowak,
titleGames on graphs,
journalEMS Surv. Math. Sci. volume1
(year2014) pages113–151.
[Nowak et al.(2010)Nowak, Tarnita, and Wilson]nowak2010evolution
authorM. A. Nowak, authorC. E. Tarnita,
authorE. O. Wilson,
titleThe evolution of eusociality,
journalNature volume466
(year2010) pages1057–1062.
|
http://arxiv.org/abs/2307.04399v1 | 20230710080102 | The Topological Quandles up to Four Elements | [
"Mohamed Ayadi"
] | math.CO | [
"math.CO"
] |
The topological quandles up to four elements
Laboratoire de Mathématiques Blaise Pascal,
CNRS–Université Clermont-Auvergne,
3 place Vasarély, CS 60026,
F63178 Aubière, France, and University of Sfax, Faculty of Sciences of Sfax,
LAMHA, route de Soukra,
3038 Sfax, Tunisia.
[email protected]
The finite topological quandles can be represented as
n × n matrices, recently defined by S. Nelson and C. Wong.
In this paper, we first study the finite topological quandles and we show how to use these matrices to distinguish all isomorphism classes of finite topological quandles for a given cardinality n. As an application, we classify finite topological quandles with up to 4 elements.
MSC 2020: 57K12, 16T05.
Mohamed Ayadi
July 8, 2023
§ INTRODUCTION
A quandle is a set Q with a binary operation ▹ : Q × Q ⟶ Q satisfying the three
axioms
* (i) for every a ∈ Q, we have a ▹ a = a,
* (ii) for every pair a, b ∈ Q there is a unique c ∈ Q such that a = c ▹ b, and
* (iii) for every a, b, c ∈ Q, we have (a ▹ b) ▹ c = (a ▹ c) ▹ (b ▹ c).
As an example, for a group (G, ∘) and ▹ : G × G ⟶ G the operation defined by x ▹ y = x ∘ y ∘ x^-1 for all x, y ∈ G, the pair (G, ▹) is a quandle.
More on quandles can be found in <cit.>.
A quasi-poset is a pair (X, ≤), where X is a set and ≤ a quasi-order on X, that is to say a transitive and reflexive relation on X.
Recall (see e.g. <cit.>) that a topology on a finite set X is given by the
family 𝒯 of open subsets of X, subject to the three following axioms:
* ∅∈𝒯, X∈𝒯,
* The union of (a finite number of) open subsets is an open subset,
* The intersection of a finite number of open subsets is an open subset.
By Alexandroff’s theorem <cit.>, for any finite set X, there is a bijection between topologies on X and quasi-orders on X.
Any topology 𝒯 on X defines a quasi-order denoted by ≤_𝒯 on X:
x≤_𝒯y⟺ any open subset containing x also contains y.
Conversely, any quasi-order ≤ on X defines a topology 𝒯_≤ given by its upper ideals, i.e., subsets Y⊂ X such that (y∈ Y and y≤ z) z∈ Y. Both operations are inverse to each other:
≤_𝒯_≤= ≤ and 𝒯_≤_𝒯=𝒯.
Hence there is a natural bijection between topologies and quasi-orders on
a finite set X.
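To make this correspondence concrete, the short Python sketch below (our own, with hypothetical names) computes the topology 𝒯_≤ of a quasi-order as the family of its upper ideals and recovers the quasi-order ≤_𝒯 back from these open sets.

from itertools import chain, combinations

def upper_ideals(X, leq):
    """Open sets of T_<=: subsets Y such that y in Y and y <= z imply z in Y."""
    subsets = chain.from_iterable(combinations(X, n) for n in range(len(X) + 1))
    return [set(Y) for Y in subsets
            if all(z in Y for y in Y for z in X if leq(y, z))]

def quasi_order_from(X, opens):
    """x <= y iff every open set containing x also contains y."""
    return {(x, y) for x in X for y in X
            if all(y in U for U in opens if x in U)}

X = [1, 2, 3]
relation = {(1, 1), (2, 2), (3, 3), (1, 2), (1, 3)}   # 1 <= 2 and 1 <= 3
opens = upper_ideals(X, lambda a, b: (a, b) in relation)
assert quasi_order_from(X, opens) == relation          # both operations are inverse to each other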
Any quasi-order (hence any topology 𝒯 ) on X gives rise to an equivalence relation:
x ∼_𝒯y⟺( x≤_𝒯y and y≤_𝒯x ).
A finite topological space (X, ≤) will be represented by the Hasse diagram of the quotient X/∼, where ∼ is the equivalence relation defined above. Each vertex is drawn as a bubble in which all elements of the same equivalence class are represented by points.
More on finite topological spaces can be found in <cit.>.
Let (Q, ≤) be a topological space equipped with a continuous map μ : Q × Q ⟶ Q, denoted by μ(a, b) = a ▹ b, such that for every b∈ Q the mapping a↦ a ▹ b is a homeomorphism of (Q,≤). The space Q (together with the map μ) is called a topological quandle <cit.> if it satisfies for all a, b, c ∈ Q
* (i) (a ▹ b) ▹ c = (a ▹ c) ▹ (b ▹ c),
* (ii) a ▹ a = a.
Let (Q, ▹) and (Q', ▹') be two topological quandles.
A continuous map ϕ : Q ⟶ Q' is called a topological quandle homomorphism if ϕ(a ▹ b) = ϕ(a) ▹' ϕ(b), for all a, b ∈ Q.
The paper is organized as follows. We recall in Section <ref> the method of B. Ho and S. Nelson <cit.> to describe finite quandles with up to 5 elements,
and we also recall in Section <ref> how S. Nelson and C.-Y. Wong <cit.> prove that the decomposition of a finite quandle into orbits coincides with the decomposition into Q-complemented subquandles.
In Section <ref> we prove that, if Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n is a finite quandle written in its orbit decomposition, and if 𝒯=(Q, ≤) is a topological space such that 𝒯_|Q_i is the coarse topology on Q_i for all i∈ [n], then 𝒯 is Q-compatible. We then apply this result to find the finite topological quandles with up to 4 elements.
§ THE MATRIX OF A FINITE QUANDLE
Let Q={x_1, x_2, . . . , x_n} be a finite quandle with n elements. We define the
matrix of Q, denoted M_Q, to be the matrix whose entry in row i, column j is x_i ▷ x_j:
M_Q= [ x_1 ▷ x_1 x_1 ▷ x_2 ... x_1 ▷ x_n; x_2 ▷ x_1 x_2 ▷ x_2 ... x_2 ▷ x_n; . . ... .; . . ... .; . . ... .; x_n ▷ x_1 x_n ▷ x_2 ... x_n ▷ x_n ]
<cit.>
Let Q={a, b, c }. The quandle matrices for quandles of order 3 are, up to permutations of Q:
[ a a a; b b b; c c c ]
, [ a c b; c b a; b a c ]
, [ a a a; c b b; b c c ]
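The defining axioms can be checked mechanically from such a matrix. The Python sketch below (ours; the names and the integer encoding are illustrative) encodes the elements as 0, 1, 2 and tests idempotency, bijectivity of the right translations, and self-distributivity, using the convention that the entry in row i and column j of M_Q is x_i ▷ x_j.

def is_quandle(M):
    # M is an n x n table with M[i][j] = x_i ▷ x_j, entries in {0, ..., n-1}.
    n = len(M)
    idempotent = all(M[i][i] == i for i in range(n))                             # axiom (i)
    right_translations_bijective = all(
        sorted(M[i][j] for i in range(n)) == list(range(n)) for j in range(n))   # axiom (ii)
    self_distributive = all(
        M[M[i][j]][k] == M[M[i][k]][M[j][k]]
        for i in range(n) for j in range(n) for k in range(n))                   # axiom (iii)
    return idempotent and right_translations_bijective and self_distributive

# second order-3 matrix above, with (a, b, c) encoded as (0, 1, 2)
M = [[0, 2, 1],
     [2, 1, 0],
     [1, 0, 2]]
print(is_quandle(M))   # True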
Let Q={a, b, c, d }. The quandle matrices for quandles of order 4 are, up to permutations of Q:
[ a a a a; b b b b; c c c c; d d d d ]
, [ a a a a; b b b c; c c c b; d d d d ]
, [ a a a b; b b b c; c c c a; d d d d ]
, [ a a b b; b b a a; c c c c; d d d d ]
,
[ a a a a; b b d c; c d c b; d c b d ]
, [ a a b b; b b a a; d d c c; c c d d ]
, [ a d b c; c b d a; d a c b; b c a d ]
Let Q be a quandle. A subquandle X ⊂ Q is a subset of Q which is itself a quandle under ▷. Let Q be a quandle and X ⊂ Q a subquandle. We say that X is complemented in Q, or Q-complemented, if Q \ X is a subquandle of Q.
<cit.>
Let Q be a finite quandle. Then Q may be written as
Q = Q_1 ⨿ Q_2 ⨿···⨿ Q_n,
where every Q_i is Q-complemented and no proper subquandle of any Q_i is Q-complemented. This
decomposition is well-defined up to isomorphism; if Q ≈ Q', then in the decompositions
Q = Q_1 ⨿ Q_2 ⨿···⨿ Q_n, and Q' = Q'_1 ⨿ Q'_2 ⨿···⨿ Q'_m,
we have n = m and (after reordering if necessary) Q_i ≈ Q'_i.
§ REMINDER ON THE ORBIT DECOMPOSITION
Notation. Let (Q, ▷) be a finite quandle; for x' ∈ Q, we note
R_x' : Q ⟶ Q
x ⟼ x ▷ x',
and
L_x' : Q ⟶ Q
x ⟼ x' ▷ x.
(Q, 𝒯) is a finite topological quandle if and only if R_x' is a homeomorphism and L_x' is a continuous map for all x' ∈ Q, if and only if, for all x, y, x', y' ∈ Q: x ≤ x' and y ≤ y' imply x ▷ y ≤ x' ▷ y'.
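In computations this last criterion is the easiest to use: it only involves the quandle matrix and the quasi-order. A minimal Python sketch (ours; the data layout is an assumption) is:

def is_compatible(M, leq):
    # M[i][j] = x_i ▷ x_j ; leq is a set of index pairs (i, j) meaning x_i <= x_j.
    n = len(M)
    return all((M[x][y], M[xp][yp]) in leq
               for x in range(n) for xp in range(n)
               for y in range(n) for yp in range(n)
               if (x, xp) in leq and (y, yp) in leq)

# dihedral quandle of order 3 with the coarse and the discrete quasi-orders
M = [[0, 2, 1], [2, 1, 0], [1, 0, 2]]
coarse = {(i, j) for i in range(3) for j in range(3)}
discrete = {(i, i) for i in range(3)}
print(is_compatible(M, coarse), is_compatible(M, discrete))   # True True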
Let (Q, ▷) be a finite quandle. The intersection of two Q-complemented subquandles is also Q-complemented.
Let (Q, ▷) be a finite quandle and let Q_1, Q_2 be two Q-complemented subquandles.
It is clear that the binary operation ▷ : (Q_1 ∩ Q_2) × (Q_1 ∩ Q_2) ⟶ Q_1 ∩ Q_2 satisfies the two axioms (i) and (iii) of the definition of a quandle.
For x, y ∈ Q_1 ∩ Q_2, there exists z ∈ Q such that x = R_y(z), where R_y : Q ⟶ Q is defined by R_y(z) = z ▷ y, i.e., z = R^-1_y(x). Since x, y ∈ Q_1 ∩ Q_2 and the map R_y is a bijection on Q_1 (resp. on Q_2), we get z ∈ Q_1 ∩ Q_2. Hence ▷ : (Q_1 ∩ Q_2) × (Q_1 ∩ Q_2) ⟶ Q_1 ∩ Q_2 satisfies axiom (ii), so Q_1 ∩ Q_2 is a subquandle.
On the other hand,
Q = (Q_1 ∩ Q_2) ⨿ (Q_1 ∩ Q̄_2) ⨿ (Q̄_1 ∩ Q_2) ⨿ (Q̄_1 ∩ Q̄_2), where Q̄_1 = Q \ Q_1 and Q̄_2 = Q \ Q_2.
Let a ∉ Q_1 ∩ Q_2; there are then three possible cases: a ∈ Q_1 ∩ Q̄_2, or a ∈ Q̄_1 ∩ Q_2, or a ∈ Q̄_1 ∩ Q̄_2.
* If a ∈ Q̄_1 ∩ Q̄_2, we obtain
* R_a : Q̄_1 ∩ Q̄_2 ⟼ Q̄_1 ∩ Q̄_2 is a bijection,
* R_a : Q̄_1 ⟼ Q̄_1 is a bijection,
* R_a : Q̄_2 ⟼ Q̄_2 is a bijection,
* R_a : Q ⟼ Q is a bijection.
Then R_a respects all four blocks.
* If a ∈ Q̄_1 ∩ Q_2 or a ∈ Q_1 ∩ Q̄_2, the argument is similar.
Hence R_a respects Q \ (Q_1 ∩ Q_2), so we deduce that Q_1 ∩ Q_2 is a Q-complemented subquandle.
It follows that any finite intersection of Q-complemented subquandles is also Q-complemented.
Notation:
Let (Q, ▷) be a finite quandle. For a ∈ Q, we note
Q_a = ⋂ { Q' : a ∈ Q', Q' is Q-complemented }
and Ω_a = { b ∈ Q : a ∼ b },
where ∼ is the transitive closure of the relation ℛ̃ defined by:
x ℛ̃ y ⟺ there exists z ∈ Q such that (x = y ▷ z or y = x ▷ z),
i.e., for all a, b ∈ Q,
a ∼ b if and only if there exist c_1, ..., c_n ∈ Q such that a ℛ̃ c_1 ℛ̃ ... ℛ̃ c_n ℛ̃ b.
<cit.>
Let (Q, ▷) be a finite quandle. Then Ω_a and Q_a defined above are equal for any a ∈ Q.
Let (Q, ▷) be a finite quandle and a ∈ Q. According to Lemma <ref>, Q_a is a Q-complemented subquandle.
- It is clear that the binary operation ▷ : Ω_a × Ω_a ⟶ Ω_a satisfies the two axioms (i) and (iii) of the definition of a quandle.
Let x, y ∈ Ω_a; then there exists a unique z ∈ Q such that x = z ▷ y, hence x ℛ̃ z, and so z ∈ Ω_x = Ω_a. Hence the map R_x : Ω_a ⟶ Ω_a defined by R_x(y) = y ▷ x is a bijection, and Ω_a is a subquandle of Q.
Moreover, the binary operation ▷ : (Q \ Ω_a) × (Q \ Ω_a) ⟶ Q \ Ω_a satisfies the two axioms (i) and (iii) of the definition of a quandle, and for all x, y ∈ Q \ Ω_a there exists z ∈ Q such that x = z ▷ y, hence x ℛ̃ z; then necessarily z ∈ Q \ Ω_a, because otherwise x ∈ Ω_a, which is absurd. Hence the map R_x : Q \ Ω_a ⟶ Q \ Ω_a defined by R_x(y) = y ▷ x is a bijection, so Q \ Ω_a is a subquandle of Q, and Ω_a is Q-complemented.
- Since Q_a is the smallest complemented subquandle containing a, we obtain Q_a ⊆ Ω_a.
- It remains to show that Ω_a ⊆ Q_a. Let B be a Q-complemented subquandle containing a. For x ∈ B, R_x respects B; moreover, for x ∈ Q \ B, R_x also respects B.
So for all x ∈ Q, R_x and R^-1_x respect B. And since Ω_a = { P_1 ⋯ P_k a, with P_j equal to R_x_j or R^-1_x_j, x_j ∈ Q }, we get Ω_a ⊂ B.
Hence Ω_a⊂ Q_a. Consequently Ω_a=Q_a.
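The orbit decomposition can likewise be computed mechanically: gluing every x with x ▷ z and taking connected components realizes the transitive closure of ℛ̃. A possible Python sketch (ours; the union-find details are an implementation choice) is:

def orbits(M):
    # Glue x with x ▷ z for all x, z (the relation R-tilde) and take connected components.
    n = len(M)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for x in range(n):
        for z in range(n):
            parent[find(x)] = find(M[x][z])
    classes = {}
    for x in range(n):
        classes.setdefault(find(x), []).append(x)
    return list(classes.values())

# the order-4 quandle M_Q = [a a a a; b b d c; c d c b; d c b d], with a, b, c, d -> 0, 1, 2, 3
M = [[0, 0, 0, 0], [1, 1, 3, 2], [2, 3, 2, 1], [3, 2, 1, 3]]
print(orbits(M))   # [[0], [1, 2, 3]]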
§ RESULTS
In this section we prove that if Q = Q_1 ⨿ Q_2 ⨿···⨿ Q_n is a finite quandle, written in its orbit decomposition, and 𝒯 = (Q, ≤) is a topological space such that 𝒯_|Q_i is the coarse topology on Q_i for all i ∈ [n], then 𝒯 is Q-compatible. From this result we find the topological quandles with 3 and 4 elements.
§.§ The topologies of orbits of finite quandle
Let Q be a finite quandle. Then the discrete topology and the coarse topology are Q-compatible.
Let Q = (X, ▷) be a finite quandle. If 𝒯 is the discrete topology, then for all x, x', y, y' ∈ X with x ≤_𝒯 x' and y ≤_𝒯 y' we have x = x' and y = y', so x ▷ y ≤_𝒯 x' ▷ y'; hence 𝒯 is Q-compatible.
If 𝒯 is the coarse topology, then x ∼_𝒯 y for all x, y ∈ X, so for all x, x', y, y' ∈ X with x ≤_𝒯 x' and y ≤_𝒯 y' we have x ▷ y ≤_𝒯 x' ▷ y'. Hence 𝒯 is Q-compatible.
Notation. Let Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n be a finite quandle written in its orbit decomposition (see Theorem <ref>). We denote by 𝒯_Q=𝒯_Q_1···𝒯_Q_n the usual product topology of 𝒯_Q_i, i∈ [n], where 𝒯_Q_i is the coarse topology on Q_i.
Let ▷ : X × X ⟶ X be the operation of the quandle Q defined by
M_Q=[ a a a; c b b; b c c ]. Its orbit decomposition is Q=Q_1⨿ Q_2, where Q_1=[ a ] and Q_2=[ b b; c c ].
In this case 𝒯_Q is the topology whose Hasse diagram consists of the two classes {a} and {b, c}, with no relation between them.
Let Q=Q_1 ⨿ Q_2 ⨿···⨿ Q_n be a finite quandle written in its orbit decomposition, and let 𝒯 be a topology on Q.
If for all i∈ [n], 𝒯_|Q_i is the coarse topology on Q_i, then 𝒯 is Q-compatible.
Let 𝒯 be a topology on Q, such that 𝒯 _|Q_i is the coarse topology, for all i∈ [ n ]. For x∈ Q_i, we note Q_x=Q_i.
Let z, z'∈ Q such that z≤ z', then for all x∈ Q, L_x(z)=x z∈ Q_x and L_x(z' )=x z'∈ Q_x. But 𝒯_Q_i is the coarse topology, then for all a, b∈ Q_i, a∼ b, hence x z ∼ x z'. Hence the continuity of L_x for all x∈ Q is proven. Moreover z≤ z' implies that, for all a∈ Q_z, b∈ Q_z', a≤ b. In particular, R_x(z)=z x∈ Q_z and R_x(z')=z' x∈ Q_z', hence we get R_x(z)≤ R_x(z' ). Hence R_x is continuous for all x∈ Q. As 𝒯 is finite,
we therefore conclude that 𝒯 is Q-compatible.
We use this theorem to find the topological quandles with 3 and 4 elements below.
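Concretely, the classifications in the next subsections can be reproduced by brute force: enumerate all quasi-orders (equivalently, all topologies) on the underlying set and keep those satisfying the compatibility criterion of Proposition <ref>. A small Python sketch along these lines (ours; written for illustration only, and feasible only for very small n) is:

from itertools import product

def quasi_orders(n):
    # all reflexive, transitive relations on {0, ..., n-1} (equivalently, all topologies)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    for choice in product([False, True], repeat=len(pairs)):
        R = {(i, i) for i in range(n)} | {p for p, keep in zip(pairs, choice) if keep}
        if all((a, d) in R for (a, b) in R for (c, d) in R if b == c):   # transitivity
            yield R

def compatible_topologies(M):
    n = len(M)
    def ok(R):
        return all((M[x][y], M[xp][yp]) in R
                   for x, xp, y, yp in product(range(n), repeat=4)
                   if (x, xp) in R and (y, yp) in R)
    return [R for R in quasi_orders(n) if ok(R)]

# dihedral quandle of order 3: only the discrete and the coarse topologies survive
M = [[0, 2, 1], [2, 1, 0], [1, 0, 2]]
print(len(compatible_topologies(M)))   # 2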
§.§ List of the topological quandles with three elements
In the three examples below, X={a,b,c}.
- Let ▷ : X × X ⟶ X be the operation of the trivial quandle Q defined by
M_Q=[ a a a; b b b; c c c ]. All topologies on X are compatible with this quandle structure.
Indeed: let 𝒯 be a topology on X, for all x,y∈ X, x y=y, then for all x',y'∈ X such that x≤ x' and y≤ y', we obtain x y≤ x' y'.
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a c b; c b a; b a c ],
and let 𝒯=(X, ≤) be a Q-compatible topology. If there exist x ≠ y ∈ {a, b, c} such that x ≤ y, then 𝒯 is the coarse topology. In fact, suppose a ≤ b; we get
R_a(a)=a≤ R_a(b)=c, R_b(a)=c≤ R_b(b)=b and R_c(a)=b≤ R_c(b)=a, so a ≤ b implies a ≤ c ≤ b ≤ a, hence 𝒯 is the coarse topology. Similarly, replacing a, b by any x, y ∈ {a, b, c}, we find that 𝒯 is the coarse topology.
From Proposition <ref>, we conclude that in this case the only topologies on X compatible with the quandle structure are the discrete topology and the coarse topology.
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a a a; c b b; b c c ];
then, according to Theorem <ref>, the four topologies below are compatible with the quandle structure:
[Hasse diagrams of the four topologies whose restriction to each orbit is coarse: the coarse topology; the two topologies in which the point a is comparable with the class {b, c} (one in each direction); and the topology in which {a} and {b, c} are incomparable.]
Let 𝒯=(X, ≤) be a Q-compatible topology; then b ∼ c or b and c are incomparable. Indeed, if b ≤ c then R_a(b)=c≤ R_a(c)=b, and similarly if c ≤ b then R_a(c)=b≤ R_a(b)=c; hence the result.
Let 𝒯=(X, ≤) be a Q-compatible topology such that c and b are incomparable; then 𝒯 is the discrete topology. In fact, if a ≤ b then L_b(a)=c≤ b=L_b(b), which is absurd; moreover, if a ≤ c then L_c(a)=b≤ c=L_c(c), which is absurd (and the same if b ≤ a or c ≤ a). Hence 𝒯 is the discrete topology.
We conclude that the discrete topology and the above four topologies are the only Q-compatible topologies.
§.§ List of the topological quandles with four elements
In the seven examples below X={a, b, c, d}.
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a d b c; c b d a; d a c b; b c a d ]; the only topologies on X compatible with the quandle structure are the discrete topology and the coarse topology.
Indeed, let (Q, 𝒯) be a topological quandle different from the discrete topology; then there exist x ≠ y ∈ {a, b, c, d} such that x ≤ y.
If a≤ b, then R_a(a)=a≤ c=R_a(b), R_b(a)=d≤ b=R_b(b), R_c(a)=b≤ d=R_c(b), R_d(a)=c≤ a=R_d(b), L_a(a)=a≤ d=L_a(b), L_b(a)=c≤ b=L_b(b), L_c(a)=d≤ a=L_c(b) and L_d(a)=b≤ c=L_d(b). Then a∼ b∼ c∼ d, i.e., 𝒯 is the coarse topology.
The same holds if a≤ c, or a≤ d, or b≤ a, or b≤ c, or b≤ d, or c≤ a, or c≤ b, or c≤ d, or d≤ a, or d≤ b, or d≤ c: in each case 𝒯 is the coarse topology.
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a a b b; b b a a; d d c c; c c d d ],
If (Q, 𝒯) is a topological quandle, then (a∼ b and c∼ d)
or (a∼ b and c, d are incomparable) or (a, b are incomparable and c∼ d) or (a, b are incomparable and c, d are incomparable).
Indeed, if a≤ b, then R_c(a)=b≤ a=R_c(b). So a∼ b.
Similarly, if b≤ a, then a∼ b. If c≤ d, then c∼ d. If d≤ c, then c∼ d.
By Theorem <ref>, the three topologies below are Q-compatible:
[Hasse diagrams: the topology with the two classes {a, b} and {c, d} incomparable, and the two topologies in which the class {a, b} is comparable with the class {c, d} (one in each direction).]
The disjoint union of the discrete topology on {a, b} and the coarse topology on {c, d} is Q-compatible and vice versa.
Let 𝒯=(X,≤) be a topological space that differs from the coarse topology, and suppose there exist x∈{a,b} (resp. x∈{c,d}) and y∈{c,d} (resp. y∈{a,b}) such that x≤ y. Then 𝒯 is not Q-compatible.
Indeed, if 𝒯 were Q-compatible with a≤ c, then L_a(a)=a≤ b=L_a(c), which is absurd.
Conclusion: there are seven Q-compatible topologies: the three topologies above and the four below.
[Hasse diagrams: the discrete topology; the coarse topology; the topology with the single class {a, b} and the isolated points c, d; and the topology with the single class {c, d} and the isolated points a, b.]
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a a a a; b b d c; c d c b; d c b d ], then Q=Q_1⨿ Q_2, where Q_1=[ a ] and Q_2=[ b d c; d c b; c b d ]
If (Q, 𝒯) is a topological quandle, then (b∼ c∼ d or b, c, d are incomparable).
If b≤ c, then R_a(b)=b≤ c=R_a(c), R_b(b)=b≤ d=R_b(c), R_c(b)=d≤ c=R_c(c), R_d(b)=c≤ b=R_d(c), L_a(b)=a≤ a=L_a(c), L_b(b)=b≤ d=L_b(c), L_c(b)=d≤ c=L_c(c) and L_d(b)=c≤ b=L_d(c).
So b ∼ c; together with b ≤ d and d ≤ c, this gives b ∼ d, hence b ∼ c ∼ d.
By Theorem <ref>, the three topologies below are Q-compatible:
[Hasse diagrams: the topology with the class {b, c, d} and the isolated point a, incomparable, and the two topologies in which a is comparable with the class {b, c, d} (one in each direction).]
Let (Q, 𝒯) be a topological quandle such that there exists x ∈ {b, c, d} with a ≤ x or x ≤ a; then 𝒯 is the coarse topology. Indeed, if a ≤ b, then
R_a(a)=a≤ a=R_a(b), R_b(a)=a≤ b=R_b(b), R_c(a)=c≤ d=R_c(b), R_d(a)=a≤ c=R_d(b), L_a(a)=a≤ a=L_a(b), L_b(a)=b≤ b=L_b(b), L_c(a)=c≤ d=L_c(b) and L_d(a)=d≤ c=L_d(b). So a≤ d≤ a and c≤ a≤ c, then c∼ d, then a∼ c and b∼ c∼ d, so a∼ b∼ c∼ d.
Conclusion: there are five Q-compatible topologies: the coarse topology, the discrete topology and the three topologies described above.
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a a b b; b b a a; c c c c; d d d d ], then Q=Q_1⨿ Q_2⨿ Q_3, where Q_1=[ a a; b b ], Q_2=[ c ] and Q_3=[ d ]
If (Q, 𝒯) is a topological quandle, then (a∼ b or a, b are incomparable).
Indeed, if a≤ b, then R_d(a)=b≤ a=R_d(b), so a∼ b. Same thing if b≤ a then a∼ b.
By Theorem <ref>, any topology whose restriction to each of the orbits {a, b}, {c}, {d} is coarse (equivalently, any topology in which a ∼ b) is Q-compatible.
If a, b are incomparable: for all x∈{a, b},
* if x≤ d, then L_a(x)=a≤ b=L_a(d), which is absurd,
* if d≤ x, then L_a(d)=b≤ a=L_a(x), which is absurd,
* if x≤ c, then L_a(x)=a≤ b=L_a(c), which is absurd,
* if c≤ x, then L_a(c)=b≤ a=L_a(x), which is absurd.
Therefore, if (Q, 𝒯) is a topological quandle in which a and b are incomparable, then 𝒯 is one of the following:
[Hasse diagrams: the two topologies in which c and d form a two-element chain (in either order) with a and b isolated; the discrete topology; and the topology with the single class {c, d} and the isolated points a, b.]
It is clear that the above topologies are Q-compatible.
Conclusion: The Q-compatible topologies are the four topologies above and all topologies on X such that a and b are equivalent.
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a a a b; b b b c; c c c a; d d d d ], then Q=Q_1⨿ Q_2, where Q_1=[ a a a; b b b; c c c ] and Q_2=[ d ].
If (Q, 𝒯) is a topological quandle, then (a∼ b∼ c or a, b, c are incomparable). Indeed:
if a≤ b, then R_d(a)=b≤ c=R_d(b), then R_d(b)=c≤ a=R_d(c). So a≤ b≤ c≤ a, then a∼ b∼ c. Similarly for x, y∈{ a, b, c}, if x≤ y then a∼ b∼ c. Then the result.
If (Q, 𝒯) is a topological quandle in which a, b, c are pairwise incomparable, then 𝒯 is the discrete topology. Indeed, if there exists x ∈ {a, b, c} such that x ≤ d or d ≤ x, then L_a(x)=a≤ b=L_a(d) or L_a(d)=b≤ a=L_a(x), which is absurd.
Conclusion: The Q-compatible topologies are the five topologies below.
[Hasse diagrams: the coarse topology; the discrete topology; the topology with the class {a, b, c} and the isolated point d, incomparable; and the two topologies in which d is comparable with the class {a, b, c} (one in each direction).]
- Let ▷ : X × X ⟶ X be the quandle structure defined by
M_Q=[ a a a a; b b b c; c c c b; d d d d ],
then Q=Q_1⨿ Q_2⨿ Q_3, where Q_1=[ a ], Q_2=[ b b; c c ] and Q_3=[ d ].
If (Q, 𝒯) is a topological quandle, then b ∼ c or b, c are incomparable. Indeed,
if b ≤ c then R_d(b)=c≤ b=R_d(c), so b ∼ c.
By Theorem <ref>, any topology whose restriction to each of the orbits {a}, {b, c}, {d} is coarse (equivalently, any topology in which b ∼ c) is Q-compatible.
Let (Q, 𝒯) be a topological quandle in which b and c are incomparable. Then, for all x ∈ {a, b, c}, x and d are incomparable: otherwise, if x ≤ d or d ≤ x, then L_b(x)=b≤ c=L_b(d) or L_b(d)=c≤ b=L_b(x), which is absurd. Moreover, if a ≤ b or a ≤ c, then R_d(a)=a≤ c=R_d(b) or R_d(a)=a≤ b=R_d(c), so a ≤ b if and only if a ≤ c (and similarly b ≤ a if and only if c ≤ a). We deduce that if (Q, 𝒯) is a topological quandle in which b and c are incomparable, then 𝒯 is the discrete topology or one of the two topologies below:
[Hasse diagrams: a below both b and c with d isolated, and a above both b and c with d isolated.]
Conclusion: the Q-compatible topologies are the topologies in which b and c are equivalent, the two topologies above, and the discrete topology.
- Let ▷ : X × X ⟶ X be the trivial quandle structure defined by
M_Q=[ a a a a; b b b b; c c c c; d d d d ], all topologies on X are compatible with this quandle structure.
Let Q = Q_1 ⨿ Q_2 ⨿···⨿ Q_n be a finite quandle with at most four elements, where the Q_i are the orbits, and let 𝒯=(Q, ≤) be a topological space.
We have observed that if 𝒯 is Q-compatible then, for all i∈ [n], 𝒯_|Q_i is the coarse or the discrete topology.
Does this remark remain true for arbitrary finite quandles? This is not the case. Indeed, let ▷ : X × X ⟶ X be the quandle structure defined by M_Q=[ a a a a a a; b b b b b b; d e c c c c; c f d d d d; f c e e e e; e d f f f f ]
- In the first step, we prove that Q is well defined. It is clear that the operation satisfies the conditions (i) and (ii) of the definition of a quandle, moreover we have R_c=R_d=R_e=R_f=Id and:
R_a(c a)=c=d a=R_a(c) R_a(a) R_a(d a)=d=c a=R_a(d) R_a(a)
R_a(e a)=e=f a=R_a(e) R_a(a) R_a(f a)=f=e a=R_a(f) R_a(a)
R_a(c b)=f=d b=R_a(c) R_a(b) R_a(d b)=e=c b=R_a(d) R_a(b)
R_a(e b)=d=f b=R_a(e) R_a(b) R_a(f b)=c=e b=R_a(f) R_a(b)
and
R_b(c a)=f=e a=R_b(c) R_b(a) R_b(d a)=e=f a=R_ b(d) R_b(a)
R_b(e a)=d=c a=R_b(e) R_b(a) R_b(f a)=c=d a=R_b(f) R_b(a)
R_b(c b)=c=e b=R_b(c) R_b(b) R_b(d b)=d=f b=R_b(d) R_b(b)
R_b(e b)=e=c b=R_b(e) R_b(b) R_b(f b)=f=d b=R_b(f) R_b(b)
then satisfies the condition (iii) of the definition of a quandle. So (Q, ) is a quandle.
- Secondly, if
𝒯 = [Hasse diagram: the two classes {c, d} and {e, f} and the isolated points a and b, with no relations between them],
we prove that 𝒯 is Q-compatible. We have R_c=R_d=R_e=R_f=Id, and L_a(x)=a and L_b(x)=b for all x∈{ a, b, c, d, e, f }, so it suffices to show that R_a and R_b are isomorphisms and that L_c, L_d, L_e, L_f are continuous maps.
We have c∼ d and e∼ f, we obtain:
R_a(c)=d∼ c=R_a(d), R_b(c)=e∼ d=R_b(d),
R_a(e)=f∼ e=R_a(f), R_b(e)=c∼ d=R_b(f),
then R_a and R_b are isomorphisms.
Moreover
L_c(a)=d, L_c(b)=e, L_d(a)=c, L_d(b)=f, L_e(a)=f,
L_e(b)=c, L_f(a)=e, L_f(b)=d,
and L_c(x)=c, L_d(x)=d, L_e(x)=e and L_f(x)=f, for all x∈{c, d, e, f}.
Then L_x is a continuous map for all x∈{a, b, c, d, e, f}.
So 𝒯 is Q-compatible.
Acknowledgements: The authors would like to thank Mohamed Elhamdadi for useful suggestions and comments.
Conflicts of interest: none
§ REFERENCES
P. Alexandroff, Diskrete Räume, Rec. Math. Moscou, n. Ser. 2 (1937), no. 3, 501–519.
M. Ayadi, D. Manchon, Doubling bialgebras of finite topologies, Letters in Mathematical Physics 111 (2021), 1–23.
M. Ayadi, Twisted pre-Lie algebras of finite topological spaces, Communications in Algebra 50 (2022), 2115–2138.
B. Ho, S. Nelson, Matrices and finite quandles, Homology, Homotopy and Applications 7 (2005), 197–208.
F. Fauvet, L. Foissy, D. Manchon, The Hopf algebra of finite topologies and mould composition, Ann. Inst. Fourier 67 (2017), no. 3, 911–945.
S. Nelson, C. Wong, On the orbit decomposition of finite quandles, Journal of Knot Theory and Its Ramifications 15 (2006), 761–772.
R. L. Rubinsztein, Topological quandles and invariants of links, Journal of Knot Theory and Its Ramifications 16 (2007), 789–808.
P. Lopes, D. Roseman, On finite racks and quandles, Communications in Algebra 34 (2006), 371–406.
R. E. Stong, Finite topological spaces, Trans. Amer. Math. Soc. 123 (1966), 325–340.
A. K. Steiner, The lattice of topologies: structure and complementation, Trans. Amer. Math. Soc. 122 (1966), 379–398.
R. S. Vaidyanathaswamy, Set topology, Chelsea, New York (1960).
D. N. Yetter, Quandles and monodromy, Journal of Knot Theory and Its Ramifications 12 (2003), 523–541.
arXiv:2307.04450v1 [hep-ph] (also hep-ex, nucl-th), 10 July 2023
Toward a generative modeling analysis of CLAS exclusive 2π photoproduction
T. Alghamdi, Y. Alanazi, M. Battaglieri, L. Bibrzycki, A. V. Golda, A. N. Hiller Blin, E. L. Isupov, Y. Li, L. Marsicano, W. Melnitchouk, V. I. Mokeev, G. Montana, A. Pilloni, N. Sato, A. P. Szczepaniak, T. Vittorini
JLAB-THY-23-3881
[email protected]
AI-supported algorithms, particularly generative models, have been successfully used in a variety of different contexts.
In this work, we demonstrate for the first time that generative adversarial networks (GANs) can be used in high-energy experimental physics to unfold detector effects from multi-particle final states, while preserving correlations between kinematic variables in multidimensional phase space.
We perform a full closure test on two-pion photoproduction pseudodata generated with a realistic model in the kinematics of the Jefferson Lab CLAS experiment.
The overlap of different reaction mechanisms leading to the same final state associated with the CLAS detector's nontrivial effects represents an ideal test case for AI-supported analysis.
Uncertainty quantification performed via bootstrap provides an estimate of the systematic uncertainty associated with the procedure.
The test demonstrates that GANs can reproduce highly correlated multidifferential cross sections even in the presence of detector-induced distortions in the training datasets, and provides a solid basis for applying the framework to real experimental data.
Toward a generative modeling analysis of CLAS exclusive 2π photoproduction
T. Vittorini (ORCID: 0009-0002-4390-5670)
August 12, 2023
§ INTRODUCTION
Photoproduction of two pions, with photon energies in the few-GeV range, is an important process in hadron spectroscopy.
It has been widely used to address
several fundamental quests,
such as the `missing baryons' problem, and to demonstrate that
multiparticle final states are necessary to determine the spectrum.
While copious data are available for single-pion photoproduction, and the correspondent phenomenology is well understood, the addition of a third particle in the final state makes the description of this reaction considerably more complicated.
At fixed photon energy, the unpolarized single-pion photoproduction cross section is described by a single independent variable, while for two pions three additional variables are needed.
At beam energies of a few GeV, the highest statistics data sample is available from the Jefferson Lab Hall B CLAS experiment <cit.>.
Even in this case, some bins in the multidimensional space are unpopulated or subject to large statistical fluctuations.
This results in large uncertainties in extracting the underlying reaction mechanisms.
The problem has been addressed by studying one or two variables at a time, while integrating over the others.
During integration, correlations between variables, which in turn contain relevant physics information, are partially lost, making the results
strongly model dependent.
In this context, generative models based on machine learning (ML), which learn the original data distribution and create new so-called synthetic data that mimic the original distribution, can provide new opportunities for extracting the physics information preserving correlations.
Furthermore, these models can provide another way to extract the `true' values from experimental data removing detector effects, with a procedure known as unfolding.
Recently, an event-level unfolding analysis using generative adversarial networks (GANs) in inclusive electroproduction was performed <cit.>.
The analysis was able to reconstruct accurately single-variable cross sections.
Here, we extend our analysis framework to a multiparticle final state,
demonstrating for the first time that GANs can be used to reproduce scattering reactions in a higher dimensional phase space.
Specifically, we optimize our ML analysis framework to the case of two-pion photoproduction at CLAS kinematics.
This study serves as an excellent testing ground for evaluating the effectiveness of the ML analysis framework in a highly nontrivial case.
The presence of baryon and meson resonances with diverse production mechanisms, which overlap within a limited phase space, generate intricate structures and correlations.
Moreover, the CLAS detector's highly non-uniform response introduces additional complexities and distortions, adding another layer of complication to the analysis.
To test and validate the framework, we generate Monte Carlo (MC) pseudodata with a realistic model of two-pion photoproduction.
We produce a synthetic copy with an “unfolding” GAN trained on pseudodata that incorporate detector effects through GEANT simulations <cit.>.
This would be equivalent to train the GAN with experimental data.
The detector effects are unfolded using a “detector-simulation” GAN, independently trained on a second MC pseudodata sample generated according to phase space and passed through the GEANT model of the detector.
We test the quality of the procedure by a quantitative comparison between the generated MC data and its synthetic copy.
This closure test, based on MC pseudodata, is a necessary step before applying our analysis framework to experimental data.
The paper is organized as follows: in Sec. <ref> we review the importance of two-pion photoproduction in hadron spectroscopy and provide a detailed description of the kinematics.
In Sec. <ref> we describe the MC framework used to generate pseudodata and incorporate the CLAS detector response.
In Sec. <ref> we present the ML framework used for reproducing the detector effects and unfold the `true' distributions from the reconstructed pseudodata.
The GAN results are reported in Sec. <ref>, where we compare the generated events with the synthetic copy.
Finally, in Sec. <ref> we summarize the procedure and outline work in progress to extend the current framework to the analysis of real CLAS data from Jefferson Lab.
§ TWO-PION PHOTOPRODUCTION
§.§ The physics case
The ππ N final state is one of the largest contributors to the total photoproduction cross section off protons at center-of-mass (CM) energies W≲ 2.5 GeV.
Studies of this final state have considerably extended the available information on the spectrum of the excited states of the nucleon (N^*) and their photoexcitation amplitudes.
The quantum numbers of these resonances can be assessed by studying the correlations between the invariant mass and the angular dependencies of their decay products.
Theoretical estimates based on phenomenological
approaches <cit.>, continuum Schwinger methods <cit.> as well as from first principles within lattice QCD calculations <cit.>,
have predicted more states than apparently observed in experiments (for reviews, see Refs. <cit.>), which is referred to as the `missing baryons' problem.
A strategy to improve the sensitivity to the most elusive states is to impose consistency constraints by performing combined analyses of several final states at once, with ππ N playing a pivotal role for the resonances heavier than 1.6 GeV.
This allows one to disentangle process-dependent nonresonant contributions, and extract the resonance properties in a nearly model-independent manner <cit.>.
Furthermore, combining photoproduction and electroproduction data has recently proven to be effective in identifying overlapping resonances with the same quantum numbers, as in the case of the N(1720) and N'(1720) states <cit.>.
In the same reaction, by looking at the invariant mass distribution of the ππ pair, one can study meson resonances, such as the ρ or the f_2(1270).
While the properties of these resonances are well known, a detailed understanding of their production mechanisms is still missing.
At low W ≲ 2 GeV one can study how each N^* state contributes to the meson production process.
At higher energies, above the N^* resonance region, the reaction is well described in terms of Regge theory <cit.>.
The two energy regimes are smoothly connected, making it nontrivial to study the intermediate region rigorously.
A formalism to do so has been proposed recently for the production of single π or η mesons <cit.>.
The extension to two-pseudoscalar final states requires having the full multidimensional dependence under control <cit.>.
In particular, a complete understanding of meson production mechanisms in the ππ N final state, where resonances are well known, is necessary before facing the more complicated ηπ N and η^'π N channels, where exotic hadrons are expected to appear <cit.>.
§.§ γ p →π^+ π^-p kinematics
Measurement of the three-body final state in two-pion photoproduction represents a significant challenge to experiment.
Recently a large body of data
on π^+π^-p photoproduction observables has become available from measurements by the CLAS Collaboration, with W ≤ 2.9 GeV <cit.>.
For a given collision energy, the differential cross section for this process depends on five independent variables, which can be chosen to be the invariant masses of the two pions, M_π^+π^-, and the proton-π^- pair, M_pπ^-, and three angles in the CM frame.
Two of the angles are the polar angle θ_π^+, with the z-axis along the photon three-momentum, and the angle α_[π^+ p][π^-p'] between the plane containing the initial target proton p and π^+ three-momenta and the plane containing the π^- and recoiling proton p' three-momenta.
An equivalent choice would replace θ_π^+ with the invariant momentum transferred t_π^+, defined as the difference squared between the photon and π^+ four-momenta.
The fifth variable ϕ is the azimuthal angle of π^- with respect to the plane containing the photon three-momentum and the polarization vector, and is relevant only in experiments with polarized beam or target. For unpolarized data, one can still define ϕ by pointing the polarization vector in an arbitrary direction, resulting in a ϕ-independent cross section.
Other possible choices for variables are M_pπ^+ (invariant mass of the proton-π^+ pair), t_π^- (momentum transferred between photon and π^-), t (momentum transferred between target and recoil protons), or cosθ (cosine of the angle between target and recoil protons in the CM frame).
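For reference, the invariant variables above are simple functions of the particle four-momenta. The following Python sketch (ours; the array layout (E, p_x, p_y, p_z) and all names are assumptions, and the angle α, which requires a boost to the CM frame, is omitted) shows the computation:

import numpy as np

def lorentz_sq(p):
    # p = (E, px, py, pz); returns the invariant p·p with metric (+, -, -, -)
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def kinematic_variables(k_gamma, p_target, q_pip, q_pim, p_recoil):
    return {
        "M2_pip_pim": lorentz_sq(q_pip + q_pim),       # two-pion invariant mass squared
        "M2_p_pim":   lorentz_sq(p_recoil + q_pim),    # proton-pi- invariant mass squared
        "t_pip":      lorentz_sq(k_gamma - q_pip),     # momentum transfer to the pi+
        "t":          lorentz_sq(p_target - p_recoil)  # momentum transfer between protons
    }

m_p = 0.938
k_gamma  = np.array([3.4, 0.0, 0.0, 3.4])      # 3.4 GeV photon along z
p_target = np.array([m_p, 0.0, 0.0, 0.0])      # proton at rest
print(np.sqrt(lorentz_sq(k_gamma + p_target)))   # CM energy W of the gamma-p system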
Multidimensional analyses are becoming standard, albeit computationally difficult, in modern high statistics experiments <cit.>.
However, some specific reactions can suffer from limited statistics.
In particular, the direct extraction of π^+ π^- p photoproduction events at a given W value, on a 5D grid (or 4D, if integrated over the angle ϕ) with a bin size acceptable for physics analyses, is quite challenging.
Even the highest statistics π^+ π^- p photoproduction sample collected with CLAS <cit.> results in a limited number of counts in the 4D cells (typically <10 events per cell).
In Ref. <cit.>, theoretical curves were fitted to the marginal 1D distributions, determined by integrating the acceptance- and efficiency-corrected 5D distribution over the remaining four variables.
This procedure largely washes out the correlations present in the original data, leading to a significant loss of relevant information contained in the joint distribution.
In this paper we aim to overcome this problem with ML techniques.
To illustrate this, in Fig. <ref> we show two examples of 2D distributions and their 1D projections, as measured in CLAS experiment without efficiency corrections <cit.>.
From these distributions one immediately sees the presence of intermediate resonances that appear as enhancements in the invariant mass of the system in which they decay.
For example, the band at M^2_p π^+≃ 1.5 GeV^2 corresponds to the Δ(1232) baryon resonance, which appears as an intermediate unstable state in the reaction γ p →Δ^++π^- → p π^+ π^-.
The band centered at M^2_π^+ π^-≃ 0.6 GeV^2 corresponds instead to the ρ(770) meson resonance, in the reaction γ p → p ρ^0 → p π^+π^-.
The two resonances are clearly visible as bumps in the respective 1D projections.
Looking at 1D projections only, one can easily miss the presence of a resonance if the relevant invariant mass distribution is not explicitly considered.
This is an example of loss of information that is contained in correlations.
Moreover,
because of quantum interference, the production of ρ^0 and Δ^++ are not independent processes, and it is impossible to associate one event exclusively with either process.
This interference appears in the correlations between the invariant masses, and can be partially lost in the 1D projections.
§.§ Two-pion photoproduction with CLAS
The CLAS spectrometer in Hall B at Jefferson Lab was based on a ∼ 1.25 T toroidal magnet which bends charged particles produced in the hadronic interaction in the polar angle θ_lab (with the z-axis along the photon beam), while preserving the azimuthal angle ϕ_lab.
The polarity of the field determined if positive/negative charges were bent towards/away from the beam line into the acceptance of the detector.
A system of three layers of multi-wires drift chambers <cit.> provided momentum information with the resolution, σ_p/p, ranging from 0.5 to 1.0%, depending on the kinematics.
Charged hadron identification was obtained by time-of-flight scintillators <cit.>.
Photoproduction experiments were conducted with a bremsstrahlung photon beam produced by the CEBAF continuous electron beam impinging on 8 × 10^-5 radiation lengths thickness gold foil.
A bremsstrahlung tagging system <cit.> with a photon energy resolution of 0.1% was used to measure the photon energy in each recorded event.
The target cell was a 4 cm in diameter and 40 cm long Mylar cylinder, filled with liquid hydrogen at 20.4 K.
The experimental conditions reported in this paper, and simulated in the framework described in Sec. <ref>, correspond to the experiment that ran in CLAS in 2004.
During the experiment, the torus field was such that positive particles were bent away from the beam line.
The detector geometrical acceptance for each positive particle in the relevant kinematic region was about 40% and somewhat less for negative particles (bent towards the beamline and out of the detector acceptance).
The primary electron beam energy was 4.02 GeV, providing a tagged photon beam in the energy range from 0.8 to 3.8 GeV.
For this analysis we focus on the highest energy region, 3.0–3.8 GeV, that was analyzed in Ref. <cit.>.
The exclusive reaction γ p →π^+ π^-p was isolated by detecting the proton and the π^+ in the CLAS spectrometer, while the π^- was reconstructed from detected particle four-momenta using the missing-mass technique.
In this way, the exclusivity of the reaction was ensured, keeping the contamination from the multipion background to a minimum level.
Only events within a fiducial volume were retained in the analysis, in order to avoid the regions at the edge of the detector acceptance.
Cuts were defined on the minimum proton momentum and the hadron minimum and maximum polar angle.
After all the cuts, approximately 40 M events were identified as produced in exclusive two-pion photoproduction, making the dataset the largest statistics sample of this reaction in the above photon energy range.
Details of the analysis can be found in Ref. <cit.>.
§ MC SIMULATION FRAMEWORKS
In this section we describe the simulation frameworks used to perform the closure test.
Pseudodata corresponding to two-pion photoproduction in the kinematics of the experiment were generated using two different MC event generators that produce the four-momenta of the final state particles.
A realistic GEANT simulation was used to reproduce the finite resolution and limited acceptance of the CLAS detector.
Detector effects were assessed with a first MC generator based on a pure phase-space distribution.
To perform the closure test, we deployed a second MC generator based on a realistic physics model.
The use of two different MC generators minimizes the model dependence in the extraction of the original information and mimics a real situation, where the detector effects are estimated with simulations that are similar but not identical to the experimental distributions.
§.§ Two-pion event generators
The two MC generators simulate the interaction of an incoming unpolarized photon beam with a bremsstrahlung spectrum, in the energy range 3.0–3.8 GeV, with a target proton at rest.
With the choice of variables described in Sec. <ref>, the yields are proportional to the differential cross section, and thus to the squared of the production amplitude A summed over polarizations,
d^5σ/dM^2_pπ^- dM^2_π^+π^- dt_π^+ dα_[π^+ p][π^-p'] dϕ
∝[(W^2 - (M_pπ^- + m_π)^2)(W^2 - (M_pπ^- - m_π)^2)]^-1/2
×∑_pol|A (M^2_pπ^+,M^2_π^+π^-,cosθ_π^-,α_[π^+ p][π^-p'])|^2 .
The first MC generator, referred to as phase space or PS-MC, distributes final state events according to the π^+ π^-p phase space.
This corresponds to assuming that the production amplitude is a constant.
This is clearly unrealistic since, as discussed above, two-pion photoproduction has a much more complicated structure.
However, it has the advantage of being well-defined, agnostic to physics models, and distributes events uniformly across the full reaction kinematics.
The 1D-projected PS-MC event distributions are shown in Fig. <ref>, while the 2D distributions are illustrated in Fig. <ref>.
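For illustration only (this is not the CLAS event generator), a minimal sketch of what "distributed according to three-body phase space" means for the two invariant-mass variables is an accept-reject sampling that is flat inside the Dalitz boundary at fixed W, with the boundary given by the standard two-body kinematic limits; all numerical values and names below are ours:

import numpy as np

MP, MPI = 0.938, 0.1396              # proton and charged-pion masses in GeV

def m23sq_limits(W, m12sq, m1, m2, m3):
    # kinematic limits of M^2(2,3) at fixed M^2(1,2) for W -> 1 2 3 (standard formulas)
    m12 = np.sqrt(m12sq)
    E2 = (m12sq - m1**2 + m2**2) / (2.0 * m12)     # energies in the (1,2) rest frame
    E3 = (W**2 - m12sq - m3**2) / (2.0 * m12)
    p2 = np.sqrt(max(E2**2 - m2**2, 0.0))
    p3 = np.sqrt(max(E3**2 - m3**2, 0.0))
    return (E2 + E3)**2 - (p2 + p3)**2, (E2 + E3)**2 - (p2 - p3)**2

def sample_dalitz(W, n, seed=0):
    # accept-reject sampling, flat in (M^2_{p pi-}, M^2_{pi+ pi-}); 1 = p, 2 = pi-, 3 = pi+
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        m12sq = rng.uniform((MP + MPI)**2, (W - MPI)**2)     # M^2_{p pi-}
        m23sq = rng.uniform((2.0 * MPI)**2, (W - MP)**2)     # M^2_{pi- pi+}
        lo, hi = m23sq_limits(W, m12sq, MP, MPI, MPI)
        if lo <= m23sq <= hi:
            out.append((m12sq, m23sq))
    return np.array(out)

events = sample_dalitz(W=2.5, n=1000)
print(events.mean(axis=0))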
The second MC event generator, which we refer to as realistic or RE-MC, considers the amplitude squared as an incoherent sum of the three dominant intermediate resonances observed,
γ p →(p ρ^0, Δ^++π^- , Δ^0π^+ )→π^+ π^-p,
added to a ∼ 10% constant that mimics the nonresonant two-pion photoproduction contribution.
Each process has been weighted with the corresponding contribution to the total cross section as reported in Ref. <cit.>.
The angular distributions relative to resonance production are parametrized from measured differential cross sections reported in the same database.
The decays ρ→ππ and Δ→ p π are described using the correct spin structure with the decay matrix elements detailed in Ref. <cit.>.
The resulting 1D and 2D projections for events generated by RE-MC are shown in Figs. <ref> and <ref>, respectively.
We note that this model neglects the interference terms between the intermediate resonances.
Despite this, the resulting distribution provides a reasonable description of the experimental data, showing resonance structures in the invariant masses and the correct angular behavior of particles in the final states.
§.§ CLAS detector simulation
The CLAS detector response has been simulated using the standard GEANT Monte Carlo simulation package, GSIM, used by the CLAS Collaboration <cit.>.
It consists of a central steering and control package that calls a number of independent detector geometry and response packages.
A post-processing code (GSIM-Post-Processor or GPP) has been used to fine tune the GSIM output to match the tails of the experimental resolution and other effects, such as the detector's dead channels, not described by the idealized GEANT-based simulation.
The GSIM output has been fed to the same reconstruction code, RECSIS, used to process experimental data.
We will refer to REC or detector-level events to identify the set of pseudodata as processed by the detector simulation, while GEN or vertex-level will identify the `true' events as generated by the MC code.
As reported in Sec. <ref>, the CLAS detector has a nonuniform acceptance, reduced in the azimuthal angle ϕ_lab (around the beam) by the presence of the six coils of the toroidal magnet, and in the polar angle θ_lab (with respect to the beam direction) by the limited area covered by the drift chambers, calorimeter and time-of-flight systems.
A further limitation concerns the minimum accepted momenta of charged hadrons, due to the energy loss in materials crossed along the track and to the effect of the toroidal magnetic field that bends low-momentum particles out of the detector acceptance.
The limited CLAS acceptance results in a reduced yield in REC with respect to GEN events, since not all generated events are reconstructed.
The effect of the CLAS acceptance on the π^+ variables in the laboratory frame is shown in Fig. <ref>.
Like any detector, CLAS has finite resolution, which `smears' the measured kinematic variables, resulting in a difference between REC and GEN even when the event is accepted.
The smearing affects the reconstructed three-momenta of any detected particle within the CLAS acceptance with a distortion depending on the three-momentum of the particle.
Figure <ref> shows the resolution on the detected (REC) π^+ momentum and polar angle as a function of the `true' (GEN) momentum, along with the projections in 1D corresponding to the CLAS relative momentum and angular resolution.
Fitting the two distributions with a double-Gaussian shape, we obtained δ p / p ∼ 0.8% and δθ / θ∼ 0.5%. A similar smearing affects the kinematic variables of the detected proton.
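A double-Gaussian fit of this kind can be reproduced with standard tools; the sketch below (ours, with synthetic placeholder numbers rather than the actual CLAS resolutions) fits a narrow core plus a wider tail with a common mean:

import numpy as np
from scipy.optimize import curve_fit

def double_gauss(x, a1, mu, s1, a2, s2):
    # narrow core plus wider tail, sharing the same mean
    return (a1 * np.exp(-0.5 * ((x - mu) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu) / s2) ** 2))

# placeholder for (p_REC - p_GEN)/p_GEN of accepted pi+ tracks (synthetic numbers)
rng = np.random.default_rng(1)
dp_over_p = np.concatenate([rng.normal(0.0, 0.008, 80000),   # core
                            rng.normal(0.0, 0.020, 8000)])   # tail
counts, edges = np.histogram(dp_over_p, bins=200, range=(-0.06, 0.06))
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), 0.0, 0.01, 0.1 * counts.max(), 0.03]
popt, _ = curve_fit(double_gauss, centers, counts, p0=p0)
print("core sigma = %.4f, tail sigma = %.4f" % (popt[2], popt[4]))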
The resolution of the CLAS detector is sufficiently high so as to allow the use of the missing mass technique to identify the exclusive two-pion reaction against the multipion background.
The technique uses knowledge of the initial state and of the detected particles to calculate the invariant mass of the undetected system to fulfill energy-momentum conservation, within detector resolution.
If all particles are detected, the missing mass is zero.
If a single particle is undetected, its mass appears as a peak in the missing mass spectrum.
If two or more particles are lost, the missing mass of the system is unconstrained and does not peak, but rather distributes smoothly.
The technique is only applicable if the experimental resolution is sufficient to disentangle the missing mass peak from this multiparticle background.
Clearly, the more particles are detected, the lower is the resolution for the missing mass due the error propagation, limiting the validity of the technique to reactions with a small number of particles in the final state.
When the missing particle has been identified, its four-momentum is determined by energy and momentum conservation, and the final state can be fully reconstructed.
In two-pion photoproduction, the requirement of at most a single undetected particle corresponds to the following topologies (missing particle in parentheses): pπ^+(π^-), pπ^-(π^+), π^+π^-(p) and π^+π^-p (all three detected).
Considering the CLAS acceptance, the yield of different topologies is quite different, with a ratio of (100 :37:30:35) for the respective topologies.
Since the pπ^+(π^-) is by far the dominant contribution to REC data, we focus on this topology, although similar conclusions also hold for the others.
Each topology is in one-to-one correspondence with different areas of the allowed phase space, and a combination of different topologies would therefore extend the kinematic coverage of the measurement, mitigating the effect of the limited detector acceptance.
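The missing-mass computation itself is a one-liner on four-momenta. The sketch below (ours; the toy numbers are illustrative and not a real CLAS event) returns MM² of the undetected system, which for genuine exclusive pπ⁺(π⁻) events peaks at m_π² ≈ 0.0195 GeV²:

import numpy as np

def missing_mass_sq(k_gamma, p_target, detected):
    # MM^2 = (k_gamma + p_target - sum of detected four-momenta)^2, metric (+, -, -, -)
    p_miss = k_gamma + p_target - sum(detected)
    return p_miss[0]**2 - np.sum(p_miss[1:]**2)

m_p, m_pi = 0.938, 0.1396
k_gamma  = np.array([3.4, 0.0, 0.0, 3.4])
p_target = np.array([m_p, 0.0, 0.0, 0.0])
# toy "detected" proton and pi+ tracks (illustrative numbers, not a real event)
p_proton = np.array([np.hypot(1.2, m_p), 0.3, 0.1, np.sqrt(1.2**2 - 0.3**2 - 0.1**2)])
p_piplus = np.array([np.hypot(1.0, m_pi), -0.2, 0.0, np.sqrt(1.0**2 - 0.2**2)])
print(missing_mass_sq(k_gamma, p_target, [p_proton, p_piplus]))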
Figure <ref> shows the missing mass distribution of the pπ^+(π^-) topology.
This exclusive final state is identified by selecting events with missing mass in the peak.
Since these simulations only contain the two-pion final state, no multiparticle background populates the plot.
The equivalent distribution for data shows a significant multipion background <cit.> that populates the positive side of the missing mass spectrum, and is rejected during the analysis to assure the reaction exclusivity.
§ GAN-BASED UNFOLDING METHODOLOGY
GANs, a type of neural networks that have gained significant attention in recent years, are powerful generative models highly effective in generating high-quality, realistic data in various fields <cit.>.
The architecture of a typical GAN involves a generator network that learns to produce data and a discriminator network that learns to differentiate between the generated and reference data.
The two networks are trained alternately in a competitive setting, where the generator tries to produce more realistic data to fool the discriminator, and the discriminator tries to correctly identify the generated data.
This iterative process leads to the generation of data that are progressively more realistic, with the ultimate goal of producing synthetic data that are indistinguishable from the reference data.
GANs have been widely applied in many domains, such as image synthesis <cit.>, text generation <cit.>, music composition <cit.>, and videos <cit.>, and have demonstrated impressive results. In image synthesis, GANs have been used to generate highly realistic images visually indistinguishable from real images, which has numerous practical applications in fields such as gaming, film, and art.
Successfully training GANs can be notoriously challenging, however.
Numerous GAN models experience significant issues, such as mode collapse, non-convergence, model parameter oscillation, destabilization, vanishing gradients, and overfitting, resulting in an unbalanced training of the generator and discriminator <cit.>.
In contrast to typical GAN applications, the success of a GAN-based event generator in nuclear and particle physics depends on its ability to accurately reproduce correlations among the momenta of the particles, which becomes increasingly challenging beyond two dimensions.
Moreover, the multidimensional momentum distributions of events associated with nuclear and high-energy physics reactions, such as the two-pion photoproduction process considered in this work, exhibit highly complex patterns and range over orders of magnitude across the phase space.
The task of developing an appropriate GAN architecture that is able to simultaneously reproduce all the correlations among particle momenta, and accurately reproduce multidimensional histograms, is therefore rather difficult.
Machine learning event generators have gained prominence as efficient fast simulation tools in various scientific fields, including high-energy and nuclear physics <cit.>.
Unlike traditional simulation methods that rely on a theoretical framework for the underlying reaction, machine learning event generators learn from large datasets and use this knowledge to produce new events with high fidelity.
GANs have emerged as powerful tools in the field of fast simulation, where they learn to generate events that closely resemble reference data, capturing the underlying physics processes and their distributions <cit.>.
Furthermore, GANs have been employed to address the challenge of simulating detector effects in fast simulation <cit.>. This application of GANs helps bridge the gap between simulated and reference data, enabling more realistic and precise simulations for experimental analyses.
A comprehensive survey of existing ML-based event generators can be found in Ref. <cit.>.
In this study, we employ the architectural framework of the Least Squares GAN, which involves substituting the cross entropy loss function in the discriminator component of a conventional GAN with a least square term.
For further details, see Ref. <cit.>.
In the following, we describe the GAN architecture used to generate the synthetic data that reproduce the γ p →π^+ π^-p RE-MC pseudodata.
As mentioned above, two different GANs were developed and combined.
The detector simulation GAN (DS-GAN) was trained on PS-MC pseudodata to learn the detector effects, and was later inserted between the generator and the discriminator of the unfolding GAN (UNF-GAN) to unfold the GEN vertex-level information from REC pseudodata.
§.§ Detector simulation GAN (DS-GAN)
In order to capture the detector effects, we have developed an ML-based detector simulation using a conditional GAN <cit.>, as illustrated in Fig. <ref>.
Our approach involves training a conditional GAN generator to simulate the detector's smearing effect so that it generates synthetic REC detector-level events from input noise and PS-MC GEN events.
The GEN PS-MC accepted events are passed through the GEANT chain to obtain REC pseudodata.
As proposed by Bellagente et al. <cit.>, both the synthetic REC and REC pseudodata are “concatenated” with original GEN events and fed to the GAN discriminator as input to facilitate convergence.
After successful training, the DS-GAN generator serves as the ML detector surrogate that will be integrated into the UNF-GAN architecture.
Summarizing the model architecture of the DS-GAN, the generator, conditioned on accepted events (GEN), takes in as input a 100-dimensional array of random values with a mean of 0 and a standard deviation of 1.
The generator network consists of five hidden layers, each with 128 neurons, using a leaky rectified linear unit (ReLU) activation function. The final hidden layer is connected to a four-neuron output layer, which uses a linear function to represent the generated features.
At the end of the training, the DS-GAN generator learns how to convert the GEN accepted events into REC events, effectively mimicking the smearing due to the detector as described by GEANT.
The discriminator is made of a neural network with five hidden dense layers. The first three layers have 256 neurons each, while the fourth has 128 neurons and the fifth has 32 neurons. A leaky ReLU activation function is used for all the layers. To prevent overfitting during training, a 5% dropout rate is implemented for each hidden layer.
The last hidden layer is fully connected to a single-neuron output, activated by a linear function, where “1” indicates a true event and “0” is a fake event.
The DS-GAN was trained using about 1M two-pion event samples for 80K adversarial epochs, with an epoch defined as one pass through the training dataset.
Both the generator and discriminator were trained using the Adam optimizer <cit.> with a learning rate of 10^-5 and exponential decay rates for the moment estimates (β1 = 0.5, and β2 = 0.9).
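A possible implementation of the two networks just described is sketched below in PyTorch (ours; the layer sizes follow the text, while the class names, the LeakyReLU slope, and the way the condition is concatenated are assumptions):

import torch
import torch.nn as nn

class DSGenerator(nn.Module):
    # conditional generator: (100-d noise, GEN event) -> synthetic REC event
    def __init__(self, noise_dim=100, n_features=4):
        super().__init__()
        layers, width = [], noise_dim + n_features
        for _ in range(5):                              # five hidden layers of 128 neurons
            layers += [nn.Linear(width, 128), nn.LeakyReLU(0.2)]
            width = 128
        layers += [nn.Linear(width, n_features)]        # linear output: 4 REC features
        self.net = nn.Sequential(*layers)

    def forward(self, noise, gen_event):
        return self.net(torch.cat([noise, gen_event], dim=1))

class DSDiscriminator(nn.Module):
    # scores (REC candidate, GEN event) pairs; trained with a least-squares objective
    def __init__(self, n_features=4):
        super().__init__()
        sizes, layers, width = [256, 256, 256, 128, 32], [], 2 * n_features
        for s in sizes:
            layers += [nn.Linear(width, s), nn.LeakyReLU(0.2), nn.Dropout(0.05)]
            width = s
        layers += [nn.Linear(width, 1)]                 # linear single-neuron output
        self.net = nn.Sequential(*layers)

    def forward(self, rec_event, gen_event):
        return self.net(torch.cat([rec_event, gen_event], dim=1))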
§.§ Unfolding GAN (UNF-GAN)
The training process for the UNF-GAN is illustrated in Fig. <ref>, which depicts the variation of a typical GAN model structure consisting of a conditional generator and a discriminator.
The generator takes as input the photon energy generated by the RE-MC, along with a 100-dimensional white noise vector centered at zero with a unit standard deviation.
This combination of inputs allows the generator, implemented as a deep neural network, to transform the noise and photon energy into a minimal set of event features/variables that effectively describe the two-pion photoproduction reaction.
To strike a balance between execution time and convergence, the generator network is designed with 7 hidden dense layers.
The number of neurons in each layer follows the sequence: 16, 32, 64, 128, 256, 512, and 1024, all of which are activated by the ReLU function.
The last hidden layer is fully connected to a 4-neuron output layer, activated by a linear function.
This output layer represents the independent variables M^2_π^+π^-, M^2_pπ^-, t_π^+, and α_[π^+ p][π^-p'] that are specifically chosen to describe the reaction.
The synthetic GEN event features, generated by the conditional GAN generator, are then fed into the DS-GAN to incorporate the detector effects, and then compared to REC pseudodata obtained by passing the GEN RE-MC pseudodata through GEANT.
The training process involved utilizing approximately 400k two-pion event samples for a duration of around 200k adversarial epochs per UNF-GAN model.
Consistent configuration parameters for the Adam optimizer were maintained, utilizing the same settings as employed for the DS-GAN.
During the training, the generator and the discriminator engage in an adversarial competition, with both updating their parameters throughout the process. Eventually, the generator is able to generate synthetic REC samples that are indistinguishable from the REC pseudodata samples.
This means that the discriminator's ability to correctly classify whether a sample is genuine or synthetic approximates random chance.
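Schematically, one adversarial update of the UNF-GAN with the least-squares objective and the frozen DS-GAN surrogate could look as follows (ours; conditioning details, batch handling, and all names are assumptions, not the actual analysis code):

import torch
import torch.nn as nn

mse = nn.MSELoss()     # least-squares GAN objective

def unfgan_step(gen, ds_gan, disc, opt_g, opt_d, e_gamma, rec_data):
    # gen: UNF-GAN generator (noise + photon energy -> GEN_SYN features)
    # ds_gan: frozen detector surrogate (noise + GEN_SYN -> REC_SYN features)
    batch = rec_data.size(0)
    noise = torch.randn(batch, 100)

    # discriminator update: REC pseudodata -> 1, synthetic REC -> 0
    with torch.no_grad():
        rec_syn = ds_gan(torch.randn(batch, 100), gen(noise, e_gamma))
    opt_d.zero_grad()
    d_loss = (mse(disc(rec_data), torch.ones(batch, 1))
              + mse(disc(rec_syn), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # generator update: gradients flow through the surrogate, whose weights are
    # not in opt_g and therefore stay fixed
    opt_g.zero_grad()
    rec_syn = ds_gan(torch.randn(batch, 100), gen(noise, e_gamma))
    g_loss = mse(disc(rec_syn), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()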
§.§ Uncertainty quantification
As neural networks become increasingly employed in physics analysis, it becomes crucial to accurately assess the reliability of ML predictions.
The statistics of the synthetic samples can be made arbitrarily high, so that there is no need to consider a statistical uncertainty.
However, it is important to quantify the systematic uncertainty related to the training procedure, and for this a bootstrap resampling technique was employed.
For the DS-GAN, the procedure involved training a total of 20 neural networks independently from the beginning.
Each one was trained on a different random sample set drawn from the original dataset with replacement, resulting in datasets of the same size but with potentially different observations.
For the UNF-GAN a similar procedure was adopted, with 20 different networks trained independently using the same bootstrap resampling technique.
Moreover, each of the 20 UNF-GANs used a different DS-GAN of the 20 discussed above.
In this way, the systematic uncertainties associated with the DS- and UNF-GANs
are effectively combined.
While it is possible that using a higher number of bootstraps could potentially lead to more precise uncertainty estimates, we found that training 20 GANs provided reasonably stable and consistent results.
It is important to note that the specific number of bootstraps can vary depending on the characteristics of the problem, available data, and desired level of uncertainty quantification.
In this particular case, 20 bootstraps were deemed sufficient for accurately capturing and quantifying the uncertainties associated with the observables.
Furthermore, changing the network architecture was not essential because the convergence we achieved, along with the estimated error and uncertainty quantification, clearly indicate that this architecture is capable of accurately reproducing the data without introducing further systematic uncertainties.
§ RESULTS
In this section we now discuss the DS-GAN and UNF-GAN performance, comparing synthetic to the REC and GEN pseudodata.
We use the nomenclature REC_SYN and GEN_SYN to indicate synthetic data at the detector and vertex levels, respectively.
To visualize the comparison, we build marginal 1D and 2D histograms for some kinematic variables.
To show that correlations are correctly accounted for, we also study the distribution of one variable in some slices of the other variables.
Synthetic data are generated with the
bootstrap procedure detailed in Sec. <ref>, so that the standard deviation σ_SYN corresponds to the systematic uncertainty.
In all our results, the average μ_SYN is shown as a solid line, together with an error band of width ± 1σ_SYN, while pseudodata are represented by dots with their statistical uncertainty σ_pseudodata.
To quantify the level of agreement between the synthetic data and pseudodata, we plot the pull for each bin, defined as
pull = μ_SYN-μ_pseudodata/√(σ^2_SYN+σ^2_pseudodata),
where μ_pseudodata denotes the mean of the pseudodata.
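For illustration, the per-bin pull defined above can be computed with a short helper (variable names are chosen here for clarity, not taken from the analysis code):

```python
import numpy as np

def pulls(mu_syn, sigma_syn, mu_pseudo, sigma_pseudo):
    """Per-bin pull between synthetic and pseudodata histograms."""
    return (np.asarray(mu_syn) - np.asarray(mu_pseudo)) / \
           np.sqrt(np.asarray(sigma_syn)**2 + np.asarray(sigma_pseudo)**2)
```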
§.§ DS-GAN
The DS-GAN is trained on four independent variables: the invariant masses M_pπ^-^2 and M_π^+π^-^2, t_π^+, and the angle α_[π^+ p][π^-p'].
The comparison between REC_SYN and pseudodata PS-MC REC distributions is shown in Fig. <ref>.
In Fig. <ref> the comparison is extended to other physics-relevant distributions not used in the training and derived from the four above-mentioned variables, namely M_pπ^+^2, t_π^-, t, and cosθ.
The agreement, quantified by the pull distributions shown at the bottom of each plot, is remarkable, in both cases, with most of the points lying within 1σ.
This indicates that the DS-GAN is indeed able to learn the CLAS detector effects.
Bidimensional distributions from MC and synthetic data are shown in Fig. <ref>.
The π^+ absolute momentum resolution as obtained from pseudodata (REC-GEN) is shown in Fig. <ref>, along with synthetic data (REC_SYN-GEN).
The two distributions are in very good agreement, indicating that synthetic data incorporate the correct resolution of the detector.
Similar results hold for other kinematic variables of all particles.
These comparisons demonstrate the ability of the DS-GAN to learn and reproduce detector effects in a multidimensional space, even in the tails of the distributions.
This confirms that generative models can indeed be used as an efficient and fast proxy for more computationally expensive GEANT simulations <cit.>.
§.§ UNF-GAN
As described in Sec. <ref>, the final step in the closure test is to use REC RE-MC pseudodata to train the UNF-GAN, extract the GEN_SYN distributions, and compare them with GEN pseudodata.
Figure <ref> shows the comparison between GEN and GEN_SYN for the four training variables.
We can see a very good agreement between pseudo- and synthetic data at the vertex level, despite the fact that the UNF-GAN was trained on detector-level pseudodata.
This clearly demonstrates the success of the unfolding procedure.
Moreover, the vast majority of pulls lie within ± 1σ, indicating that the uncertainty quantification is appropriate.
The key point of this closure test is to demonstrate that synthetic data maintain the correlations of the original pseudodata.
We checked that this is indeed the case: in Fig. <ref> we display an example of 2D distributions featuring strong correlations.
We give a quantitative determination of the success of the procedure by calculating the pulls, shown in Fig. <ref>, which turn out to be normally distributed, as expected.
The good agreement and preservation of correlations remains valid for derived kinematic variables that were not used for training.
Examples are shown in Fig. <ref> for invariant and CM variables, and in Fig. <ref> for variables in the lab frame.
It is worth noting that in the lab frame the GEN pseudodata exhibits sharp features due to detector acceptance.
These features cannot be fully captured by the GANs, which are trained on invariant variables; even so, this results in only a ≲ 2σ local discrepancy in the 1D projections.
If better agreement is needed, lab frame variables can be added to the training set.
Finally, in Fig. <ref> we compare 1D distributions in a given bin of the other variables.
The success of this test shows that correlations underlying the multidifferential cross section are correctly reproduced in the synthetic datasets.
§ CONCLUSIONS AND OUTLOOK
One of the central results of this paper is the demonstration that a generative adversarial network can be used to reproduce a realistic multibody physics reaction.
As a case study, we have used two-pion photoproduction in the kinematics of the Jefferson Lab CLAS experiment.
This process represents an ideal test case, where several baryon and meson production mechanisms overlap, resulting in rich and complex observable distributions.
The nonuniformity of the CLAS detector response adds further complexity to the challenge.
In order to validate the framework, we have performed a closure test to demonstrate that synthetic data correctly reproduce the multidifferential cross section preserving correlations between kinematic variables.
Detector effects were also correctly unfolded by the procedure.
We deployed two MC event generators, one distributed according to pure phase space, and the other incorporating a realistic physics model.
Generated pseudodata were fed into a GEANT-based detector model to realistically take into account the detector response.
Phase-space pseudodata were used to train a GAN-based proxy to learn the detector effects, and realistic pseudodata were then used to train the unfolding GAN and generate synthetic copies of MC events.
The uncertainty quantification of the entire procedure was assessed by combining a bootstrap for the two NNs.
Comparison between the true and GAN-generated samples demonstrated that, within the quoted systematic error, the NN is able to reproduce training and derived kinematic variables, as well as to unfold the detector effects in multiple dimensions.
This work represents a first step towards a full AI-supported analysis of CLAS exclusive two-pion photoproduction data.
It demonstrates that the same analysis framework, trained on CLAS data, can provide a synthetic copy of the experimental data, preserving correlations between kinematic variables and unfolding the detector effects.
Physics interpretation in terms of production mechanisms, separating different contributions and extracting resonance parameters from the unfolded data, will follow.
An extension of this framework to include the different topologies and to extrapolate in a controlled (albeit model-dependent) way outside the detector acceptance is also in progress.
We thank J. Qiu for helpful discussions.
This work was supported by the Jefferson Lab LDRD project No. LDRD19-13 and No. LDRD20-18, and in part by the U.S. Department of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, manages and operates Jefferson Lab. ANHB is supported by the DFG through the Research Unit FOR 2926 (project number 409651613). TA was supported by a Ph.D. scholarship from Al-Baha University, Saudi Arabia. The work of NS was supported by the DOE, Office of Science, Office of Nuclear Physics in the Early Career Program. This work contributes to the aims of the U.S. Department of Energy ExoHad Topical Collaboration, contract DE-SC0023598.
|
http://arxiv.org/abs/2307.05946v3 | 20230712062331 | A Bayesian approach to quantifying uncertainties and improving generalizability in traffic prediction models | [
"Agnimitra Sengupta",
"Sudeepta Mondal",
"Adway Das",
"S. Ilgin Guler"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
Deep-learning models for traffic data prediction can have superior performance in modeling complex functions using a multi-layer architecture. However, a major drawback of these approaches is that most of these approaches do not offer forecasts with uncertainty estimates, which are essential for traffic operations and control. Without uncertainty estimates, it is difficult to place any level of trust to the model predictions, and operational strategies relying on overconfident predictions can lead to worsening traffic conditions. In this study, we propose a Bayesian recurrent neural network framework for uncertainty quantification in traffic prediction with higher generalizability by introducing spectral normalization to its hidden layers. In our paper, we have shown that normalization alters the training process of deep neural networks by controlling the model's complexity and reducing the risk of overfitting to the training data. This, in turn, helps improve the generalization performance of the model on out-of-distribution datasets. Results demonstrate that spectral normalization improves uncertainty estimates and significantly outperforms both the layer normalization and model without normalization in single-step prediction horizons. This improved performance can be attributed to the ability of spectral normalization to better localize the feature space of the data under perturbations. Our findings are especially relevant to traffic management applications, where predicting traffic conditions across multiple locations is the goal, but the availability of training data from multiple locations is limited. Spectral normalization, therefore, provides a more generalizable approach that can effectively capture the underlying patterns in traffic data without requiring location-specific models.
§ INTRODUCTION
Efficient traffic management including traffic control and congestion mitigation rely on accurate short-term traffic predictions into the future (i.e., ranging from 5 min to 1 hr). Different traffic control strategies, such as ramp metering or detour suggestions, are reliant on precise traffic forecasting in the short and near-short future. Precise forecasting across free-flow and congested traffic states is often challenging due to the uncertain and chaotic nature of transportation systems. To address the challenges of short-term traffic prediction, various modeling approaches have been proposed in the literature, including classical statistical models and machine learning (ML)-based methods. These models aim to capture the complex and dynamic nature of traffic flow, which arises due to the interaction of multiple factors such as traveler behavior, road networks, traffic incidents, and weather conditions <cit.>. Several statistical parametric techniques including historical average algorithms, smoothing techniques, autoregressive integrated moving average (ARIMA) <cit.> were proposed to model the temporal fluctuations in traffic.
However, parametric methods are often limited in their performance due to the specific assumptions on the functional relationship. As a result, non-parametric approaches that do not specify a specific functional form are a promising alternative to model traffic patterns with greater transferability and robustness across datasets <cit.>.
For example, methods like nearest neighbors <cit.>, support vector machine <cit.>, and Bayesian network <cit.> have been used in short-term traffic forecasting.
Deep learning (DL) approaches have demonstrated superior performance in predicting future traffic conditions compared to parametric and other non-parametric approaches. Neural networks (NN) are capable of approximating a complex relationship using a series of non-linear transformations of the input data. Therefore, researchers have considered using NN to model temporal patterns in traffic data <cit.>.
In particular, recurrent neural network (RNN) <cit.> and its variants like long short-term memory (LSTM) are exclusively designed to handle temporal data by accounting for the temporal correlation in data. For example, <cit.> used RNN architecture for travel time prediction, whereas, LSTM based architectures have been used for short-term travel speed prediction <cit.> and predicting traffic under extreme conditions <cit.>.
Despite the success of DL in traffic forecasting, most models suffer from a significant drawback – the models output deterministic predictions of the short-term traffic variable (e.g., flow, density or speed) and their performance are assessed based on the prediction error <cit.>. These models do not include uncertainty estimates for the prediction, which are crucial for comprehending the model's limitations and for further application in an active traffic management system. Uncertainty estimation is a crucial aspect of traffic management, as it enables decision-makers to make informed predictions by providing a measure of confidence in model outputs. Traffic data is inherently noisy and complex, making it challenging to make accurate predictions with point estimates alone. Failure to account for uncertainty can lead to sub-optimal traffic management decisions that do not consider the range of possible outcomes, potentially leading to increased congestion, delays, or safety risks.
Prediction uncertainty can be either aleatoric, which arises from the intrinsic randomness in the data generation process, or epistemic, which is uncertainty in the model specification <cit.>. Aleatoric uncertainty may arise due to sensor malfunction and general randomness in traffic demand, and hence is irreducible with higher data volumes. To the contrary, epistemic uncertainty can be reduced by collecting more data and developing more complex models allowing the exploration of under-represented regions in the training data space.
Limited research on uncertainty quantification have been performed for traffic prediction. For example, stochastic model based on a seasonal autoregressive integrated moving average (SARIMA) model combined with a generalized autoregressive conditional heteroscedasticity (GARCH) model have been utilized to generate traffic flow forecasts and prediction intervals <cit.>. Further, for real-time ITS applications, adaptive Kalman filters with time-varying process variances were used to handle the stochastic traffic time series models <cit.>.
Prediction intervals for bus travel time or freeway travel time have been developed using a simple 2-layer NN with Bayesian and delta methods to address the complexity associated with the underlying traffic processes and the uncertainty associated with the data used to infer travel time <cit.>.
DL models with deterministic outputs that are trained using stochastic gradient descent can efficiently converge to a local minimum with a high probability depending on the chosen initial points, where all local minima attain similar loss values <cit.>. However, to account for model variance, uncertainty quantification in DL models using variational inference explores model specifications in the neighborhood of one such local minimum. This is achieved by fitting probability distributions on model parameters, rather than using deterministic weights.
For instance, <cit.> used an encoder-decoder architecture for feature learning, followed by a LSTM prediction module with Bayesian inference to estimate model uncertainty, arising due to model specifications in the neighborhood of one local minimum.
To the contrary, deep-ensembling techniques that combine inferences from multiple local minima (instead of one in variational inference) have also been used for uncertainty quantification in traffic studies, highlighting some advantages over variational inference. For example, <cit.> used a Bayesian hyperparameter optimization to select a set of high-performing configurations and fit a generative model to capture the joint distributions of the hyperparameters. Then, using samples of hyperparameters generated from that distribution, an ensemble of models were trained to estimate model uncertainty. Conversely, <cit.> combined variational inference and deep ensembling techniques by using Monte Carlo dropout and adaptive weight averaging to find multiple local minima (and, model parameters), which are suitably combined for estimating uncertainty.
Although these studies presented different Bayesian approaches to modeling uncertainty, the models were primarily trained to predict on data with the same or similar distributions, and hence lack generalizability. This is significant since the model's performance on datasets with different distributions, referred to as out-of-distribution datasets, can vary significantly among these local minima.
This study focuses on examining the generalizability of models in terms of performance under data perturbation. Ideally, for a model that converges to a flat local minima, the loss function remains relatively unchanged even when subjected to data perturbations. This characteristic suggests a desirable level of generalizability <cit.>.
Past research has used adversarial training <cit.> to achieve insensitivity to training data perturbation, which is not always sufficient for achieving insensitivity to test data perturbation. The goal of this study is to develop an uncertainty quantification model for traffic flow prediction that can generalize well on datasets with different distributions.
Specifically, we use dropout and a normalization technique, namely spectral normalization to establish a Bayesian recurrent neural network that has better generalizability.
The novel contributions of this paper are:
* introducing spectral normalization to the dropout-based uncertainty quantification framework to enhance the generalizability of uncertainty quantification models
* conducting a systematic analysis to examine the impact of normalization on model training and understanding how it improves the model's generalizability.
The structure of this paper is organized as follows. First, we provide a background on the DL model and Bayesian approaches used for uncertainty estimation, as well as the normalization techniques employed in this study. We then describe the data and present a comparison of the performance of our proposed models with a baseline model across various prediction tasks. Finally, based on the results, we provide concluding remarks.
§ BACKGROUND
In this section, we provide an overview of the DL forecasting model i.e., long short-term memory (LSTM) and the uncertainty quantification method used in this paper.
§.§ Long short-term memory
Feed-forward neural network architectures are not explicitly designed to handle sequential data. A class of DL approaches, recurrent neural network (RNN), uses a feedback mechanism where the output from a previous time step is fed as an input to the current time step such that information from the past can propagate into future states. This feedback mechanism preserves the temporal correlation and makes it suitable to capture the temporal evolution of traffic parameters. However, RNNs are incapable of handling the long-term dependencies in temporal data due to the vanishing gradient problem <cit.>. Long short-term memory (LSTM) <cit.>, a type of RNN, consists of memory cells in its hidden layers and several gating mechanisms, which control information flow within a cell state (or, memory) to selectively preserve long-term information.
The objective is to update the cell state, C_t, over time using the input x_t and the previous time step's hidden state, h_{t-1}. This process involves several key operations. First, a forget gate, f_t, selectively filters information from the past. Then, an input gate, i_t, regulates the amount of information from the candidate memory cell, C̃_t, that should be incorporated into the current cell state, C_t. Finally, an output gate, o_t, governs the update of the hidden state, h_t. See Figure <ref>. The computations are represented as follows:
C̃_t = tanh(W_c [h_{t-1}, x_t] + b_c)
C_t = f_t ⊙ C_{t-1} + i_t ⊙ C̃_t
h_t = o_t ⊙ tanh(C_t)
The outputs from the forget gate, f_t, input gate, i_t, and output gate, o_t are computed as shown below:
f_t = σ(W_f [h_{t-1}, x_t] + b_f)
i_t = σ(W_i [h_{t-1}, x_t] + b_i)
o_t = σ(W_o [h_{t-1}, x_t] + b_o)
Here, σ and tanh represent non-linear activation functions, while W_f, W_i, W_o, and W_c denote weight matrices corresponding to the forget gate, input gate, output gate, and candidate memory cell, respectively. Similarly, b_f, b_i, b_o, and b_c represent the corresponding bias vectors.
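As a minimal, framework-free illustration of the gate equations above (not the implementation used in this study), one LSTM time step can be written as:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step; W and b hold the weight matrices and biases for the
    forget (f), input (i), output (o) and candidate (c) transformations."""
    z = np.concatenate([h_prev, x_t])            # [h_{t-1}, x_t]
    f_t = sigmoid(W['f'] @ z + b['f'])           # forget gate
    i_t = sigmoid(W['i'] @ z + b['i'])           # input gate
    o_t = sigmoid(W['o'] @ z + b['o'])           # output gate
    c_tilde = np.tanh(W['c'] @ z + b['c'])       # candidate memory cell
    c_t = f_t * c_prev + i_t * c_tilde           # cell state update
    h_t = o_t * np.tanh(c_t)                     # hidden state update
    return h_t, c_t
```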
§.§ Uncertainty quantification
Bayesian DL captures epistemic or aleatoric uncertainty by fitting probability distributions over the model parameters or model outputs, respectively.
Epistemic uncertainty is modeled by placing a prior distribution over a model's weights and then calculating the posterior distribution of the weights given the training data, i.e., P(w|𝒟). During inference, the predictive probability of ŷ for a test point x̂ is computed as P(ŷ|x̂) = 𝔼_P(w|𝒟) P(ŷ|x̂, w). However, solving this is intractable. Popular methods use variational inference <cit.> to obtain the posterior distribution; however, these models are computationally expensive. One such method is `Bayes by Backprop' <cit.>, which approximates the posterior distribution P(w|𝒟) with a simple distribution q(w|θ) parameterized by θ by minimizing their Kullback-Leibler (KL) divergence:
θ^* = argmin_θ KL[q(w|θ) || P(w|𝒟)]
In this study, we use dropout, a widely used regularization technique, to perform Bayesian inference by approximating the posterior distribution of the weights of a NN <cit.>. Dropout is effective in preventing overfitting by randomly deactivating a fraction of neurons during each training iteration. This encourages the NN to become more robust and less dependent on specific neurons, thus mitigating overfitting to the training data.
To achieve Bayesian inference, we incorporate dropout at each layer of the NN during training. Additionally, we apply dropout during inference on the test data, enabling us to sample from the approximate posterior distribution. This approach, often referred to as Monte Carlo dropout, allows us to estimate uncertainty and obtain a more comprehensive understanding of model performance.
In this approach, q(𝐰|θ) is approximated as a mixture of two Gaussians with small variances and the mean of one of the Gaussians is fixed at zero. In a regression setting, the epistemic uncertainty is captured by the predictive variance of the model outputs.
To the contrary, the aleatoric uncertainty is estimated by fitting a Gaussian distribution over the model output to capture the noise in data.
The model is trained to predict the mean, ŷ and variance, σ^2. This results in the objective function being a likelihood minimization as given below.
ℒ(θ) = (1/D) ∑_i [ (1/2) σ̂_i^{-2} ||y_i - ŷ_i||^2 + (1/2) log σ̂_i^2 ]
where D is the cardinality of data, y and ŷ are the true and predicted values respectively, and σ^2 is the prediction variance. For numerical stability, the model is trained to predict the natural logarithm of variance i.e., s_i:=logσ̂_i^2. Consequently, the objective function becomes,
ℒ(θ) = (1/D) ∑_i [ (1/2) exp(-s_i) ||y_i - ŷ_i||^2 + (1/2) s_i ]
The total predictive uncertainty for T stochastic forward passes of the model can be approximated as shown in Equation <ref>, where at each pass, a fixed fraction of neurons in each layer are randomly dropped.
Var(y) ≈ [ (1/T) ∑_{t=1}^{T} ŷ_t^2 - ( (1/T) ∑_{t=1}^{T} ŷ_t )^2 ]_Epistemic + [ (1/T) ∑_{t=1}^{T} σ̂_t^2 ]_Aleatoric
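A sketch of how this heteroscedastic loss and the Monte Carlo dropout decomposition could be implemented is shown below, assuming TensorFlow/Keras and a network whose final 2-unit layer outputs the predicted mean and log-variance; this is an illustration, not the exact analysis code.

```python
import numpy as np
import tensorflow as tf

def heteroscedastic_loss(y_true, y_pred):
    """Gaussian negative log-likelihood; y_pred holds [mean, log-variance]."""
    mu, log_var = y_pred[:, 0], y_pred[:, 1]
    sq_err = tf.square(tf.squeeze(y_true) - mu)
    return tf.reduce_mean(0.5 * tf.exp(-log_var) * sq_err + 0.5 * log_var)

def mc_dropout_predict(model, x, n_passes=50):
    """T stochastic forward passes with dropout kept active, returning the
    predictive mean plus epistemic and aleatoric variances."""
    outs = np.stack([model(x, training=True).numpy() for _ in range(n_passes)])
    mu, log_var = outs[..., 0], outs[..., 1]
    epistemic = mu.var(axis=0)                   # spread of the predicted means
    aleatoric = np.exp(log_var).mean(axis=0)     # average predicted data variance
    return mu.mean(axis=0), epistemic, aleatoric
```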
§ METHODOLOGY
In this section, we will define the model architectures used in our study, as well as the normalization schemes and transfer learning approaches that were used.
§.§ Model training
A stacked LSTM model with Bayesian inference by dropout is used for traffic flow prediction and uncertainty quantification in this study. Our model consists of three LSTM layers with 20, 20 and 10 units, followed by four dense layers with 10, 10, 6 and 2 units, respectively, with the LeakyReLU activation function <cit.> used for the dense layers.
In each layer, a fraction of the hidden units is stochastically dropped during model training and inference by adding dropout. For each model, dropout rates of 2, 5 and 10 percent are used to evaluate the sensitivity of model accuracy to the dropout rate.
Specifically for the LSTM, we use recurrent dropout <cit.>, which applies dropout to the cell update vector C̃_t (see Equation <ref>) instead of dropping gate inputs <cit.> or the cell state <cit.>, which adversely affects its long-term memory.
C_t = f_t ⊙ C_{t-1} + i_t ⊙ d(C̃_t)
where d(·) refers to the dropout mask.
The models are trained and validated on 60% and 15% of the dataset, while the remaining 25% is reserved for evaluating model performance. During the training process, we minimize the loss function (shown in Equation <ref>) for 100 epochs and choose the model with the lowest validation error. We use Adadelta <cit.> as the optimizer with the learning rate of 0.10, ρ value of 0.95, and epsilon of 1e-7 to train the models. Once the models are trained, the stochastic forward pass is repeated for a finite number of runs (50 in our case) to obtain uncertainty estimates.
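A possible Keras realization of this architecture is sketched below. The layer sizes and optimizer settings follow the text, while the exact dropout placement and Keras' built-in recurrent_dropout are approximations of the variant described in Equation <ref>; the loss reuses the heteroscedastic_loss sketched earlier.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_bayesian_lstm(n_steps, n_features, dropout=0.02):
    """Stacked LSTM with dropout kept active at test time via training=True."""
    inputs = tf.keras.Input(shape=(n_steps, n_features))
    x = layers.LSTM(20, return_sequences=True, recurrent_dropout=dropout)(inputs)
    x = layers.LSTM(20, return_sequences=True, recurrent_dropout=dropout)(x)
    x = layers.LSTM(10, recurrent_dropout=dropout)(x)
    for units in (10, 10, 6):
        x = layers.Dense(units)(x)
        x = layers.LeakyReLU()(x)
        x = layers.Dropout(dropout)(x)
    outputs = layers.Dense(2)(x)                  # [mean, log-variance]
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.10,
                                                         rho=0.95, epsilon=1e-7),
                  loss=heteroscedastic_loss)      # defined in the earlier sketch
    return model
```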
§.§ Normalization techniques
Normalization methods usually improve the process of NN training, particularly ensuring smoother gradients, faster training, and better generalization accuracy.
The gradients of the weights in any layer are highly correlated with the outputs from the previous layer. Thus, changes in the output of one layer cause significant changes in the summed inputs to the following layer. Batch normalization <cit.>, a technique that controls the mean and variance of inputs across mini-batches, has been shown to improve training efficiency for NNs. However, applying batch normalization to LSTM is more challenging due to the presence of a temporal dimension, where the hidden state of the network is dependent on the previous time steps. Applying batch normalization to the hidden state could result in the loss of temporal dependencies, which can negatively impact the model's performance.
Alternatively, we investigate the use of spectral normalization as an alternative approach to enhance the generalizability of models. We also compare its performance with another normalization technique called layer normalization. A brief overview of both normalization techniques are presented below:
§.§.§ Layer normalization
Layer normalization <cit.> controls the mean and variance of the summed inputs to each layer of the NN. For an input x to the NN, the outputs x^l = (x_1^l, x_2^l, ⋯, x_H^l) from layer l are computed by:
x^l = f^l(w^l x^{l-1} + b^l)
for l = 1, 2, ⋯, L where f(·) is an activation function, w and b represent the layer-specific weight matrix and bias vector, respectively. Here, H represents the number of hidden units in layer l. Layer normalization re-centers and re-scales the vector x^l using its mean and standard deviation, as shown below, before feeding to subsequent layers.
μ^l = (1/H) ∑_{i=1}^{H} x_i^l and σ^l = √( (1/H) ∑_{i=1}^{H} (x_i^l - μ^l)^2 + ϵ )
LN_{γ,β}(x^l) ≡ γ ⊙ ( (x^l - μ^l) / σ^l ) + β
where γ and β are tunable scale and shift parameters, and ϵ is added for numerical stability.
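A compact sketch of this transform, applied to one layer's summed inputs, is:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Re-center and re-scale the summed inputs x of one layer (last axis)."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = np.sqrt(x.var(axis=-1, keepdims=True) + eps)
    return gamma * (x - mu) / sigma + beta
```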
§.§.§ Spectral normalization
To stabilize NNs to data perturbation, spectral normalization <cit.> normalizes the weights of the NN.
For a NN, let
f_Θ:x → W_Θx+ b_Θ be an affine transformation and ξ be a perturbation vector with small l_2 norm, the response of the network to perturbation can be expressed as:
‖f_Θ(x+ξ) - f_Θ(x)‖_2 / ‖ξ‖_2 = ‖W_Θ ξ‖_2 / ‖ξ‖_2 ≤ σ(W_Θ)
where σ(W_Θ) is the spectral norm of the weight matrix W_Θ, defined as its maximum singular value,
σ(W_Θ) = max_{ξ ≠ 0} ‖W_Θ ξ‖_2 / ‖ξ‖_2
Further, decomposing the network into its layer-specific weight matrices and activation functions (which are 1-Lipschitz for common choices such as ReLU and LeakyReLU), it can be shown that
σ(W_Θ) ≤ ∏_{l=1}^{L} σ(w^l)
Therefore, by constraining the spectral norm of each weight matrix, w^l, model sensitivity to perturbation can be reduced.
In contrast to spectral norm regularization <cit.>, which penalizes the spectral norm by adding an explicit regularization term to the loss function, the layer weights are simply divided by their corresponding spectral norm in our adopted approach.
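The weight rescaling described above can be sketched with a few steps of power iteration to estimate the largest singular value; this is an illustrative stand-alone routine, not the training-time implementation.

```python
import numpy as np

def spectral_normalize(W, n_iter=1, u=None, seed=None):
    """Divide a weight matrix by an estimate of its spectral norm obtained
    via power iteration; `u` can be carried over between training steps."""
    rng = np.random.default_rng(seed)
    if u is None:
        u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= (np.linalg.norm(v) + 1e-12)
        u = W @ v
        u /= (np.linalg.norm(u) + 1e-12)
    sigma = u @ W @ v                       # approximate largest singular value
    return W / sigma, u
```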
§.§ Transfer learning
Transfer learning involves 'transferring' knowledge from a source domain to perform a similar task in another domain <cit.>. Transfer learning strategy eliminates the requirement to evaluate a new problem from scratch, rather the network is trained using data from the source domain and suitably adapted to new settings. This significantly reduces the learning time and amount of transfer-domain data required for training. Usually the discriminating layer in the NN is retrained using a small amount of data from the transfer domain, if available. In our case, we retrain the dense layers of the NN, while preserving the LSTM layers, to adapt the model to perform traffic prediction at the transfer location.
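In practice, retraining only the dense head while preserving the LSTM layers could look like the following sketch; the loss function, optimizer settings, and validation split are assumptions consistent with the text.

```python
import tensorflow as tf

def retrain_dense_head(model, x_new, y_new, loss_fn, epochs=50):
    """Freeze the LSTM layers and retrain the dense layers on a small amount
    of data from the transfer location."""
    for layer in model.layers:
        if isinstance(layer, tf.keras.layers.LSTM):
            layer.trainable = False          # keep the learned temporal features
    model.compile(optimizer=tf.keras.optimizers.Adadelta(learning_rate=0.10),
                  loss=loss_fn)              # recompile after changing trainability
    model.fit(x_new, y_new, epochs=epochs, validation_split=0.2, verbose=0)
    return model
```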
§ DATA
The data used in this study for model training and evaluation is obtained from the California Department of Transportation's Performance Measurement System (PeMS), which is popularly used for traffic data modeling. Traffic data (flow, occupancy, speed) are recorded using vehicle detector sensors (VDS) along the freeways and ramps. Raw data sampled at 30 second intervals are aggregated every 5 minutes.
Flow data for one year (i.e., 104942 samples) from VDS 1114805 on California Interstate-05 NB in District 11 were used for training and testing of the models. The performance of the models is evaluated on 25% of the dataset that was never used in training. However, since both training and test data share the same distribution characteristics, evaluating model performance on the test split of the dataset only ensures model robustness within the bounds of the data distribution. The generalizability of the model is therefore additionally evaluated on a set of out-of-distribution datasets.
To identify the out-of-distribution datasets, different detector locations within District 11 of California were compared. The similarity of traffic flow at these other locations were evaluated using two techniques: KL divergence and correlation analysis. KL divergence was used to compare the probability distributions of traffic flow between different stations, while correlation analysis was used to measure the strength and direction of the linear relationship between traffic data from multiple stations. The KL divergence of daily traffic flows from multiple detector locations was computed with respect to the training station (i.e., VDS 1114805) over a period of 30 days. Multiple days of data were used to account for the stochasticity in daily traffic patterns. We then ranked the stations based on the median KL divergence and selected two stations: (1) one with the highest KL divergence, indicating the most distinct traffic pattern and (2) another with a KL divergence in between the training station and that of the station with highest KL divergence. A similar analysis was conducted using the correlation as a metric and resulted in the selection of the same two locations. The similarity scores for the training and out-of-distribution datasets are presented in Figure <ref>. The traffic flow patterns at these three locations over a period of 2 days is shown in Figure <ref> to illustrate the existence of different patterns of traffic.
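The station-similarity screening can be illustrated with a simple histogram-based KL divergence between the training station's flow and that of a candidate station; the bin count and smoothing constant below are illustrative choices.

```python
import numpy as np

def flow_kl_divergence(flow_train, flow_candidate, bins=50, eps=1e-12):
    """Approximate D_KL(P_train || P_candidate) from daily flow samples."""
    lo = min(flow_train.min(), flow_candidate.min())
    hi = max(flow_train.max(), flow_candidate.max())
    p, edges = np.histogram(flow_train, bins=bins, range=(lo, hi))
    q, _ = np.histogram(flow_candidate, bins=edges)
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```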
§ RESULTS
In this study, three DL models – 1) regular Bayesian LSTM without normalization, 2) Bayesian LSTM with layer normalization, and 3) Bayesian LSTM with spectral normalization are considered for uncertainty quantification in traffic flow prediction.
Stochastic dropout is used for all models to estimate the prediction mean and total uncertainty (given by total standard deviation, see Equation <ref>) through finite number of model runs. To gain a deeper understanding of the impact of normalization, the total uncertainty is further decomposed into aleatoric and epistemic uncertainties.
The performance evaluation of the models includes assessing the accuracy of predictions and the level of uncertainty. We evaluate the model prediction performances by comparing the prediction mean with the corresponding true flow values using three metrics: root mean squared error (RMSE), mean absolute percentage error (MAPE) and R^2 as defined below.
RMSE = √( (1/N) ∑_{i=1}^{N} (y_i - ŷ_i)^2 )
MAPE = (1/N) ∑_{i=1}^{N} | (y_i - ŷ_i) / y_i |
R^2 = 1 - ∑_{i=1}^{N} (y_i - ŷ_i)^2 / ∑_{i=1}^{N} (y_i - y̅)^2
where y_i represents the 'ground truth' or true value of observation i, ŷ_i is the corresponding predicted value, and y̅ is the mean of the true values, for i = 1, 2, …, N.
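These three metrics can be computed directly from the prediction means, for example:

```python
import numpy as np

def prediction_metrics(y_true, y_pred):
    """RMSE, MAPE and R^2 as defined above (y_true assumed nonzero for MAPE)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mape = np.mean(np.abs(err / y_true))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return rmse, mape, r2
```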
§.§ Performance on training dataset
First, we present the evolution of training and validation losses for the models during the training process corresponding to a dropout rate of 2% in Figure <ref>.
For the regular dropout model without any normalization, we observed that both the training and validation losses initially decreased as the model was trained, indicating that the model was effectively learning from the training data and generalizing well to unseen data, i.e., the validation split. However, after a certain number of epochs the validation loss started to increase, meaning the model was no longer generalizing well to the validation data and was instead memorizing the training data - a phenomenon referred to as `overfitting'. This indicates that the model is becoming too complex and may be fitting noise in the training data.
It is also important to analyze the rate at which the validation loss increases, because a faster rate of increase indicates that the model is overfitting more quickly, which means that the model's generalizability to unseen data may be severely limited. The Bayesian LSTM with layer normalization still shows some signs of overfitting; however, this happens at a much lower rate compared to the case when no normalization was used. By controlling the mean and standard deviation of inputs to different layers of the NN, layer normalization allowed the model to focus more on important features and patterns in the data, rather than fitting noise in the training data. Hence, the model's performance on unseen data can be improved with layer normalization by reducing overfitting, while still maintaining a high level of performance on the training data.
In contrast, when applying the spectral normalization method, both the training and validation losses exhibited decreasing trends. This indicates that the model was not overfitting and suggests that the normalization technique effectively reduced the complexity of the model.
By controlling the spectral norms of the weight matrices in the NN, this method encouraged the model to learn simpler and more generalized representations of the data. This, in turn, can lead to improved generalization performance on new data.
However, due to over-regularization, both the training and validation losses for the spectral normalized model were higher compared to the regular model and the layer-normalized model.
Specifically, the validation losses for the regular, layer-normalized, and spectral normalized models were -1.3382, -1.3990, and 1.3122 for 2% dropout. Additionally, the experiments were performed for dropout rates of 5% and 10%, however the results are only presented corresponding to 2% dropout for brevity. The results and conclusions look similar for the other dropout rates, however, the prediction accuracy decreases and overfitting increases for larger dropout rates.
In the context of evaluating the prediction performance of each model on the test data, the mean prediction was calculated. The results are presented in Table <ref>, which includes the RMSE, R^2, and MAPE metrics. Specifically, the regular model exhibited the lowest RMSE, however the layer- and spectral normalization did not increase the RMSE much. The general predictions are fairly accurate – the mean prediction (red curve) accurately follows the real traffic flows (black curve), see Figure <ref>. Further, the RMSE and R^2 values follow similar trends as the validation loss obtained for each model. This behavior can be attributed to the fact that the validation loss, as defined in Equation <ref>, also takes into account the squared deviations between the predicted and true values.
Next, the uncertainty estimates are compared. Figure <ref> shows the 95th percentile confidence intervals of the estimates of the aleatoric and epistemic uncertainties using grey and red bands in each subplot. As depicted, there are significant differences in the uncertainty estimates among the three models.
To better understand how the overall aleatoric and epistemic uncertainty of each model compares, Figure <ref> plots the distribution of the magnitude of the uncertainties estimated over the test data. In these box-plots, the colored boxes represent the boundaries for the 25th and 75th percentile of the standard deviations, with the notch inside the box representing the median. The whiskers extend from the box by 1.5 times the inter-quartile range and flier points are those past the end of the whiskers.
The figures illustrate that, although the aleatoric uncertainty estimates in layer normalized model exhibit a high degree of dispersion, the median standard deviation is lower than the regular model. Similarly, the layer normalized model outperforms the regular model in terms of epistemic uncertainty. Nevertheless, in both cases, the spectral normalization approach effectively controls the standard deviation.
The variation in the magnitude of the estimated aleatoric and epistemic uncertainties over time are shown in Figure <ref>.
The aleatoric uncertainty estimates follow the general trend of the data for all models, with lower uncertainty observed during periods characterized by lower flow rates and greater uncertainty observed during periods of higher flows, such as peak congestion. This observation can be attributed to the chaotic and unpredictable behavior of the system during congested states, which are commonly associated with higher flows.
However, minor differences are observed for the regular and layer normalized models, where they show a slight increase in aleatoric uncertainty during low flow regions (0 to 6 hr and 24 to 30 hr) and the morning peak congestion period (around 8 hr), compared to other flow regimes. This may be attributed to the limited knowledge of the models about the low flow and peak congestion regimes, as only approximately 20% and 6% of the training data belonged to these regimes, respectively.
This effect is particularly evident in the epistemic uncertainty, where we observe a more pronounced higher standard deviation specifically for low flow regimes in these two models. In contrast, the spectral normalized model was able to capture trends despite the limited representation of low flow data in the training dataset. This suggests that the spectral normalization approach may be more effective at capturing and generalizing to unseen datasets, compared to the other models.
However, for regimes with higher data representation, the layer normalized model performs better than the spectral normalized model and therefore provides lower epistemic uncertainty estimates.
§.§ Performance on out-of-distribution datasets
Here, we aim to investigate the impact of normalization on the generalizability of the models by evaluating their performance, both in terms of accuracy and uncertainty, on out-of-distribution datasets, namely VDS 1114211 and VDS 1118170. Initially, we assess the performance of the trained models on these datasets without any additional training, providing insights into how well the models can generalize to data that differs from the training distribution. Subsequently, we proceed to retrain the models using partial data from these out-of-distribution datasets. Finally, we re-evaluate the models' performance to determine the impact of retraining on their ability to handle these datasets.
The prediction accuracy of the models in transfer learning without and with retraining on out-of-distribution datasets is shown in Table <ref> corresponding to dropout rate of 2%.
We observe that the performance of the models at a new station is not the same as that at the training station. This is expected since the models are being tested on samples that were not encountered during training. We observe the regular model to exhibit the highest R^2 and lowest RMSE value compared to both the layer and spectral normalized models. Similar trends hold true for other dropout values and are hence omitted for brevity. However, it was observed that as the dropout rate increases, the models exhibit higher regularization, resulting in a decrease in prediction accuracy.
Surprisingly, even when faced with significantly different temporal patterns in these out-of-distribution datasets, we observe acceptable levels of
prediction performance – RMSE values are not observed to increase.
Interestingly, the model's performance is contingent upon the underlying data distribution, rather than the temporal pattern itself. For instance, encountering previously unseen values can lead to errors or uncertainties in the model's predictions. To further comprehend this phenomenon, we can refer to Figure <ref>, which illustrates a comparison between the distribution of the normalized flow values from out-of-distribution datasets and the training data.
A sample of model predictions over a 48 hour period on one of the out-of-distribution datasets, namely VDS 1118170 without retraining is shown in Figure <ref>.
It can be seen that the models' predictions in low flow regimes tend to be characterized by errors and uncertainty. This is likely due to the data distribution of VDS 1118170 exhibiting a high skewness with an abundance of low flow values that were not encountered during the model's training phase.
Interestingly, the spectral normalized model provides a constant prediction for the low flow regimes, unlike the regular or layer normalized models, see Figure <ref>. The spectral normalized model treats extreme low flows as perturbations to the low flow values observed during training and therefore restricts its predictions to a constant value. However, it is worth noting that all models performs remarkably well in other flow regimes, resulting in low RMSE values within the overlapping regions of the distributions.
Note that from a traffic prediction perspective, errors in the low flow regime are less critical since congestion management strategies are targeted for high flow regimes.
In contrast, the distribution of VDS 1114211 closely resembles the training distribution, suggesting that the model has encountered similar flow conditions during training, resulting in accurate predictions and low RMSE values.
When the dense layers (and layer normalization parameters for the layer normalized model) of the models are retrained using 20% of the data from each of these locations, the RMSE values tend to decrease in the majority of cases. However, it is important to note that the MAPE increases in a few cases. The reason behind this behavior lies in the objective of transfer learning, which aims to minimize the loss function described in Equation <ref>, primarily based on the squared error loss, and the sensitivity of MAPE to errors in low flow regions, which might be affected during the retraining process.
Normalization is also observed to have a significant impact on uncertainty estimation. The uncertainty plots presented in Figures <ref> and <ref> demonstrate the aleatoric and epistemic uncertainty at the two test locations, both before and after retraining. Consistently with the trends observed in the training data, we find that normalization reduces both aleatoric and epistemic uncertainty in these plots. Notably, spectral normalization exhibits a greater reduction in uncertainty compared to layer normalization. This advantage becomes more pronounced as the dissimilarity between the out-of-distribution datasets increases. Despite the marginally higher RMSE obtained with spectral normalization, one might still prefer to utilize it due to its ability to provide more confident predictions. The reduced uncertainty associated with spectral normalization enhances the model's reliability and provides a higher level of confidence in its predictions, which can be crucial in certain applications.
Furthermore, it is important to consider that aleatoric uncertainty estimates obtained on these datasets without retraining might not accurately reflect their true representation. This is because the estimated data variance (σ^2) is learned solely from the training data and remains unchanged. However, after retraining, the models exhibit a more precise estimation of data uncertainty as they adapt to the characteristics of the new datasets.
Notably, for the spectral normalized model, the standard deviation increased for both stations, accounting for changes in flow patterns between the training and out-of-distribution datasets. On the other hand, the other models did not demonstrate significant changes in the aleatoric uncertainty estimates. Regarding epistemic uncertainty, there were no notable changes observed across the models. This observation suggests that the regular and layer normalized models tend to be overconfident in their predictions, possibly due to their inability to generalize well to out-of-distribution datasets. In contrast, the spectral normalized model exhibits a more realistic estimation of uncertainty, indicating its ability to provide reliable and generalizable predictions.
§.§ Model Interpretation
To gain a deeper understanding of the model performances, traffic time series were labeled into four regimes: 1) low flow (0 to 6 hr), 2) increasing flow (6 to 8 hr), 3) high flow (8 - 18 hr) or congestion and 4) decreasing flow (18 to 24 hr) which are appropriately color-coded as shown below. By comparing the gradients and feature space outputs of each model across these flow regimes, we aim to discern their specific characteristics and differences.
Figure <ref> shows the color-maps of gradients normalized across time for the three models corresponding to different input vectors over a period of 24 hours (i.e. 288 samples in intervals of 5 minutes). Note that the x-axis represents the feature index, which is the historical flow data from the previous time steps with the latest flow data appearing at the right-hand-side end, whereas the y-axis corresponds to different times of the day. The model gradients are compared to identify patterns that could lead to higher generalizability of the spectral normalized models.
The regular models exhibit higher gradients, indicating increased sensitivity of the model outputs to historical traffic flows, particularly during the low flow regime (0 to 6 hr) and the increasing flow regime (6 to 8 hr). The layer normalized models demonstrate slightly lower gradients compared to the regular models, while the spectral normalized models exhibit the lowest gradients.
The increased sensitivity to specific data patterns in regular and layer normalized models leads to larger variations in predictions and, consequently, wider uncertainty bands. On the other hand, the presence of added regularization in spectral normalization allows the model to focus more on generic trends rather than specific data characteristics. As a result, the spectral normalized model demonstrates relatively consistent gradient characteristics throughout the day, showing less sensitivity to different inputs across traffic regimes.
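The gradient maps discussed here can be obtained with automatic differentiation; a minimal TensorFlow sketch is shown below, where the per-sample normalization scheme is an illustrative choice rather than the exact one used for the figures.

```python
import numpy as np
import tensorflow as tf

def normalized_input_gradients(model, x_window):
    """Gradient of the predicted mean with respect to the input flow history,
    normalized per sample for visualization."""
    x = tf.convert_to_tensor(x_window, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        mean_pred = model(x, training=False)[:, 0]     # predicted mean output
    grads = tape.gradient(mean_pred, x).numpy()
    scale = np.abs(grads).max(axis=(1, 2), keepdims=True) + 1e-12
    return np.abs(grads) / scale
```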
Further, feature-space representations of the outputs from the penultimate layers of the models were visually compared in Figure <ref> to understand their relation with the gradients explored before. Based on the model configuration, the six dimensional feature outputs were suitably reduced to two dimensions using a non-linear dimension reduction technique, t-Stochastic Neighbor Embedding <cit.>.
We have observed that the outputs from the regular dropout model exhibit high dispersion in feature space for data within the same traffic states, indicating sensitivity to specific data and consequently, limited generalizability.
Particularly, we find that the feature outputs demonstrate significant dispersion in space for low- and increasing flow states, which aligns with the higher gradients observed in these regimes.
In Table <ref>, we compare the variances of outputs belonging to specific traffic regimes in the 6-dimensional feature space for each model.
Similar trends are observed for the layer and spectral normalized models, although these model outputs show localized behavior and less sensitivity. A notable improvement in feature space localization is observed for the spectral normalized model. This enhanced localization of traffic data from different flow regimes in the feature space has important implications for the model's learning process and contributes to the stability and improved generalizability of the spectral normalized model. This characteristic allows the model to handle data perturbations more effectively and provides increased robustness and reliability in capturing different traffic patterns.
These trends hold true across various out-of-distribution datasets, although we have omitted the detailed results for brevity. This aspect is particularly valuable in scenarios where it is not feasible to train models specifically for a given location due to limited data availability. In such cases, models trained and validated on abundant data can be effectively transferred for use in different domains, such as a station with distinct characteristics.
§ CONCLUSIONS
Training deep learning (DL) models for traffic prediction that exhibit high generalizability across different datasets is of significant importance. While the objective of traffic management is to predict future traffic states at various locations, it is often not feasible to train location-specific prediction models due to limited data availability or budget constraints. Therefore, traffic prediction models with higher generalizability could be used to predict traffic conditions at locations where data is scarce.
Moreover, a significant drawback of many existing approaches is their failure to provide forecasts with uncertainty estimates, which are essential for effective traffic operations and control. In this study, a spectral normalized Bayesian LSTM is proposed for uncertainty estimation in traffic flow prediction, aiming to achieve higher generalizability. The performance of this model is compared with other baselines, including a model without normalization and one with layer normalization.
The robustness and generalizability of the proposed model were evaluated using flow data with varying degrees of dissimilarity with the training data. The results suggest that the spectral normalized model offers considerably lower uncertainty estimates and generalizes better for unseen datasets compared to the other baseline models considered.
However, the prediction performance of the spectral normalized model is sometimes marginally compromised due to the added regularization inherent in spectral normalization. The improved generalizability of the spectral normalized model can be attributed to its enhanced resilience to data perturbation, due to an upper bound on performance drop caused by such perturbations. As a result, spectral normalized models trained and validated on data from one location can be suitably transferred for use in a different domain with varying traffic patterns.
Overall, the enhanced generalizability of the spectral normalized models offers promising opportunities for deploying models across diverse traffic scenarios, ensuring reliable predictions, and minimizing the need for extensive training data at every specific location.
|
http://arxiv.org/abs/2307.04194v1 | 20230709145658 | A Search for AGN sources of the IceCube Diffuse Neutrino Flux | [
"K. McDonough",
"K. Hughes",
"D. Smith",
"A. G. Vieregg"
] | astro-ph.HE | [
"astro-ph.HE"
] |
§ INTRODUCTION
The spectrum of IceCube's high-energy neutrino flux, first detected in 2013 <cit.>, is consistent with a power-law extending from tens of TeV to a few PeV, and with flavor ratios consistent with a pion decay origin <cit.>. While searches for point sources contributing to this neutrino flux have largely produced null results, there have been significant excesses from two specific sources. Evidence for the first source, TXS 0506+056, includes a neutrino event coincident in time and space with the flaring blazar detected by Fermi and MAGIC with a significance of 3.5 σ <cit.>. More recently, NGC 1068, a nearby non-blazar active galactic nuclei (AGN), was identified as a potential neutrino source with a significance of 4.2 σ <cit.>. However, these sources alone do not explain the majority of the diffuse neutrino flux, indicating that neutrinos in this energy range are produced by a large number of extragalactic sources <cit.>.
There have been various proposals for possible sources of the diffuse neutrino flux. Gamma-ray bursts (GRBs) <cit.>, star-forming galaxies <cit.>, both blazar and non-blazar AGN <cit.>, and fast radio bursts <cit.> have all been considered. However, various models have been ruled out as the primary source for IceCube's neutrino flux. In particular, there is an observed lack of correlation with blazar AGN, effectively eliminating flaring blazars as the primary source of the diffuse neutrino flux <cit.>. Starburst and other star-forming galaxies are also unable to account for the entirety of this signal without exceeding the measured intensity of the isotropic gamma-ray background <cit.>.
Our work in this paper consists of three analyses conducted using the publicly available data set from IceCube, containing 10 years of muon track neutrino events <cit.>. First, we present an update of an earlier analysis that used IceCube's three-year public data set <cit.> to investigate whether a significant fraction of the neutrino flux could originate from blazar or non-blazar AGN. In this updated analysis, we compare IceCube neutrino events from the 10 year catalog to sources in the Fourth Catalog of AGN detected by the Fermi Large Area Telescope (the 4LAC catalog) <cit.>, and our results are consistent with both the original work and an updated work <cit.>.
We then go on to incorporate neutrino energy information, new in IceCube's 10 year catalog, and perform an energy-dependent analysis with a likelihood formulation that includes the reconstructed energy of the neutrino, focusing on the 4LAC sources in the Northern sky using the approach outlined in <cit.>. We restrict this analysis to the Northern sky because the IceCube data set is dominated by the atmospheric muon background at declination angles less than 10^∘. Without a dedicated cosmic ray simulation to characterize the energy dependence of the muon background, we choose to only look at the Northern sky where this background is negligible. While this analysis only focuses on the Northern sky, the addition of information about the neutrino energy distribution increases the sensitivity of the search overall; previous work found that with energy information included, the total number of events needed for a 5 σ significance is reduced by a factor of two <cit.>. The same study also outlines a process that can be repeated in the Southern sky with a more robust energy reconstruction simulation that characterizes the muon background.
The third analysis presented here is a time-dependent threshold analysis of the MOJAVE XV radio catalog <cit.>. The MOJAVE XV catalog consists of observations in the 15 GHz band of 437 AGN over the course of 20 years, between 1996 and 2016. The fluctuations seen in the radio emission from these galaxies could be correlated with high-energy neutrino emission; p-γ interactions in the sources could create both neutrinos at high energies and photons at lower energies after the initial
high-energy photon is lost through a chain of pair production <cit.>.
Radio AGN in the MOJAVE radio catalog have previously been investigated as a possible source class for the IceCube neutrino flux in a time-independent analysis <cit.>, and our addition of a time-dependent likelihood function decreases the background and thus increases the sensitivity of such a search. The multiple epochs present in the MOJAVE data make it an ideal catalog for investigating a possible correlation between neutrinos and radio emission.
We apply the process outlined in <cit.> for an individual source to the entire MOJAVE catalog.
§ METHOD OVERVIEW
Each analysis presented here uses data from IceCube's data release consisting of muon track events from April 2008 to July 2018 <cit.>. The data is from the 40 string detector in 2008, the 59 string detector in 2009, the 79 string detector in 2010, and the completed 86 string detector for the remaining 7 years. The entire data set contains 1,134,450 muon track events. Each event has a reported direction, angular resolution, time of event, and “energy proxy,” which is related to the energy deposited in the detector. Each year of this data set includes an effective area for the detector as determined by simulation, as a function of declination and neutrino energy.
To test for evidence of a neutrino signal from an individual point source, we follow the approach outlined in <cit.>. The likelihood that a given source results in n_s events, out of a total N recorded in the detector, is given by:
ℒ (n_s) = ∏^N_i[ n_s/N S_i (|x_s-x_i|)+( 1 - n_s/N) B_i (sinδ_i ) ],
where S_i and B_i are the signal and background probability distribution functions (PDFs), respectively. The signal PDF consists of individual spatial, energy, and time PDFs:
S_i = P^sig_i(σ_i, x_i|x_s) ·ϵ^sig_i(E_i, δ_i|γ) · T^sig_i,
where P^sig_i, ϵ^sig_i and T^sig_i are the respective spatial, energy, and time components for the signal PDF. The background PDF is defined similarly by:
B_i = P^bkg_i(σ_i, x_i|x_s) ·ϵ^bkg_i(E_i, δ_i|γ) · T^bkg_i,
where again P^bkg_i, ϵ^bkg_i, and T^bkg_i are the respective spatial, energy, and time components for the background PDF. For the work presented in this paper, the spatial component is included in all three analyses, while the energy and time components are only used in the energy-dependent analysis and the MOJAVE catalog study, respectively. In the following sections, the PDFs used in each analysis will be described.
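To make the structure of this likelihood concrete, the following minimal Python sketch maximizes the likelihood of equation (<ref>) for a single source, assuming the per-event signal and background PDF values S_i and B_i have already been evaluated; the function names and the use of scipy are our own illustrative choices, not the analysis pipeline itself.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(n_s, S, B):
    """-ln L(n_s) for arrays of per-event signal (S) and background (B) PDF values."""
    N = len(S)
    f = n_s / N
    return -np.sum(np.log(f * S + (1.0 - f) * B))

def fit_ns(S, B):
    """Maximize the point-source likelihood with respect to n_s, restricted to [0, N]."""
    res = minimize_scalar(neg_log_likelihood, bounds=(0.0, float(len(S))),
                          args=(S, B), method="bounded")
    return res.x
```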
§ A SEARCH FOR CORRELATIONS WITH BLAZAR AND NON-BLAZAR AGN USING A SPATIALLY-DEPENDENT LIKELIHOOD
We first search for correlations with blazar and non-blazar AGN using a spatially-dependent likelihood.
We define the spatial PDF as:
P^sig = 1/2πσ^2_ie^- |x_s-x_i|^2/2 σ^2_i
P^bkg = 𝒫_B (sinδ_i)/2π,
where x_s is the direction to the source, x_i is the reported direction of the event, and σ_i is the uncertainty associated with each event, given in terms of angular resolution provided by the data set. The function 𝒫_B is equal to the fraction of events in the data set averaged across a band of ± 6^∘ in declination, δ, around a given source. We only consider sources with declination between ± 87^∘ due to the limited amount of solid angle near the poles with which to characterize the background PDF. To find the value of n_s for a given source, the likelihood function is maximized with respect to the free parameter n_s, as outlined in <cit.>.
The statistical significance of a neutrino point source over a background-only hypothesis is calculated using the following test statistic:
2Δℒ(n_s) = 2 [lnℒ(n_s) - lnℒ(0)],
where ℒ(0) is the likelihood for the background-only hypothesis in which there are no signal events from a given direction. From this, the p-value can be calculated by performing an integral over a χ^2 distribution with one degree of freedom.
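Continuing the sketch above, the test statistic and the corresponding pre-trial p-value can then be evaluated as follows (the χ^2 integral is performed with the scipy survival function, again an illustrative choice):

```python
from scipy.stats import chi2

def source_significance(S, B):
    """Best-fit n_s, test statistic 2*Delta(ln L), and pre-trial p-value for one source."""
    n_s_hat = fit_ns(S, B)
    ts = 2.0 * (neg_log_likelihood(0.0, S, B) - neg_log_likelihood(n_s_hat, S, B))
    ts = max(ts, 0.0)               # guard against tiny negative values at the boundary
    p_value = chi2.sf(ts, df=1)     # integral over a chi^2 distribution with one dof
    return n_s_hat, ts, p_value
```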
We go on to constrain the fraction of IceCube's diffuse neutrino flux that originates from known classes of astrophysical objects using three different weighting hypotheses for the expected neutrino emission from a given source class:
* Gamma-Ray Scaling: The neutrino flux from a given source is proportional to the gamma-ray flux from that source. The gamma-ray flux in the 4LAC catalog is in units of photons between 1-100 GeV per area, per time. This hypothesis would be expected if the gamma-ray emission is primarily produced from hadronic interactions, resulting in a fixed ratio of photons and neutrinos.
* Geometric Scaling: The neutrino flux of a given source is proportional to 1/D_L^2. This hypothesis assumes that the neutrino luminosity of a source is only correlated to distance between the source and the detector. This calculation can only be done with sources with measured redshifts, so some sources are excluded in the analysis under this hypothesis.
* Flat Scaling: The neutrino flux from a given source is uncorrelated to any other information.
These hypotheses remain unchanged from our previous two studies <cit.>.
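The three hypotheses amount to assigning different relative weights to the catalog sources. A minimal sketch of this weighting is given below; the input arrays for the 1-100 GeV photon flux and the luminosity distance are hypothetical placeholders:

```python
import numpy as np

def source_weights(gamma_ray_flux, lum_dist, hypothesis):
    """Relative neutrino-flux weights for a source catalog under the three scaling hypotheses."""
    if hypothesis == "gamma":        # proportional to the 1-100 GeV photon flux
        w = np.asarray(gamma_ray_flux, dtype=float)
    elif hypothesis == "geometric":  # proportional to 1/D_L^2 (requires measured redshifts)
        w = 1.0 / np.asarray(lum_dist, dtype=float) ** 2
    elif hypothesis == "flat":       # every source weighted equally
        w = np.ones(len(gamma_ray_flux))
    else:
        raise ValueError("unknown hypothesis")
    return w / w.sum()               # normalize so the weights sum to one
```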
§.§ Results
In Fig. <ref>, we show an all-sky map of the likelihood of a neutrino point source, √(2Δlnℒ), in terms of right ascension and declination. The scan was performed in steps of 0.2^∘, and at each point we show the value of n_s that maximizes the test statistic defined in Eq. <ref>. In Fig. <ref> we show the distribution of this test statistic across the sky. When excluding the points containing NGC 1068, a 4.2 σ source discovery by IceCube <cit.>, the distribution is consistent with a normal distribution. In Table <ref>, we present the most significant locations in the sky other than NGC 1068, along with the associated likelihoods and pre- and post-trials p-values.
The 2863 sources in the Fermi 4LAC catalog are unchanged from our previous analysis <cit.> and consist of 2796 blazars and 63 non-blazars. The blazars can be further broken down into 658 flat spectrum radio quasars (FSRQs), 1067 BL Lacs, and 1071 “blazars of unknown origin.” We do not consider sources within 3^∘ of the poles.
The process follows that outlined in <cit.>. When considered individually, no source has a statistically significant likelihood after accounting for the appropriate trials factor. When comparing the properties of the sources with the highest likelihoods, we again find no clear features that set these sources apart from other sources in the 4LAC catalog more generally.
Finding no statistically significant individual sources, we place an upper limit on the neutrino flux from these source classes. In Fig. <ref>, the limits from this analysis on the contribution of blazar and non-blazar AGN to the IceCube diffuse neutrino flux are shown. We separately show results for sources identified as “non-variable” by applying cuts to sources with a variability index below 18.48, as reported by the Fermi Collaboration. Of the original 2796 blazars and 63 non-blazars, we identify 1674 and 47 “non-variable” sources, respectively.
In all cases, we apply the appropriate completeness factors to account for the incompleteness of the source catalog <cit.>. For blazars, we apply a completeness factor of 1.4, while for non-blazar AGN we apply a factor of 50.6 (154.6 for the non-variable subsample). We place limits using the three hypotheses outlined in Section <ref>, and for two choices of spectral index: α=2.5, based on the spectral flux measured by IceCube, and α=2.0, the value predicted assuming Fermi acceleration.
From the upper limits presented, we conclude that blazars can produce no more than 13% of the diffuse neutrino flux, which is consistent with our previous analysis <cit.> and a publication earlier this year <cit.>. This is a slightly more restrictive constraint than our previous result, a direct result of the increased livetime of the 10-year IceCube data. Additionally, we find that under the flat scaling hypothesis, non-variable, non-blazar AGN could produce the entirety of the IceCube diffuse neutrino flux, and that in the case of the other two hypotheses, non-variable, non-blazar AGN could still produce a majority of the flux.
§ AN ENERGY-DEPENDENT CONSTRAINT ON THE NEUTRINO FLUX FROM BLAZAR AND NON-BLAZAR AGN
In addition to the spatial-only likelihood analysis described above, we have also performed an analysis using an energy-dependent likelihood on data from the Northern hemisphere alone, which, unlike the Southern hemisphere, is not background-limited by atmospheric muons.
We use the “energy proxy” for the muon track events in the IceCube data set, and adopt the smearing matrices from the IceCube data release and a spectral index of α=2 to create a function that describes the likelihood of obtaining the reconstructed neutrino energy given the assumed spectral index, as outlined in <cit.>. This likelihood is normalized to one and its distribution as a function of energy can be seen in the left panel of Fig. <ref>. To speed up the computational process, only events with an energy proxy greater than 100 GeV were included.
Most of the events in this data set are not astrophysical neutrinos; the Northern hemisphere data set is dominated by atmospheric neutrinos, which are a background for this analysis. Therefore, we use the data set itself to construct the energy-dependent component of the likelihood, dependent on the declination of the point in the sky being tested.
§.§ Results
We create a test statistic based on the new likelihood formulation and find the updated distribution of the test statistic shown in the right panel of Fig. <ref>.
This distribution falls along a normal distribution, consistent with no significant source excesses. The most significant locations on the sky in this search are presented in Table <ref>. NGC 1068 falls just below the horizon and is thus excluded from this analysis.
We again compare against the Fermi 4LAC catalog with the same source hypotheses outlined in Section <ref>.
Because we only analyze half the sky, the number of sources in our catalog is decreased by roughly 50%; however, we also expect the background to decrease by more than 50%, as the atmospheric events in the Southern hemisphere contribute significantly to the overall background rate at the highest energies.
The limits set by this energy-dependent coincident search in the Northern hemisphere with the 10-year data set are even more restrictive than those of the spatial-only analysis presented in Section <ref>.
We use the same completeness factor as in the spatial-only analysis with an additional factor of 2 that accounts for only looking in the Northern hemisphere.
The resulting 95% upper limits are summarized in Fig. <ref> for each of the neutrino flux scaling hypotheses. At most, non-variable blazar AGN could contribute up to 9% of the neutrino flux. The improved limits for blazar AGN again suggest that the majority of the IceCube neutrino flux is likely originating from a different source class.
The limits set for non-blazar AGN are also slightly improved after folding in the event energy. Even still, non-blazar, non-variable AGN could contribute up to 95% of the IceCube diffuse neutrino flux under the flat scaling hypothesis. This suggests that non-blazar AGN are still a viable candidate source class.
§ A SEARCH FOR CORRELATIONS WITH MOJAVE RADIO AGN
We apply a triggered time-dependent analysis on the MOJAVE catalog, which contains measurements of the radio emission from various AGN over a period of two decades, including one decade of overlap with the IceCube observatory. The time-dependent component of the likelihood T^sig is created by calculating the mean value of the flux density for all sources in the MOJAVE catalog after being scaled appropriately with respect to their luminosity distance from Earth. Then, sources are identified as flaring if their flux density at a given time is greater than 2 σ above the average flux density.
We perform the analysis using 436 radio-loud AGN from the MOJAVE catalog. Due to detector location and limits, the catalog only consists of sources higher than -30^∘ in declination. Each source in the catalog has a minimum of 5 flux density recordings at different times.
Two sources are eliminated from the analysis, MOJAVE source 0415+379 and source 2200+420, as all of their flux density recordings were above the 2 σ threshold and can thus be categorized as always flaring. These two sources could be analyzed independently of this temporal analysis. This methodology was chosen to fairly assess all of the sources in the MOJAVE catalog, which can individually have as few as five data points and as many as 30 over the two decades of data.
Once the threshold on the catalog is set, any flux density point measured under the threshold is given a time likelihood value of 0. Above the threshold, we bin the data into 0.1 year bins, as outlined in <cit.> as the smallest reasonable bin size for Radio AGN. This was selected to ensure that there is minimal likelihood overlap, as larger time bins could result in neutrinos and flaring sources being erroneously correlated.
After binning the flux density, the total flux density in all the bins is normalized to one. This creates the time-dependent portion of the signal PDF, where the likelihood for each muon track neutrino event is assigned based on the trigger time of the IceCube event. This likelihood is used in combination with the spatially-dependent likelihood described in Section <ref>. The delay between the arrival of the neutrino and the photons from the source is on the scale of seconds and can be ignored, as it is absorbed in the 0.1 year binning.
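A minimal sketch of this construction for a single source is shown below; the analysis window edges and the light-curve arrays are illustrative inputs, while the 0.1 yr bin width and the flaring threshold follow the description above.

```python
import numpy as np

def time_pdf(times, flux, threshold, t_start, t_end, bin_width=0.1):
    """Normalized time-dependent signal PDF T^sig for one source.
    times, flux : epochs (decimal years) and flux densities of the radio light curve
    threshold   : flaring threshold (mean + 2 sigma of the distance-scaled catalog fluxes)
    t_start, t_end : edges of the analysis window (these set the number of 0.1 yr bins)."""
    times = np.asarray(times, dtype=float)
    flux = np.asarray(flux, dtype=float)
    edges = np.arange(t_start, t_end + bin_width, bin_width)
    hist = np.zeros(len(edges) - 1)
    flaring = flux > threshold                        # epochs below threshold get zero weight
    idx = np.digitize(times[flaring], edges) - 1
    good = (idx >= 0) & (idx < len(hist))
    np.add.at(hist, idx[good], flux[flaring][good])   # bin the flux density itself
    if hist.sum() > 0:
        hist /= hist.sum()                            # normalize the total flux density to one
    return edges, hist

def event_time_likelihood(t_event, edges, hist):
    """T^sig value assigned to a neutrino event based on its trigger time."""
    i = np.digitize(t_event, edges) - 1
    return hist[i] if 0 <= i < len(hist) else 0.0
```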
The time-dependent component of the background PDF is independent of seasonal variations <cit.>, but is dependent on the declination and year of the neutrino event. Due to the construction of IceCube during the recording of data, the first three years of data have a higher fraction of atmospheric muons in the data set than later years. Thus, there are more points in the data set at <0^∘ declination in the first three years than in the rest of the catalog. Due to this, neutrino events in the first three years of the catalog are given a background PDF that accounts for this difference, which is then divided by the number of 0.1 year time bins covered by the analysis (86).
§.§ Results
In Fig. <ref>, we show the likelihood distribution of the MOJAVE sources tested. After accounting for trials factors, no statistically significant sources are identified.
We also consider each source individually to set independent limits on the expected flux of neutrinos from each source. These limits are shown in Fig. <ref>.
The list of these sources with their reference name, coordinates, and upper limit neutrino flux density can be found in Appendix <ref>. Because we only consider individual sources, rather than source classes, no completeness factor is required.
The MOJAVE catalog includes sources in four different subclasses: quasars, non-blazar AGN, blazar AGN, and Seyfert galaxies. Across each of these subclasses, there is no obvious relationship or trend between source type and neutrino flux limit.
§ CONCLUSION
The analyses presented here utilize spatial, energy-dependent, and time-dependent likelihood functions to compare the IceCube diffuse neutrino flux to the Fermi 4LAC and MOJAVE catalogs. We confirm that blazar AGN are unlikely to explain the entirety of the IceCube neutrino flux, while non-blazar AGN are not ruled out. Adding an energy dependence to the previously described spatial analysis tightened constraints on the flux fraction for all source classes tested. The time-dependent analysis with the MOJAVE catalog sets neutrino flux limits for flaring sources broken down by source class.
While we are unable to definitively confirm a specific origin for the diffuse neutrino flux, we are hopeful that future analyses will utilize increased livetime, more complete source catalogs, and a substantive cosmic ray simulation to improve these results further. Additionally, considering specific theoretical models for source neutrino production as a function of neutrino energy could be included to further constrain possible sources. The time-dependent analysis framework could be used in tandem with other time-dependent catalogs as they become available.
§ APPENDIX: MOJAVE SOURCE LIMITS
Source Number Flux Upper Limit [s^-1cm^-2] MOJAVE Reference Source Class
1 2.12e-11 1145-071 QSO
2 1.82e-11 0106+013 QSO
3 1.75e-11 1329-049 QSO
4 1.34e-11 1149-084 QSO
5 1.29e-11 0414-189 QSO
6 1.21e-11 0203-120 QSO
7 9.37e-12 0847-120 QSO
8 8.75e-12 0752-116 QSO
9 8.72e-12 1236+077 QSO
10 7.51e-12 0859-140 QSO
11 6.58e-12 2345-167 QSO
12 5.9e-12 0027+056 QSO
13 5.69e-12 1047+048 QSO
14 5.61e-12 0607-157 QSO
15 3.96e-12 1908-201 QSO
16 3.92e-12 1504-166 QSO
17 3.48e-12 1127-145 QSO
18 2.85e-12 1341-171 QSO
19 2.8e-12 1622-253 QSO
20 2.42e-12 2328-220 QSO
21 2.36e-12 1920-211 QSO
22 2.28e-12 1244-255 QSO
23 1.93e-12 0142-278 QSO
24 1.78e-12 0256+075 QSO
25 1.76e-12 1034-293 QSO
26 1.75e-12 0742+103 QSO
27 1.65e-12 2331+073 QSO
28 1.58e-12 1551+130 QSO
29 1.54e-12 0119+115 QSO
30 1.18e-12 0229+131 QSO
31 8.45e-13 2136+141 QSO
32 7.11e-13 0710+196 QSO
33 6.03e-13 1441+252 QSO
34 4.15e-13 2113+293 QSO
35 3.84e-13 2223+210 QSO
36 3.34e-13 1324+224 QSO
37 3.33e-13 1049+215 QSO
38 2.96e-13 1611+343 QSO
39 2.96e-13 2209+236 QSO
40 2.92e-13 0716+332 QSO
41 2.76e-13 1633+382 QSO
42 2.51e-13 0109+351 QSO
43 2.36e-13 0650+453 QSO
44 2.35e-13 1638+398 QSO
45 2.31e-13 0110+318 QSO
46 2.22e-13 1417+385 QSO
47 2.12e-13 1758+388 QSO
48 1.94e-13 1015+359 QSO
49 1.76e-13 1030+415 QSO
50 1.66e-13 0917+449 QSO
51 1.42e-13 2005+403 QSO
52 1.36e-13 0954+556 QSO
53 1.18e-13 0859+470 QSO
54 1.18e-13 1716+686 QSO
55 1.1e-13 0850+581 QSO
56 1.09e-13 0804+499 QSO
57 1.02e-13 1602+576 QSO
58 9.27e-14 0646+600 QSO
59 8.73e-14 0224+671 QSO
60 7.52e-14 1642+690 QSO
61 7.43e-14 1532+016 QSO
62 6.92e-14 0113-118 QSO
63 6.85e-14 0837+012 QSO
64 6.8e-14 1128-047 QSO
65 6.44e-14 0906+015 QSO
66 6.43e-14 0805-077 QSO
67 6.4e-14 0015-054 QSO
68 6.37e-14 0420-014 QSO
69 6.32e-14 2258-022 QSO
70 6.26e-14 1406-076 QSO
71 5.45e-14 1118-056 QSO
72 4.51e-14 1741-038 QSO
73 4.31e-14 0440-003 QSO
74 4.15e-14 2320-035 QSO
75 4.02e-14 0336-019 QSO
76 3.15e-14 0539-057 QSO
77 2.26e-14 0743-006 QSO
78 3.35e-11 0636+680 Non-Blazar AGN
79 2.5e-11 0212+735 Non-Blazar AGN
80 2.42e-11 1959+650 Non-Blazar AGN
81 2.36e-11 0954+658 Non-Blazar AGN
82 2.29e-11 1803+784 Non-Blazar AGN
83 1.96e-11 0238+711 Non-Blazar AGN
84 1.82e-11 0454+844 Non-Blazar AGN
85 1.81e-11 1027+749 Non-Blazar AGN
86 1.48e-11 1542+616 Non-Blazar AGN
87 1.31e-11 1557+565 Non-Blazar AGN
88 1.05e-11 1726+455 Non-Blazar AGN
89 9.4e-12 0846+513 Non-Blazar AGN
90 8.49e-12 0128+554 Non-Blazar AGN
91 8.41e-12 0708+506 Non-Blazar AGN
92 8.08e-12 0806+524 Non-Blazar AGN
93 7.58e-12 1656+482 Non-Blazar AGN
94 6.75e-12 1011+496 Non-Blazar AGN
95 6.55e-12 2344+514 Non-Blazar AGN
96 5.9e-12 1738+476 Non-Blazar AGN
97 5.41e-12 1206+416 Non-Blazar AGN
98 4.89e-12 1652+398 Non-Blazar AGN
99 4.86e-12 1722+401 Non-Blazar AGN
100 4.85e-12 0603+476 Non-Blazar AGN
101 4.65e-12 1250+532 Non-Blazar AGN
102 4.39e-12 1823+568 Non-Blazar AGN
103 4.39e-12 0613+570 Non-Blazar AGN
104 4.21e-12 0814+425 Non-Blazar AGN
105 3.98e-12 1418+546 Non-Blazar AGN
106 3.96e-12 0912+297 Non-Blazar AGN
107 3.7e-12 0321+340 Non-Blazar AGN
108 3.43e-12 1308+326 Non-Blazar AGN
109 3.31e-12 0621+446 Non-Blazar AGN
110 2.92e-12 2023+335 Non-Blazar AGN
111 2.79e-12 1040+244 Non-Blazar AGN
112 2.42e-12 1215+303 Non-Blazar AGN
113 2.15e-12 0133+388 Non-Blazar AGN
114 1.95e-12 1404+286 Non-Blazar AGN
115 1.84e-12 0518+211 Non-Blazar AGN
116 1.56e-12 0619+334 Non-Blazar AGN
117 1.36e-12 0235+164 Non-Blazar AGN
118 1.32e-12 2201+171 Non-Blazar AGN
119 1.12e-12 0722+145 Non-Blazar AGN
120 1.05e-12 1741+196 Non-Blazar AGN
121 9.2e-13 1514+197 Non-Blazar AGN
122 9e-13 1717+178 Non-Blazar AGN
123 8.84e-13 2013+163 Non-Blazar AGN
124 7.88e-13 0859+210 Non-Blazar AGN
125 5.86e-13 1228+126 Non-Blazar AGN
126 4.65e-13 0446+112 Non-Blazar AGN
127 4.36e-13 2247-283 Non-Blazar AGN
128 4.22e-13 1940+104 Non-Blazar AGN
129 4.02e-13 0754+100 Non-Blazar AGN
130 2.85e-13 1811+062 Non-Blazar AGN
131 2.67e-13 1038+064 Non-Blazar AGN
132 2.62e-13 0823-223 Non-Blazar AGN
133 1.72e-13 0403-132 Non-Blazar AGN
134 1.6e-13 2128-123 Non-Blazar AGN
135 1.45e-13 0301-243 Non-Blazar AGN
136 1.32e-13 0823+033 Non-Blazar AGN
137 1.28e-13 0903-088 Non-Blazar AGN
138 1.26e-13 0845-068 Non-Blazar AGN
139 8.9e-14 0111+021 Non-Blazar AGN
140 7.62e-14 0808+019 Non-Blazar AGN
141 6.39e-14 0939-077 Non-Blazar AGN
142 6.38e-14 0723-008 Non-Blazar AGN
143 6.19e-14 1514+004 Non-Blazar AGN
144 5.18e-14 0422+004 Non-Blazar AGN
145 4.58e-14 2131-021 Non-Blazar AGN
146 3.12e-11 1849+670 Seyfert_1
147 2.91e-11 1458+718 Seyfert_1
148 2.57e-11 2043+749 Seyfert_1
149 2.56e-11 0106+612 Blazar
150 1.73e-11 2021+614 Seyfert_2
151 1.69e-11 1700+685 Seyfert_1
152 1.45e-11 1030+611 Seyfert_1
153 1.44e-11 0241+622 Seyfert_1
154 1.37e-11 0831+557 Seyfert_2
155 1.29e-11 1637+574 Seyfert_1
156 5.25e-12 1828+487 Seyfert_1
157 5.17e-12 1957+405 Seyfert_2
158 5.16e-12 0309+411 Seyfert_1
159 4.47e-12 0010+405 Seyfert_1
160 3.11e-12 0415+379 Seyfert_1
161 2.46e-12 1607+268 Seyfert_2
162 2.4e-12 1901+319 Seyfert_1
163 2.35e-12 0738+313 Seyfert_1
164 1.2e-12 0202+149 Blazar
165 9.49e-13 1345+125 Seyfert_2
166 6.77e-13 0838+133 Seyfert_1
167 5.37e-13 2141+175 Seyfert_1
168 4.48e-13 1622-297 Blazar
169 3.93e-13 0528+134 Blazar
170 2.88e-13 1502+106 Blazar
171 1.76e-13 0430+052 Seyfert_1
172 1.29e-13 1302-102 Seyfert_1
173 9.81e-14 1510-089 Blazar
174 5.87e-14 1502+036 Seyfert_1
175 4.23e-14 0946+006 Seyfert_1
|
http://arxiv.org/abs/2307.03915v1 | 20230708063742 | Galaxy-dark matter connection of photometric galaxies from the HSC-SSP Survey: Galaxy-galaxy lensing and the halo model | [
"Navin Chaurasiya",
"Surhud More",
"Shogo Ishikawa",
"Shogo Masaki",
"Daichi Kashino",
"Teppei Okumura"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
We infer the connection between the stellar mass of galaxies from the Subaru Hyper Suprime-Cam (HSC) survey, and their dark matter halo masses and its evolution in two bins of redshifts between [0.3, 0.8]. We use the measurements of the weak gravitational lensing signal of the galaxies using background galaxies from the Year 1 catalog of galaxy shapes from the HSC survey. We bin the galaxies in stellar mass with varying thresholds ranging from 8.6 ≤log[ M_*/(h^-2M_⊙)] ≤ 11.2 and use stringent cuts in the selection of source galaxies to measure the weak lensing signal. We present systematic and null tests to demonstrate the robustness of our measurements. We model these measurements of the weak lensing signal together with the abundance of galaxies in the halo occupation distribution framework. For each stellar mass threshold bin, we obtain constraints on the halo occupation parameters of central galaxies M_ min and σ_log M, which correspond to the halo mass at which central galaxies in the threshold sample reach half occupancy, and its scatter, respectively, along with parameters that describe the occupation of the satellite galaxies. The measurements of abundance and weak lensing individually constrain different degeneracy directions in the M_ min and σ_log M plane, thus breaking the degeneracy in these parameters. We demonstrate that the weak lensing measurements are best able to constrain the average central halo masses. We compare our measurements to those obtained using the abundance and clustering of these galaxies as well as the subhalo abundance matching measurements and demonstrate qualitative agreement. We find that the galaxy-dark matter connection does not vary significantly between the redshift bins we explore in this study. Uncertainties in the photometric redshift of the lens galaxies imply that more efforts are required to understand the true underlying stellar mass-halo mass relation of galaxies and its evolution over cosmic epoch.
galaxies: evolution – galaxies: haloes – (cosmology:) large-scale structure of Universe - gravitational lensing: weak - cosmology: observations
§ INTRODUCTION
In the standard cosmological model, structure formation in the Universe is governed by the interplay between dark matter, which enhances overdensities of matter distribution, and dark energy, which acts to hinder such growth. Dark matter halos form the basic unit of the large scale structure, and their abundance is highly sensitive to this interplay between the cosmological parameters <cit.>. The formation and evolution of galaxies in dark matter halos is a result of complex astrophysical processes related to the formation and evolution of stars, its effect on the gas, the feedback from supermassive black holes at their centers, as well as, the mergers of galaxies <cit.>. Direct inference of the connection between dark matter halos and galaxies is thus important to understand these astrophysical processes <cit.>. In turn, an accurate determination of this connection can help in the inference of cosmological parameters <cit.>.
The stellar mass contained within galaxies reflects the integrated star formation efficiency of dark matter halos of various masses. It is now well established that the star formation efficiency of halos peaks around intermediate mass halos of around 10^12 <cit.> and halos on either side of this are less efficient due to various forms of feedback associated with star formation at the low mass end and supermassive black holes at the high mass end. The evolution of the stellar mass-halo mass relation can thus provide insights into how this star formation efficiency changes with time <cit.>.
Various observational techniques have been used to probe the dark matter halos of galaxies. One of the techniques that directly probes the halo masses beyond few tens of is the inference of masses using the kinematics of satellite galaxies in dark matter halos <cit.>. Satellite kinematics however has to rely on the assumption of virial equilibrium, the anisotropy of the dispersion in the orbits of satellite galaxies in dark matter halos, velocity bias which could arise from the differences in the distribution of matter compared to satellite galaxies, and accurate determination of the interloper galaxies which could masquerade as satellites. Indirect techniques such as subhalo abundance matching <cit.> instead rely on the ansatz of a monotonous relation between the stellar mass and halo masses of galaxies, along with a scatter in addition to a fixed set of cosmological parameters which determines the (sub)halo abundances. The technique of matching these abundances to the abundance of galaxies measured as the stellar mass function, allows an inference of the stellar mass-halo mass relation <cit.>. The clustering of galaxies on large scales can also indirectly provide information about this relation <cit.> by utilizing the dependence of the large scale bias of halos on the mass of halos <cit.>.
The weak gravitational lensing signal <cit.> of galaxies provides another direct method to constrain the galaxy-dark matter connection. According to the general theory of relativity, an overdensity of matter warps spacetime in its vicinity in a manner that distorts light bundles from distant background sources traveling toward us. In its weak form, gravitational lensing causes coherent tangential distortions in the shapes of such background galaxies. The distortion in the shape of a single galaxy due to weak lensing is quite small and difficult to disentangle from the intrinsic elliptical shape of its isophotes. A statistical averaging of the shapes of many such background galaxies gets rid of the uncorrelated intrinsic shapes of galaxies and allows the measurement of the coherent shear imprinted on the background galaxies due to weak lensing. Measurements of shapes of galaxies from ground-based imaging data are challenging (see e.g., <cit.>), as atmospheric light propagation and the telescope optics can also corrupt the measurements of shapes of galaxies. A number of tests need to be conducted for residual systematics in weak lensing measurements, but once modelled, the weak lensing signal can also provide constraints on the stellar mass-halo mass relation of galaxies <cit.>.
A number of ongoing weak lensing surveys cover large areas of sky with excellent quality imaging in order to map out the dark matter distribution in the Universe. The Dark Energy Survey (DES)[<http://darkenergysurvey.org>], the Kilo Degree Survey (KiDS)[<http://kids.strw.leidenuniv.nl>], and the Subaru Hyper Suprime-Cam survey (HSC)[<http://hsc.mtk.nao.ac.jp/ssp>] have covered areas that range from 1000 to 5000 sq. degree in this pursuit. Amongst these, the HSC is the deepest and thus allows us to carry out studies of evolution of the connection between galaxies and their dark matter halos that extend over a wide range of stellar masses. In this paper, we use galaxies from the HSC survey along with their stellar mass and photometric redshift estimates from their photometry in order to infer the stellar mass-halo mass relation in two redshift bins, [0.30-0.55] and [0.55-0.80].
In recent works, <cit.> and <cit.>, the clustering and abundance of galaxies have been used to constrain the galaxy-dark matter connection of the same sample of galaxies. The former amongst these studies, model their measurements of the clustering signal using an analytical halo occupation distribution (HOD) framework, while the latter use a modification to the traditional subhalo abundance matching method in order to explain the same observables. These different methodologies can explain the measurements equally well, even though they may not agree on the prescription of how galaxies occupy their dark matter halos and thus predict a different weak lensing signal. Our weak lensing signal (hereafter, WLS) measurement can thus be used as a discriminant for such theoretical models and the assumptions that they rely on.
This paper is organised as follows: We describe the lens and source data in section <ref>. Sec. <ref> describes the abundance data we use to constrain our HOD model and to study the impact of abundances on scaling relations. The formalism of stacked weak lensing signal computations and tests of survey systematics have been detailed in sec. <ref>. We summarise our theoretical HOD modelling formalism and model fitting details in sec. <ref>. Results and inferences are discussed in sec. <ref> and previous studies employing the same datasets have been compared in sec. <ref>. We finally discuss the issues and challenges associated with photometric datasets in inferring galaxy-halo connections and possible future directions of improvements in sec. <ref> and present the summary of the results from this paper in sec. <ref>.
In this paper, we assume a standard 6-parameter flat ΛCDM cosmology with cosmological parameters set by cosmic microwave background observations <cit.>. We use (Ω_m, Ω_Λ, Ω_b, σ_8, n_s, h) = (0.309, 0.691, 0.049, 0.816, 0.967, 0.677), where Ω_m, Ω_Λ, Ω_b denote the matter, dark energy and baryonic density with respect to the critical density of the Universe, σ_8 is related to the variance of density fluctuations on a scale of 8 h^-1 Mpc, n_ s is the power law index of the power spectrum of density fluctuations on large scales, and h is the dimensionless Hubble parameter given by h = H_0/ 100 kms^-1 Mpc^-1. All the distances are measured in comoving units of h^-1 Mpc, and stellar and halo masses are expressed in units of h^-2 M_⊙ and h^-1 M_⊙, respectively. Throughout the paper, we use log to denote 10-based logarithms.
§ DATA
§.§ HSC-SSP survey
The Hyper Suprime-Cam instrument <cit.> is a wide field imaging camera (1.5 deg FoV diameter) mounted on the prime focus of the 8.2m Subaru Telescope located at the summit of Maunakea in Hawaii. The Hyper Suprime-Cam survey, a Subaru Strategic Program <cit.>, is a three-layered (wide, deep and ultra-deep), multi-band (grizy plus 4 narrow-band filters) imaging survey. The HSC survey has efficiently imaged ∼ 1200 sq. deg. of the sky in its wide layer, utilizing the excellent seeing conditions at the summit and the large FoV of the camera. The data is processed using a fork of the Rubin LSST science pipelines <cit.>. The processed data from the survey has been released publicly at regular intervals. The measurement of the weak lensing signal requires well calibrated measurements of the shapes of galaxies. In our work, we use the first year shape catalog made public by the HSC survey collaboration to measure the weak lensing signal.
§.§ First year HSC shape catalog
The first year HSC shape catalog is based on an internal data release of the HSC survey (S16A). It consists of wide layer data observed over a period of 90 nights between March 2014 - April 2016. It covers an area of ∼ 140 deg^2 spread over six disjoint fields - HECTOMAP, VVDS, WIDE12H, GAMA15H, GAMA09H, and XMM. The shape measurements are performed in the i-band. Therefore, the imaging in the i-band was carried out when the full width at half maximum (FWHM) for the seeing was better than ∼ 0.8 arcsec. This results in a median i-band seeing FWHM of 0.58 arcsec. The corresponding 5σ point-source depth of the survey is i∼26 averaged over the area covered by S16A.
The resulting data was processed with the HSC pipeline <cit.> and the shape catalog was curated by applying a number of quality flags and several selection criteria as described in <cit.>. The resultant catalog covers an area of ∼ 136.9 deg^2. The shapes of galaxies were measured using a moments based method which corrects for the effects of the PSF using the re-Gaussianization technique <cit.>. The two components of the ellipticities are given by,
(e_1, e_2) = 1-r^2/1+r^2(cos 2ψ, sin 2ψ)
where r denotes the minor-to-major axis ratio and ψ the angle made by the major axis with respect to the equatorial coordinate system.
The final shape catalog consists of galaxies selected from the full depth-full color region in all five filters. Apart from some basic quality cuts related to pixel level information, the catalog includes extended objects with an extinction corrected cmodel magnitude i<24.5, i-band SNR≥ 10, resolution factor R_2≥0.3, >5σ detection in at least two bands other than i and a cut on the blendedness of the galaxy in the i-band. This conservative selection of galaxies results in an unweighted (raw) source number density of 24.6 arcmin^-2. When lensing related weights are taken in to consideration, the effective number density of sources is ∼ 21.8 arcmin^-2, with a sample of galaxies with median redshift of ∼ 0.8. The additive (c_1, c_2) and multiplicative biases (m) in the shape measurements, as well as the RMS intrinsic distortion of shapes (e_ rms) and the photon noise component (σ_ e) were calibrated using detailed image simulations <cit.> with the software GALSIM <cit.>. These image simulations account for the survey characteristics such as the variation in depth and seeing. The shape catalog is accompanied with inverse variance weights w_ s for each galaxy which is given by
w_ s = 1/σ_ e^2 + e^2_ rms .
The shape catalog satisfies a number of systematics and null tests, with residual systematics at the level of 0.01, sufficient to carry out cosmological analyses with the data.
The shape catalog is also supplemented with six different catalogs of photometric redshifts of galaxies as inferred by a number of methods, some relying on fitting the photometry using templates, while others use machine learning <cit.>. In our analysis we use the estimates of the redshifts provided by MIZUKI code <cit.>, which uses templates of galaxy spectral energy distributions (SEDs) and priors to fit the observed photometry of galaxies. It assumes an exponentially decaying star formation history with a variable decay time scale, along with a solar metallicity for the SED templates. It also assumes that the initial mass function is Chabrier <cit.> and that the dust attenuation is given by <cit.>. Finally nebular emission lines are also added to the SEDs. In addition to various point estimates (e.g. mean, median, mode, best) and the posterior distribution functions (PDFs) of the redshift for individual galaxies, the code also outputs several physical properties, such as stellar mass and specific star formation rate of these galaxies. We will use galaxies with reliable photometric redshifts and thus restrict our source galaxy sample to those galaxies which have photoz_risk_best < 0.5.
§.§ Lens galaxies
As our lens galaxies, we will use the galaxy samples presented by <cit.> in their HOD analysis of the clustering of these galaxies. In brief, our sample excludes galaxies centered on pixels at the edge of photometric images, or affected by cosmic rays, or have saturated pixels using the following flags: flags_pixel_edge, flags_pixel_interpolated_center, flags_pixel_saturated_center, flags_pixel_cr_center, and flags_pixel_bad. We also avoid galaxies with bad fits to the SED models and remove those with χ^2/ dof≥ 3 or photoz_risk_best ≥ 0.1 to use as our lens galaxies. In addition to the above cuts already mentioned in I20, we also apply the full-depth full color mask to the lens galaxy sample, to avoid selecting lenses from regions which were not observed in all bands to the nominal depth of the HSC survey. Finally, we also apply the same star mask <cit.> as that applied to the weak lensing shape catalog (S16A) which ensures full overlap of the lens galaxies spanning 125.7 deg^2 on the sky with the source catalog.
We will focus on the first two redshift bins presented in I20 and use galaxy samples with 0.30 ≤ z_ best < 0.55 (Bin z_1) and 0.55≤ z_ best < 0.80 (Bin z_2). These subsamples have redshifts that are smaller than the median of the redshifts of the source galaxies we use for the weak lensing signals. This allows us to get better signal-to-noise ratios in our measurements. In order to select lens galaxies that reliably lie in the redshift bins of our interest, we follow <cit.> and exclude galaxies which are within one standard deviation error (as reported by MIZUKI) from the bin edges that define the galaxy samples. The redshift distribution of the samples can be seen in Fig. 2 of I20 and Fig. <ref> after applying additional quality masks as mentioned above.
We will further divide the galaxy samples in each redshift bin using M_* - the median estimate of the stellar mass posterior distribution as provided by MIZUKI. We note that <cit.> uses h=0.7 to convert h-factors in M_*, and we adopt the same value when converting stellar mass units as required. We construct stellar mass threshold subsamples within each of the redshift bins. Given the flux limit of HSC, we do not use galaxies with stellar masses below 10^8.6 and 10^9 h^-2 M_⊙ for the redshift bins z_1 and z_2, respectively. For bin z_1 (z_2) we make 13 (12) stellar mass threshold subsamples, whose statistics are listed in Table <ref>.
§ ABUNDANCE OF GALAXIES
We adopt the measurements of the abundance of galaxies as reported in I20 in order to adopt consistent abundances while comparing the results of the clustering analysis with those obtained from weak lensing. In their work, I20 compare their estimates of the SMF of photometric MIZUKI-HSC galaxies in bins of MIZUKI stellar masses and redshifts with those obtained using a multi-band, multi-survey data available in COSMOS/UltraVISTA field over 1.62 deg^2 sky area with a K_s-band limit of 23.4 mag (90% complete). This allows I20 to infer the completeness of the photometric HSC galaxy sample. They also computed the abundances of MIZUKI galaxies in stellar mass threshold bins[I20 and M13 abundances are available at: <https://github.com/0Navin0/galaxy_halo_connection_in_HSC/tree/main/abundances>].
Abundances of galaxies derived from photometric galaxy catalogs are prone to errors and systematics due to modelling uncertainties in their redshift and stellar mass estimates. These uncertainties in photometric redshifts are also expected to be correlated, a systematic error which results in a higher (lower) redshift for the galaxy, will also end up in systematic error which assigns a higher (lower) stellar mass to the galaxy. Errors in photometric redshifts also potentially translate into errors in the abundance. To reduce the systematics related to photometric redshifts on the abundance estimates, I20 carry out a `trimming' procedure in their section 2.3.2, which removes galaxies at the redshift bin edges with uncertain redshifts. This results in a loss of volume, but can improve the reliability of the lensing measurements, by keeping galaxies which have a higher probability of being in a given redshift bin. As the photometric measurement errors and the associated photometric redshift errors are expected to increase for fainter galaxies, this trimming method is nevertheless expected to systematically affect the abundances of fainter galaxies. The comparison with COSMOS/UltraVISTA in I20 is designed to keep a tab on such effects.
In order to study the impact of varying the abundances of galaxies, we will also carry out our analysis using the abundances that we compute from the best fit Schechter function models to the observed SMFs of galaxies from UltraVISTA in <cit.> and label them as M13 abundances[4] .
In their study, M13 provide single and double Schechter fitting functions for SMFs of galaxies in two redshift bins, z^'_1 = [0.20, 0.50) and z^'_2 = [0.50, 1.00), which are closer to our original redshift bins z_1 and z_2, respectively. We plot and compare the I20 and M13 abundances as a function of the stellar mass thresholds in Fig. <ref>.
The abundance of central galaxies is related to the abundance of dark matter halos via their halo occupation distribution. In general, galaxies in a catalog do not necessarily come with a label of central or satellite. Although algorithms to group galaxies together exist, the large errors in photometric redshifts imply that it is increasingly difficult to do so in photometric surveys. Therefore, we use a relatively large 15% error on the abundance of galaxies for the I20 and M13 abundance measurements, so that they do not excessively drive the constraints on the halo occupation distributions. This has the effect of increasing the effective weight of our lensing signal to drive the halo occupation constraints. As mentioned before we will explore how the use of abundance changes the constraints of the stellar mass-halo mass relation we obtain.
§ WEAK GRAVITATIONAL LENSING
Weak gravitational lensing induces statistically coherent, tangential distortions in the shapes of background galaxies due to the intervening matter distribution along the line-of-sight towards the background galaxies. The tangential component of the shear γ_t imparted by an intervening matter distribution is related to its excess surface density (ESD) such that
ΔΣ(R) = Σ(<R) - ⟨Σ⟩ (R) = ⟨γ_t⟩(R) Σ_ crit(z_ l, z_ s) .
Here Σ(R) is the lens surface mass density at a projected separation R from the lens centre at redshift z_ l, Σ(<R) denotes the surface density averaged within a circular aperture R from the lens centre, and ⟨Σ⟩ (R) is the surface density averaged azimuthally at a distance R. The quantity Σ_ crit(z_ l, z_ s) is a geometrical factor dependent upon the physical angular diameter distances between us (observer) and the lens D_ a(z_ l), us and the source D_ a(z_ s), and between the lens and source, D_ a(z_ l, z_ s), and is given by,
Σ_ crit(z_ l, z_ s) = c^2/4π G D_ a(z_ s)/[ D_ a(z_ l) D_ a(z_ l, z_ s) (1+z_ l)^2 ] ≡Σ_ crit, ls .
The factor of (1+z_ l)^2 in the denominator corresponds to our use of comoving coordinates. The intrinsic shapes of galaxies contribute to the noise in the determination of this shear from the measured ellipticity of galaxies. Therefore the signal has to be measured statistically by averaging the tangential ellipticity over a large sample of galaxies using weights that yield a minimal variance estimator for ΔΣ. For every lens-source pair, we use the weight w_ ls = w_ s⟨Σ^-1_ crit, ls⟩^2 while performing this average, where w_ s is the weight due to error in the shape measurement defined in equation (<ref>). The weight w_ ls defined above automatically down-weights lens-source pairs which are separated by a small distance from each other.
We will use the full PDF of the redshift (z-PDF) of each source galaxy and the z_ best estimate of the redshift of each lens galaxy, as provided by the photo-z estimation code, and compute the average of the inverse critical surface mass density for each lens-source pair, ⟨Σ^-1_ crit, ls⟩, given by,
⟨Σ^-1_ crit, ls⟩ = 4π G (1+z_ l)^2/c^2∫_z_ l^∞ D_ a(z_ l) D_ a(z_ l, z_ s)/D_ a(z_ s) p(z_ s) dz_ s .
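For concreteness, this integral can be evaluated with standard cosmology routines. The sketch below uses astropy with the cosmological parameters adopted in this paper and assumes the source photo-z PDF p(z_ s) is supplied on a grid; the function and variable names are ours, not those of the actual measurement code.

```python
import numpy as np
from astropy import constants as const, units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=67.7, Om0=0.309)   # parameters adopted in this paper

def mean_inv_sigma_crit(z_l, z_s_grid, p_zs):
    """<Sigma_crit^-1> for one lens-source pair (comoving definition above),
    integrating over the normalized source photo-z PDF p(z_s) on z_s_grid."""
    prefac = (4.0 * np.pi * const.G / const.c**2) * (1.0 + z_l) ** 2
    d_l = cosmo.angular_diameter_distance(z_l)
    d_s = cosmo.angular_diameter_distance(z_s_grid)
    d_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s_grid)
    ratio = np.where(z_s_grid > z_l, (d_l * d_ls / d_s).to(u.Mpc).value, 0.0)
    integral = np.trapz(ratio * p_zs, z_s_grid) * u.Mpc
    return (prefac * integral).to(u.pc**2 / u.Msun)   # <Sigma_crit^-1> in pc^2 / Msun
```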
The minimum variance estimator for ΔΣ is given by
ΔΣ(R) = 1/1+m̂[ ∑_ ls w_ ls e_ t,ls ⟨Σ^-1_ crit, ls⟩^-1/2ℛ∑_ ls w_ ls - ∑_ ls w_ ls c_ t,ls ⟨Σ^-1_ crit, ls⟩^-1/∑_ ls w_ ls] ,
where e_ t,ls and c_ t,ls are the tangential components of ellipticity and the additive bias for the source galaxy in a lens-source pair, respectively. The quantity m̂ is the sample-averaged multiplicative bias and is given by
m̂ = ∑_ ls w_ ls m_ ls/∑_ ls w_ ls .
The symbol ℛ denotes the ensemble responsivity of the measured distortions to a small shear <cit.> and can be computed using the RMS intrinsic shape distortions e_ rms provided in the catalog as,
ℛ = 1 - ∑_ ls w_ ls e^2_ rms,ls/∑_ ls w_ ls .
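A schematic implementation of this estimator for the pairs falling in a single radial bin is given below; in practice the weighted sums are accumulated per radial bin over all lens-source pairs, and the array names here are illustrative.

```python
import numpy as np

def delta_sigma(et, ct, w_s, inv_sc, e_rms, m):
    """Minimum-variance stacked ESD for one radial bin.
    et, ct  : tangential ellipticity and additive-bias components per lens-source pair
    w_s     : shape-noise weights; inv_sc : <Sigma_crit^-1> per pair
    e_rms, m: per-pair RMS intrinsic distortion and multiplicative bias."""
    w_ls = w_s * inv_sc**2                                # lensing weight per pair
    sum_w = np.sum(w_ls)
    resp = 1.0 - np.sum(w_ls * e_rms**2) / sum_w          # ensemble responsivity R
    m_hat = np.sum(w_ls * m) / sum_w                      # sample-averaged multiplicative bias
    signal = np.sum(w_ls * et / inv_sc) / (2.0 * resp * sum_w)
    additive = np.sum(w_ls * ct / inv_sc) / sum_w
    return (signal - additive) / (1.0 + m_hat)
```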
In addition, to minimize effects of the uncertainty in the photometric redshifts, we use only those source galaxies which satisfy
∫_z_ l, max + z_ diff^∞ p(z_ s) dz_ s > 0.99 ,
where z_ l, max is the maximum redshift in a lens galaxy sample, and we use z_ diff=0.1 in our work. This selection implies that based on the posterior of the redshift from the photometry, the source galaxies we use have a >99% probability of having a redshift greater than the farthest galaxy in the lens sample. Thus they are more likely to be true background galaxies. Even after applying this photo-z filter, source galaxies can still be contaminated by structures correlated to lenses if the posteriors p(z_ s) are biased. Therefore, we will quantify any such contamination by looking for source galaxies clustered with our lens galaxies.
The shape noise of galaxies constitutes a dominant component of the error budget on small separation scales between the lens and the source, as the number of lens-source pairs at such separations are small in number. The error on the weak lensing signal measured around a sample of galaxies at various projected radial bins can be expected to be correlated as the same source galaxy may be used for the lensing signal around different lens galaxies in the sample. Such covariance between the measurements which arises due to shape noise can be quantified by randomly rotating the source galaxies and measuring the weak lensing signal around the lens galaxies. This preserves the number of pairs but presents a random realization of the source population ellipticities. However, on large scales we also expect the covariance due to the large scale structure. The large scale over-densities in which the lens galaxies reside can coherently shift the measurements up or down, leading to a larger covariance on such scales than that expected from just the shape noise.
We account for the above sources of noise together using the jackknife technique, where we divide the full survey area of the lens catalog into 103 rectangular jackknife regions, each having an approximately equal area ∼ 1.22 deg^2, distributed contiguously in each survey field. We utilize the random catalog of points provided by the HSC survey, which has a uniform density of 100 random objects per square arcminute, and to which we can apply exactly the same mask that we applied to our lens samples. Throughout this work, the jackknife sub-division of area remains identical for all the subsamples in each redshift bin. We then measure the lensing signals by excluding one region at a time from the entire data set. We use these measurements to compute the covariance matrix 𝒞,
C_ij = N-1/N∑ _k=1^N[ ΔΣ(R_i,k) - ΔΣ̅(R_i) ] [ΔΣ(R_j,k)- ΔΣ̅(R_j) ] .
Here the indices i,j both vary from 1 to 10 for the 10 projected radial bins, ΔΣ(R_i,k) is signal computed at i^ th projected radial bin with removal of k^ th jackknife chunk, the quantity with bar on top is an average of the jackknife measurements at a particular radial bin.
We also define the cross-correlation matrix of the measurements between the i^ th and the j^ th projected radial bins to be given by
r_ij = 𝒞_ij/√(𝒞_ii𝒞_jj).
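Given the leave-one-out measurements, the covariance and cross-correlation matrices defined above can be assembled as in the following sketch (array names are ours):

```python
import numpy as np

def jackknife_covariance(ds_jk):
    """Covariance and cross-correlation matrices from leave-one-out measurements.
    ds_jk : array of shape (N_jk, N_R) holding Delta Sigma(R) measured after
            removing each of the N_jk jackknife regions in turn."""
    n_jk = ds_jk.shape[0]
    diff = ds_jk - ds_jk.mean(axis=0)
    cov = (n_jk - 1.0) / n_jk * diff.T @ diff
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)          # r_ij = C_ij / sqrt(C_ii C_jj)
    return cov, corr
```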
The cross-correlation matrix of the measurement for a representative set of stellar mass threshold samples in each of the redshift bins is shown in the different rows of Fig. <ref>. As expected we see that on small scales the off-diagonal components of this matrix are close to zero, however as we approach larger scales, neighbouring radial bins show enhancement in the cross-correlation of their errors.
Next, we present the results of two null-tests of survey systematics. First, we present the measurement of the weak lensing signal (ΔΣ_ rand) around random points which are distributed in the HSC footprint in the same manner as our lens galaxies. Second, we present the cross-component (ΔΣ_ rand, ×) around the random points and the lens galaxies (ΔΣ_ lens, ×); where ΔΣ_× averages the cross-component of the shear which is the ellipticity induced on circular objects with major/minor axes at 45^∘ compared to the line joining the two galaxies. In the absence of systematics, both these measurements should be consistent with zero within the statistical uncertainty.
In order to measure the signal around random galaxies, ΔΣ_ rand, for a given threshold subsample, we resample the photometric redshifts, z_ best, with replacement from the overall redshift distribution of galaxies in that subsample and assign them to the objects in the random catalog. We follow the procedure described by equations (<ref>) - (<ref>) and compute the tangential component of the weak lensing signal ΔΣ_ rand. We subtract this measured signal from the weak lensing signal around lenses from the true subsamples. Our tests indicate, however, that the measurements around random points for each of our subsamples are consistent with zero given the statistical fluctuations. The measurements ΔΣ_ rand and cross-components ΔΣ_ rand, × around random points, as well as the cross-components ΔΣ_ lens, × around lens galaxies along with their jackknife errors are shown in Fig. <ref> for the lowest, a middle and the highest stellar mass threshold, respectively. The p-values to exceed χ^2 for all of our subsamples for both the systematics tests are presented in Table <ref>.
In spite of our conservative sample selection cuts and quality filters (Section <ref> and equation <ref>) in lens and source galaxies, source galaxies can still be contaminated by structures correlated to the lens distribution. These source galaxies may not even be down-weighted by the lensing weights if their p(z_ s) is biased to high redshifts. This effectively dilutes the lensing signal as a function of projected radius. However, the overall dilution can be estimated and adjusted for by multiplying the signal by a boost factor (see e.g., <cit.>). The boost factor, B(R_i), is defined as the ratio of the weighted number of lens-source (l-s) pairs per lens galaxy to that of random-source (r-s) pairs per random point; notationally,
B(R_i) = N_r ∑_ ls w_ ls/N_l ∑_ rs w_ rs .
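Schematically, once the weighted pair counts have been accumulated in radial bins, the boost correction amounts to the following (illustrative function names):

```python
import numpy as np

def boost_factor(sum_w_ls, sum_w_rs, n_lens, n_rand):
    """B(R): weighted lens-source pair counts per lens divided by weighted
    random-source pair counts per random point, in each radial bin."""
    return (n_rand * np.asarray(sum_w_ls, dtype=float)) / \
           (n_lens * np.asarray(sum_w_rs, dtype=float))

def apply_boost(delta_sigma_lens, delta_sigma_rand, boost):
    """Random-point subtraction followed by the boost-factor correction."""
    return boost * (delta_sigma_lens - delta_sigma_rand)
```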
We adjust the randoms-corrected signals by their corresponding boost factors, in each of the ready-to-model signals and their jackknife covariances. The estimated boost factors for a few of the threshold bins are shown in Fig. <ref>. The errorbars on the B(R) values are computed by the jackknife technique outlined by equation (<ref>). Apart from the few smallest projected scales in the most massive galaxy samples that we probe, redshift bin z_1 shows boost factors consistent with unity, indicating the presence of a non-zero but small amount of source contamination close to the innermost radial bin; the redshift bin z_2, on the other hand, shows a consistent contamination of source galaxies at all scales, with B(R) ranging from ∼ 4% at the innermost radial bin to ∼ 1% around the outermost radii. The application of the boost factor scales the signal as a function of R and may affect the covariances; however, the relative error in the signal remains the same. The relative errors of the signals in bins z_1 and z_2 evolve slowly from ∼ 5% to ∼ 10% in subsamples of increasing threshold stellar mass within log M_ *, limit= (8.6 - 10.8) and (9.0 - 11.0) respectively. The most massive threshold subsamples in each redshift bin have ∼ 15% relative error. Given this level of statistical tolerance, we confirmed that skipping the application of boost factors does not change our parameter constraints and thereby the resulting inferences; however, to maintain uniformity throughout our current and future analyses, we include boost factors on the weak lensing signal measurements for all subsamples.
Also the photometric redshifts of the galaxies may have both statistical uncertainties and systematic biases. Such uncertainties could cause galaxies that are physically correlated with the lens samples to be included in our source samples, or could cause source galaxies to be wrongly classified as lens galaxies, or could result in background galaxies getting assigned wrong redshifts. The first of these errors are accounted for using boost factors as described in the paragraph above.
We mitigate the second error by using stringent cuts on the choice of source galaxies in this analysis, such that the fraction of source galaxies getting identified as lenses is small. Thus the bias in the lensing signals will come mostly from source galaxy photometric redshifts being inconsistent with their true redshifts. We examine this effect using the methodology outlined by <cit.>[See appendix.]. We find that the source photo-z biases in bins z_1 and z_2 are ∼ 1% and ∼ 4% respectively, and we have confirmed that such levels of bias do not change our results or any of our conclusions in a statistically significant manner. Consequently, we have ignored the photo-z bias correction in our measurements and modelling of the weak lensing signals in this paper.
The weak lensing signals as measured using the above techniques can be seen in Figs. <ref> and <ref> for the two redshift bins we consider in this paper, respectively. The errors on the data points are based on the square root of the diagonal elements of the covariance matrix as defined in equation (<ref>).
§ THEORETICAL MODELLING
We use a halo occupation distribution (HOD) framework in order to model the abundance and weak lensing signals. The HOD framework allows us to relate the theoretical predictions of the abundance of dark matter halos, their clustering and the dark matter distribution around these halos to the observed abundance and lensing of galaxies. The various parameters of the HOD of galaxies describe the average number of galaxies, N(>M_*, limit|M), in a particular sample of stellar mass threshold M_*, limit that reside in halos of mass M. Since we only work with galaxy samples defined by stellar mass thresholds, for ease of notation we denote it simply as N(M). We separate the total HOD of galaxies into a separate term for central galaxies and satellite galaxies, denoted by N_ c(M) and N_ s(M), respectively, such that,
N(M) = N_ c(M) + N_ s(M)
We use a 5-parameter model to describe these separate terms <cit.>,
N_ c(M) = 1/2[1+ erf((log M - log M_ min)/σ_log M)] ,
N_ s(M) = N_ c(M) ( (M-M_0)/M_1)^α,
where M_ min, σ_log M, M_0, M_1 and α are free parameters which are allowed to vary freely for each threshold subsample. Given that, apart from an unknown intrinsic scatter, the relation between central galaxies and their halos is also obscured by uncertainties in the measured signals, we include the total scatter in the host halo masses of the central galaxies via the stochastic model expressed in equation (<ref>). Assuming that each central halo hosts a single galaxy, the first equation denotes the probability that a halo of mass M hosts a galaxy belonging to the threshold subsample. According to the functional form, M_ min denotes the mass at which half of the halos are occupied by galaxies above the stellar mass threshold of the subsample under consideration, and σ_log M quantifies the scatter around this transition. Asymptotically, the halo occupation of central galaxies tends to unity. The satellite galaxy halo occupation number is a power law in M-M_0, where M_0 is the mass scale below which there are no satellite galaxies. M_1 can be seen as a typical halo mass to host a satellite galaxy, and the exponent α as an indicator of the accumulated star formation history for galaxies of the given mass threshold. The factor N_ c(M) in front of the satellite halo occupation number down-weights the satellite counts in halos where the central occupation is low. Formally, we treat the two halo occupations to be independent, given that there are cases in which central galaxies are not necessarily the brightest galaxies in their halos.
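A direct transcription of these occupation functions into Python reads as follows (parameter names follow the text, with logarithms to base 10):

```python
import numpy as np
from scipy.special import erf

def n_cen(M, logMmin, sigma_logM):
    """Central occupation N_c(M) for a stellar-mass threshold sample."""
    return 0.5 * (1.0 + erf((np.log10(M) - logMmin) / sigma_logM))

def n_sat(M, logMmin, sigma_logM, M0, M1, alpha):
    """Satellite occupation N_s(M); zero at and below the cutoff mass M0."""
    M = np.asarray(M, dtype=float)
    ns = np.zeros_like(M)
    above = M > M0
    ns[above] = n_cen(M[above], logMmin, sigma_logM) * ((M[above] - M0) / M1) ** alpha
    return ns
```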
Further, we also need to specify the position of the central galaxies within the dark matter halos. We assume that the central galaxy resides at the center of the dark matter halo. In our fiducial model we assume that satellite galaxies are distributed according to the NFW profile,
n(r) ∝( r/r_ s)^-1( 1 + r/r_ s)^-2
where r_ s is the scale radius of the halo and is defined as r_ s = r_ 200m/c_ 200m. Here c_ 200m is the concentration of the dark matter within that halo and halo masses are defined to be the masses enclosed within an overdensity of 200 times the background matter density, denoted by M_200m.
The abundance of galaxies in the threshold sample can be computed from the HOD using
n_gal = ∫ dM n(M) [ N_c(M) + N_s(M) ].
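As a minimal numerical sketch of this integral, reusing the occupation functions from the previous code block: the halo mass function dndM is assumed to be supplied externally (e.g. tabulated from a separate halo mass function code) on the same mass grid M.

```python
import numpy as np

def galaxy_abundance(M, dndM, hod_params):
    """n_gal = int dM (dn/dM) [N_c(M) + N_s(M)].
    `dndM` is an assumed tabulated halo mass function on the mass grid `M`."""
    Ntot = (n_cen(M, hod_params['logMmin'], hod_params['sigma_logM'])
            + n_sat(M, **hod_params))
    return np.trapz(dndM * Ntot, M)     # comoving galaxy number density
```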
We use the analytical framework presented in <cit.> in order to predict the weak lensing signal from the HOD. Here we briefly repeat the formalism for the sake of completeness.
The ESD profile, equation (<ref>), depends on the correlated surface density of matter which is a line-of-sight projection of the galaxy-matter cross-correlation function ξ_ gm at a halo-centric distance R such that
Σ(R) = ρ̅∫_0^∞ dz ξ_gm([ R^2 + z^2]^1/2) .
Here, we have ignored the uniform density component in the computation of the surface density as it does not impact the weak lensing observables. We have also ignored any possible off-centering of central galaxies. Current modelling assumes that each halo hosts exactly one galaxy at its center
and that the dark matter contributions from subhalos of the satellite galaxies can be safely ignored. The cross-correlation is a Fourier transform of the cross power spectrum between galaxies and dark matter and can be computed using the analytical framework developed in <cit.>.
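A brief sketch of this projection step is given below, assuming a tabulated galaxy-matter correlation function (r_tab, xi_tab) and a mean matter density rho_bar as inputs; the excess surface density then follows from the usual mean-interior-minus-local combination. The line-of-sight convention follows the equation above, and the grid choices are illustrative only.

```python
import numpy as np
from scipy.integrate import quad
from scipy.interpolate import interp1d

def surface_density(R, r_tab, xi_tab, rho_bar, los_max=200.0):
    """Sigma(R) = rho_bar * int_0^inf dz xi_gm(sqrt(R^2 + z^2)).
    r_tab, xi_tab: assumed tabulation of xi_gm(r), consistent units with rho_bar."""
    xi = interp1d(np.log(r_tab), xi_tab, bounds_error=False, fill_value=0.0)
    integrand = lambda z: xi(0.5 * np.log(R**2 + z**2))
    val, _ = quad(integrand, 0.0, los_max, limit=200)
    return rho_bar * val

def excess_surface_density(R, r_tab, xi_tab, rho_bar):
    """Delta Sigma(R) = mean Sigma(<R) - Sigma(R)."""
    Rp = np.logspace(-2, np.log10(R), 100)   # inner integration grid (illustrative)
    Sig = np.array([surface_density(x, r_tab, xi_tab, rho_bar) for x in Rp])
    return 2.0 / R**2 * np.trapz(Rp * Sig, Rp) - surface_density(R, r_tab, xi_tab, rho_bar)
```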
The total cross power spectrum between galaxies and dark matter can be divided in to 4 different terms, the one halo central and satellite terms, and the two halo central and satellite terms, such that,
P^ gm(k) = P^ 1h_ cm(k) + P^ 1h_ sm(k) + P^ 2h_ cm(k) + P^ 2h_ sm(k) .
Each of these terms can be expressed as
P^1h_xm(k) = ∫ dM n(M) H_x(k, M, z) M/ρ̅ u_h(k| M, z) ,
P^2h_xm(k) = ∫ dM' n(M') H_x(k, M', z) ×
∫ dM̃ n(M̃) Q(k|M',M̃,z) M̃/ρ̅ u_h(k| M̃, z) ,
and `x' stands for either central `c' or satellite `s', Q(k|M',M̃, z) describes the cross-power spectrum of halos of mass M' and M̃ at redshift z, and we use
H_c(k|M, z) = N_c(M)/n̅_gal
H_s(k|M, z) = N_s(M)/n̅_gal u_s(k|M,z) .
In the equations above, u_s/h(k|M,z) denotes the Fourier transform of the number density profile of the satellite galaxy (dark matter) distribution within the halo. As indicated previously, we assume this to be given by the NFW profile. We allow the satellite and dark matter concentration to vary from the form given by <cit.> to allow for systematic uncertainties due to baryonic effects, as well as the effects of averaging the dark matter profiles of halos of the same mass but varying concentrations <cit.>. We implement this with a multiplicative parameter c_fac which alters the fiducial concentration-mass relation that we adopt in this paper. We include a Gaussian prior with unit mean and a variance of 0.2 for this parameter.
The baryonic component within the galaxy is expected to dominate the weak lensing signal at small projected separations. We model this component as a point mass contribution similar to how it has been modelled in previous studies <cit.>,
ΔΣ_b(R) = M̅_ bary/π R^2 ,
where, M̅_ bary represents average baryonic mass of all the galaxies in a given threshold subsample. We restrict our measurement of the lensing signal to scales above 100, thus our measurements are not very sensitive to the baryonic component (10 percent of the signal at the innermost point for the largest stellar mass bin). Given this relative insensitivity of our results to the baryonic contribution, we simply model this term as the average of the stellar mass contribution of all galaxies within the bin of interest. The total modelled signal is then the sum of ESD due to dark matter-halos and the central baryonic component.
§.§ HOD model fitting specifications
We carry out a Bayesian analysis to infer the posterior distribution of model parameters given the data, P(Θ|D, I), such that
P(Θ| D, I) ∝ P( D|Θ, I) P(Θ| I) ,
where I represents the choice of our model, P( D|Θ, I) is the likelihood of the data given the model parameters, and P(Θ| I) denotes the priors on our model parameters. We assume the likelihood to be a multi-variate Gaussian, such that
ln P( D|Θ, I) ∝ -χ^2(Θ;𝒟, I)/2 ,
χ^2 = ∑_i,j [ ΔΣ - ΔΣ̃ ]_i [𝒞^-1]_ij [ ΔΣ - ΔΣ̃ ]_j + ( n_gal - ñ_gal)^2/σ^2_gal ,
where the terms with a tilde on top are the modelled quantities while those without are the observed quantities, the subscripts i, j stand for the i^th and j^th radial bins, respectively, and the covariance matrix, 𝒞, is obtained from the jackknife technique discussed in Section <ref> (equation <ref>). We assume uniform priors on most of our parameters (see Table <ref>), unless mentioned otherwise.
We use the analytical HOD modelling framework from <cit.> as implemented by the software aum <cit.> in order to predict the abundance and galaxy-galaxy lensing predictions given the HOD parameters. We sample the posterior distribution of our parameters given the measurements using the affine invariant MCMC ensemble sampler of <cit.> as implemented in the publicly available package emcee v3.1.1 <cit.>. We use 256 walkers for a total of 10000 steps. We remove the first 2000 steps from each walker as a burn-in phase and verify the stationarity of our parameters of interest to confirm convergence.
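Schematically, this sampling setup can be written as follows. Here model_ds and model_ngal are hypothetical stand-ins for the aum-based HOD predictions of ΔΣ and n_gal, and the data vectors, covariance, prior bounds and starting point are assumed inputs; this is a sketch of the likelihood and emcee configuration described above, not the exact implementation used in the paper.

```python
import numpy as np
import emcee

# Assumed inputs: R, ds_obs, cov_inv, ngal_obs, sigma_ngal, lo, hi, initial_guess.
# model_ds(theta, R) and model_ngal(theta): user-supplied prediction functions
# (e.g. wrapping the aum HOD framework); they are not defined in this sketch.
def log_prob(theta, R, ds_obs, cov_inv, ngal_obs, sigma_ngal, lo, hi):
    if np.any(theta < lo) or np.any(theta > hi):          # flat priors
        return -np.inf
    resid = ds_obs - model_ds(theta, R)
    chi2 = resid @ cov_inv @ resid
    chi2 += (ngal_obs - model_ngal(theta))**2 / sigma_ngal**2
    return -0.5 * chi2

ndim, nwalkers = 5, 256
p0 = initial_guess + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(R, ds_obs, cov_inv, ngal_obs,
                                      sigma_ngal, lo, hi))
sampler.run_mcmc(p0, 10000, progress=True)
chain = sampler.get_chain(discard=2000, flat=True)        # drop burn-in steps
```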
§.§ Model predictions
In addition to modelling the observables, the ΔΣ and the abundances, we also compute predictions of satellite fractions,
f_sat = ∫ dM n(M) N_s(M) / N
and average central halo masses,
M_cen = ∫ dM M n(M) N_c(M) / N_c
for each threshold subsample, accounting for the full sampled posterior distributions. Here N = N_c + N_s is the total number of galaxies computed for a given subsample and N_x = ∫ dM n(M) N_x(M); `x' stands for either `c' or `s'.
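A sketch of these two derived quantities, evaluated for a single HOD parameter vector (in practice they are computed over the full posterior sample); n_cen/n_sat and the tabulated mass function dndM are the same assumptions as in the earlier sketches.

```python
import numpy as np

def satellite_fraction_and_mcen(M, dndM, hod_params):
    """Satellite fraction f_sat and average central halo mass M_cen."""
    Nc_M = n_cen(M, hod_params['logMmin'], hod_params['sigma_logM'])
    Ns_M = n_sat(M, **hod_params)
    N_c = np.trapz(dndM * Nc_M, M)            # central abundance
    N_s = np.trapz(dndM * Ns_M, M)            # satellite abundance
    f_sat = N_s / (N_c + N_s)
    M_cen = np.trapz(dndM * M * Nc_M, M) / N_c
    return f_sat, M_cen
```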
§ RESULTS AND DISCUSSION
We measure the weak gravitational lensing signal for stellar mass thresholds log M_* ≥ 8.6 in z ∈ [0.3, 0.55] and log M_* ≥ 9.0 in z ∈ [0.55, 0.80]. Our measurements for the different threshold bins at the two different epochs are shown as black circles in Figs. <ref> and <ref>, respectively. The errors on the points are the square root of the diagonal of the error covariance matrix for each measurement. The figures show R ΔΣ as a function of the projected separation from the lens galaxies, and we list the SNR of the measurements in the lower right boxes in each of the subpanels. The weak lensing measurements in each of the redshift bins clearly show that the weak lensing signal increases in strength for lens galaxies with a higher threshold in stellar mass. The lensing signals also show deviations from the ΔΣ ∝ R^-1 behaviour that would be expected for a simple isothermal profile.
§.§ HOD modelling of the abundance and lensing signal
We fit the analytical HOD model to each of the measurements described above and obtain the posterior distribution of the parameters of our model given the measurements. The priors that we use on the parameters for our analysis are listed in Table <ref>. The solid magenta lines in Figs. <ref> and <ref> and the associated grey shaded regions indicate the best fit model and the 68 and 95 percentile credible intervals given the joint fit of the lensing and I20 abundance measurements in the two photometric redshift bins z_1 and z_2, respectively. The best fit χ^2 value obtained from our measurements, along with the number of degrees of freedom based on the formalism of <cit.>, are also indicated in the boxes on the lower right of each subpanel.
We decompose the best fit model into components that correspond to the 1-halo central and 1-halo satellite terms, in addition to the 2-halo term, indicated by the solid red, solid orange and dotted green lines, respectively. The baryonic contribution to the lensing signals is quite small; for clarity we have artificially boosted it by a factor of ten and shown it with a dashed line. The 1-halo central component dominates in the innermost regions up to a few hundred kiloparsecs, followed by the rising 1-halo satellite component as we move further out.
The increasing amplitude of the observed lensing signal can be fit with a consistently rising 1-halo central component. Statistically, this indicates that central galaxies with higher stellar masses live in more massive dark matter halos. The satellite component corresponds to halos which are more massive than those hosting the centrals. These measurements and our modelling allow us to infer the stellar mass-halo mass relation for the central galaxies together with the satellite fractions in each of our subsamples, and these are a reflection of the scale dependence of the measured weak lensing signal.
Our results indicate that a simple dark-matter-only HOD model in ΛCDM cosmology is flexible enough to describe the observed lensing and abundance measurements in each of the threshold stellar mass bins. The best fit χ^2 values corresponding to joint fits of weak lensing with either the I20 or M13 abundances are listed in Table <ref>. We obtain similar values of χ^2 despite large differences in the abundances between I20 and M13, which hints towards a potential degeneracy among HOD parameters when fitting weak lensing and abundances. Even though the fits appear statistically consistent, we see some evidence that I20 is better fit than M13 for the low mass threshold subsamples in the z_1 bin, while M13 is better fit at high mass thresholds in z_1 and at low mass thresholds in z_2.
The two-dimensional marginalized posterior distributions[The posterior distributions for two stellar mass thresholds chosen to be representative at each redshift bin have been made available online in the appendix.]
of free parameters show familiar degeneracies between the central halo occupation parameters log M_min and σ_log M, where an increase in one parameter can be compensated by a corresponding increase in the other. We discuss the dependence of these degeneracies on our different observables in the following subsection. The satellite parameters are often ill-constrained, with a wide variety of satellite parameters leading to similar observables. The constraints on the free parameters of the HOD model, along with the inferred satellite fraction, abundance and M_cen for each threshold bin in the two redshift bins, are listed in Tables <ref> and <ref>, respectively.
§.§ Degeneracy among central HOD parameters and abundance
Using the posterior distribution of the HOD parameters in our fiducial analysis, we examine the degeneracy between the central HOD parameters and its dependency on the weak lensing and the abundance, separately. The estimates of the abundance of galaxies differ between I20 and M13, and therefore can lead to different constraints on the HOD parameters. Therefore, we fit the HOD model to these observables individually and in combination to demonstrate the impact of using either of these abundance measurements. In Fig. <ref>, we present the resulting degeneracy contours between central HOD parameters corresponding to each of these observables. The 68 percent credible regions from the weak lensing only fit, the I20 abundance only fit and the M13 abundance only fit are shown with black, blue and red contours, respectively. The orange and the green correspond to the joint fits between weak lensing and the abundances from I20 and M13, respectively. The different subpanels correspond to lens subsamples with different stellar mass thresholds for bin z_1, chosen for illustrative purposes.
The central HOD parameters, log and for each of the observables individually are degenerate with each other and a positive change in one can be compensated by a positive change in the other parameter. This can be understood as follows. The abundance of halos is a decreasing function of halo mass. Thus increasing , in general would lead to smaller abundance. However, this can be compensated by increasing the scatter . The scatter allows galaxies to be populated in the more numerous less massive halos, thus satisfying the observed abundance. The relative shift in the degeneracy contours between the contours corresponding to I20 and M13 reflect the smaller abundance of galaxies inferred by I20 compared to M13. The weak lensing signal on the other hand is sensitive to the average halo masses of the lens samples. Thus an increase in which would nominally increase the average halo masses, can be compensated by increasing the scatter which brings in lower halo masses, thus compensating the increase. At the highest stellar mass threshold, the degeneracy contours become flatter due to the exponential decrease in the halo abundances at the high mass end. Even though the degeneracies in the - are qualitatively similar, the different dependencies of the abundance of halos and their average halo mass on the HOD parameters implies that the quantitative degeneracies have different directions. The combination of the abundance and weak lensing shown in orange/green contours thus results in tighter constraints on each of the central HOD parameters, which otherwise cannot be constrained by either of the observables on their own.
We also observe that weak lensing prefers somewhat higher values of log M_min than the abundance information alone until stellar mass thresholds of 10^10.2 (10^10.4) in the redshift bin z_1 (z_2), where the lensing and abundance contours start to cross over. The exact location of this stellar mass depends upon which study we take the abundance information from. In general, we expect that adding in the abundance information will lead to lower values of log M_min than from weak lensing alone at the low stellar mass threshold end. At the high stellar mass threshold end, the inclusion of abundances can have a non-negligible impact on the inferred parameter values, as can be seen from the relatively flat degeneracy contours in the log M_min - σ_log M plane in the right hand subpanel of Fig. <ref>.
§.§ Galaxy-halo connection
The galaxy-halo scaling relation that we obtain from our joint analysis of weak lensing and the abundance of galaxies can be summarized by the dependence of log M_min on the stellar mass threshold log M_*, limit. The parameter M_min corresponds to the mass at which half of the halos are occupied by galaxies in a given stellar mass threshold sample. This implies that the scaling relation between M_min and log M_*, limit can be interpreted as the median of the stellar mass of galaxies at fixed halo mass. We show these constraints for our two redshift bins in the different panels of Fig. <ref>. Our fiducial constraints are shown as credible intervals using light (95 percent) and dark grey (68 percent) shaded regions that correspond to the use of our weak lensing measurements combined with the abundance measurements presented in I20. If, instead, we combine with the abundance measurements from M13, we obtain the constraints shown as blue points (median) with errors (68 percent credible interval). As discussed in the previous section, the smaller abundance inferred in I20 compared to M13 leads to a higher M_min when we use the abundance from I20 for redshift bin z_1. In contrast, for redshift bin z_2 the abundances of galaxies inferred in I20 and M13 are roughly equivalent (see Fig. <ref>) and thus the inferred M_min is similar irrespective of which abundances we combine with the weak lensing signal.
In both redshift bins, we observe a scaling relation which shows that 10^12 dark matter halos are the most efficient at forming stars and become increasingly inefficient as we move away from this mass. On the low mass side the inefficiency of star formation manifests in stellar masses dropping precipitously to smaller values, while at the high mass end it is seen in the rapid rise in the halo mass required to form more and more massive galaxies. Qualitatively, this picture is consistent with previous studies. We present the comparison of the parameters log M_min and σ_log M obtained from our analysis when combining our weak lensing measurements with the two different abundance estimates in the first two panels of Fig. <ref>. If taken at face value, our results in the left panel do not indicate a large evolution in the scaling relation between the two redshift bins, especially if we consider the abundance measurements from M13. However, the abundance measurements from I20 indicate that halos of the same mass at lower redshift host galaxies with a median stellar mass which is lower by about 0.2 dex.
The scatter σ_log M in our HOD parameterization captures the scatter in the halo masses of galaxies that have stellar masses at the threshold chosen for our sample. We observe in the middle panel of Fig. <ref> that this scatter increases as we increase our threshold to include only massive galaxies. In models which have a fixed scatter in the stellar mass of galaxies at fixed halo mass, such behaviour is expected. The slope of the log M_*-M relation is quite shallow at the high mass end, and thus a constant scatter in the stellar masses at fixed halo mass results in a scatter in halo masses that continues to increase with the stellar mass. Our results are therefore qualitatively consistent with studies that indicate such a constant log-normal scatter in stellar masses at fixed halo mass, σ_log M_* <cit.>. These trends are consistently observed irrespective of which abundances we use and the redshift bin under consideration.
Previously, we have shown that the parameter log M_min is degenerate with the scatter σ_log M, and that the posterior constraints on log M_min depend strongly on the choice of the abundance measurements, especially in the first redshift bin. The weak lensing signal is expected to be sensitive to the average mass of halos occupied by galaxies in our sample. Given that the small scale weak lensing signal is well measured and is dominated by the 1-halo central term, we expect the average central halo masses to be well determined by the lensing signal for every threshold stellar mass bin. In Fig. <ref>, the blue (orange) shaded region with slanting lines shows our constraints on M_cen from weak lensing measurements only. The solid blue (orange) shaded region corresponds to the 68 percent credible intervals derived from a joint fit between lensing and the abundance from I20 for redshift bin 1 (2), while the blue (orange) solid points with errors correspond to a similar joint fit but using abundance measurements from M13. While both log M_min and M_cen have physical meaning, it is clear that M_cen better reflects the results of our weak lensing measurements and is relatively insensitive to the exact choice of abundance.
We compare the M_cen obtained for the two redshift bins in Fig. <ref>. When compared in this manner, we see very small differences in the redshift evolution of the scaling relation. The differences seen in log M_min and σ_log M compensate to result in a scaling relation of M_cen as a function of the stellar mass threshold that shows very little evolution over the two redshift bins we use.
§.§ Satellite fraction
The weak lensing signal in the innermost regions is dominated by the dark matter halo of the central galaxies in each of our stellar mass threshold subsamples. Some of the galaxies in our subsample are also expected to be satellites. These satellites on average are expected to reside in more massive parent halos than halos hosting centrals of similar stellar mass. However these satellite galaxies do not reside at the centers of their parent dark matter halos, but are distributed within the halo. This signal from the satellite galaxies is thus expected to be a result of convolution of the weak lensing signal expected around the centers of their parent halos with the projected number density of satellite galaxies within the halo. The weak lensing signal at intermediate scales is thus sensitive to the fraction of satellite galaxies within the stellar mass threshold sample as well as the halo occupation distribution of the satellite galaxies in the subsample.
In Fig. <ref>, we show the fraction of satellite galaxies as a function of the stellar mass threshold of our subsamples. The solid blue (orange) colored shaded region shows the 68 percent credible region for the satellite fraction for redshift bin 1 (2) when using the weak lensing measurements along with abundance measurements from I20. The region shaded with slanted line but with the same colors correspond to the case when the lensing measurements alone are used as constraints. To maintain clarity, we do not show results when using M13 abundances here as they are essentially similar within the errors.
Overall, the observations suggest that the satellite fractions decrease as a function of the stellar mass threshold above 10^10 for both redshift bins. There is tentative evidence of a flattening of the satellite fractions at lower stellar mass threshold for redshift bin 1. We do not find significant evidence for the evolution of the satellite fractions with redshift given the large errors in our inference, nor do we find a significant difference depending upon which abundance constraints we use.
§ COMPARISON WITH PREVIOUS STUDIES
As mentioned in Section <ref>, we examine the results from the two studies of the clustering and abundance of galaxies, I20 and M22, which use the same samples as this paper, against our inferences, which are driven by the measured weak lensing signals and the abundance estimates from I20. This comparison is well suited, even in the photometric observable plane, due to the use of the same dataset. To briefly summarize the results and approaches of these two studies, I20 modelled the measured projected 2-point correlation functions ω(θ) and the measured abundances of galaxies using the same HOD parameterization as used in our modelling scheme.
On the other hand, these same measurements were modelled by M22 using a modified subhalo abundance matching technique based on cosmological simulations from the Uchuu suite <cit.>, namely mini-Uchuu and shin-Uchuu. Amongst the two, mini-Uchuu has the larger box size of 400 h^-1 Mpc with a particle mass of 3.27 × 10^8 h^-1 M_⊙, while shin-Uchuu has the finer mass resolution of 8.97 × 10^5 h^-1 M_⊙ with a box size of 140 h^-1 Mpc. Comparison between the two simulations allows us to test the effect of resolution. In their paper, M22 explore two different proxies of halo mass which monotonically correlate with the stellar mass of galaxies (albeit with a scatter). The first approach uses the traditional peak maximum circular velocity (V_peak), while the second one utilizes the halo mass of the progenitor of the subhalo at a prior redshift (M_prog). The constraints presented by M22 correspond only to the first redshift bin.
We utilize the best fit HOD parameters from I20 and predict the expected weak lensing signal for each of the stellar mass threshold lens samples in redshift bins 1 and 2, using the framework prescribed in Section <ref>. These best-fit predictions are shown as blue lines in Figs. <ref> and <ref>, which correspond to redshift bins 1 and 2, respectively. In redshift bin 1, we find that the I20 predictions underestimate the measured lensing signal (by about 10-30%) around small projected radii corresponding to the 1-halo regime for threshold mass bins up to 10^10.0. For higher threshold bins, the predictions overestimate the measured weak lensing signal by up to 50-60%. Although we see qualitatively similar differences in redshift bin 2, the magnitude of these differences is much smaller than in redshift bin 1. For redshift bin 1, we show the best fit predictions for the weak lensing signal from the two M22 models (V_peak and M_prog) using light green and blue dotted lines. Both SHAM models are able to explain the lensing signals relatively well for galaxies in mass thresholds below 10^10 compared to more massive thresholds, especially when compared to the fits from I20. In this stellar mass range, one of the two models appears to fit the measurements marginally better than the other; however, we have checked that this is a resolution-dependent statement, and with the higher resolution shin run these differences largely disappear. For higher threshold stellar masses, both models seem to fare poorly, although we see evidence that the lensing signal at large projected separations is still captured reasonably well. For these bins, we see appreciable differences between the measurements and the predictions on small scales for both models.
The weak lensing signal in the 1-halo regime is driven by the average central halo masses M_cen. Therefore, we compare our inference of M_cen for each threshold sample with that inferred from the results of I20 and M22 in Fig. <ref>. The best-fit predicted average central halo masses from I20 are shown as blue (left panel) and red (right panel) lines for redshift bins 1 and 2, respectively. The comparison shows that the inferences from I20 are statistically larger than ours for M_*, limit > 10^10.0, consistent with the expectation based on the comparison of the predicted and measured weak lensing signals. However, for stellar mass thresholds below 10^10.0, the I20 best fit predictions appear consistent with our constraints. This implies that such differences in the weak lensing signal are likely absorbed by the difference in satellite fractions in our model compared to that in I20. This is visible in the comparison of the satellite fractions from I20 with our results shown in Fig. <ref>, which shows that, when compared with the lensing-only results, I20 prefer larger satellite fractions in both redshift bins.
In the left hand panel of Fig. <ref>, the results from the two M22 models (V_peak and M_prog) are shown with open squares and open circles with errors. We distinguish between the results from the two simulations used in M22 with two different colors: green corresponds to the mini simulation and magenta to the shin simulation. We see that both models infer results which are consistent with our constraints from the weak lensing and the abundance from I20 up to a stellar mass threshold of 10^10, consistent with the comparison of the weak lensing signals. At higher stellar mass thresholds, the differences seen in the weak lensing signal are a result of the higher average central masses in these models. The results of M22 are much more consistent with the results from I20 at these threshold bins, suggesting that the combination with the clustering is driving the larger halo masses. In the comparison of the satellite fractions we also observe that the models from M22 always prefer higher satellite fractions than either I20 or our results, with the exact difference depending upon the resolution of the simulation. While comparing these results, it is worth keeping in mind that the scales over which the lensing measurements are carried out (< 5) are smaller than the length scales over which the clustering signal was measured by I20 (≲ 25-30 at the median redshifts of the samples). The inferences from clustering are thus expected to be more sensitive to the large scale bias of the dark matter halos, or the 2-halo term, while our inferences rely more significantly on the 1-halo term. The signal on large scales can potentially be affected by the presence of galaxy assembly bias <cit.>, and thus appropriate caution is warranted.
The I20 best fit predictions of the SMHM relation for each redshift bin are shown as red circles with errors in the two separate panels of Fig. <ref> for comparison with other studies. Despite the overestimate in M_cen for thresholds greater than 10^10, the halo masses M_min are underestimated. As discussed in Section <ref>, such a relation between M_min and M_cen can be made possible by a choice of small values of the halo mass scatter σ_log M, and we verify in the middle panel of Fig. <ref> that this is indeed the case. Additionally, the deviations of their M_min and σ_log M from our constraints increase as we move towards more massive galaxy thresholds. Partly, this could also be due to the clustering information probing a different direction in the degeneracy space of the central HOD parameters. We highlight a contrasting feature between lensing and clustering based studies: the I20 study of galaxy clustering, despite using abundance information which puts strong constraints on the central HOD parameters, is unable to strongly constrain the halo mass scatter parameter at high stellar mass thresholds, whereas lensing is able to unveil the large ambiguity in the scatter parameter. This lack of constraint could be driving the disagreements between the two studies for thresholds beyond 10^10.0. Even though the high mass slope of the SMHM relation makes the stellar mass a poor tracer of its host halo mass <cit.>, lensing is clearly more effective than clustering in probing the scatter in the SMHM relation. In the left hand panel for redshift bin 1 of Fig. <ref>, we observe that the results of M22 (shown with the same color scheme as described before) for either the V_peak or the M_prog model are consistent with our results. We do see a difference between the results depending upon the resolution, and it appears that the two simulation boxes can trade between M_min and the scatter σ_log M so as to maintain a similar value of M_cen. This can be seen in the right panel of Fig. <ref>, where we compare the scatter from M22 in the two different simulations with our results.
The best-fit constraints on the halo mass and scatter parameters from I20 are shown as points with 1σ errors in the left and middle panels of Fig. <ref>, where blue and red correspond to redshift bins 1 and 2, respectively. The underestimation of the WLS and of the average central halo mass at the lowest mass threshold of the z_1 bin (see Figs. <ref> and <ref>) is caused by the correspondingly larger best fit value of the scatter σ_log M. In redshift bin z_2, however, their best fit scatter is in line with our expectation, and the underestimate of the WLS is instead driven by the lower value of M_min preferred by the clustering signal when combined with the abundance.
While we use the same cosmology as I20, we note that differences in their modelling ingredients may have a non-negligible impact on this comparison. To be more specific, I20 use a large-scale halo bias function and a halo mass function that were each calibrated on different simulations, that is, the bias from <cit.> but the mass function as given by Sheth & Tormen <cit.>. Also, I20 use a different halo mass-concentration relation than us, although we have an extra free parameter c_fac which can subsume such differences. Similarly, M22 use a halo mass definition based on the mass within the virial radius, M_vir. We convert M_vir to M_200m using colossus <cit.> when making a direct comparison of halo masses.
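For reference, a minimal sketch of such a mass-definition conversion with colossus is shown below; the cosmology label, halo mass, concentration and redshift are illustrative placeholders rather than values used in this paper.

```python
from colossus.cosmology import cosmology
from colossus.halo import mass_defs

cosmology.setCosmology('planck15')          # illustrative cosmology choice
Mvir, cvir, z = 1e12, 9.0, 0.4              # assumed halo mass, concentration, redshift
# Convert a virial mass to the 200-times-mean-density definition used here.
M200m, R200m, c200m = mass_defs.changeMassDefinition(Mvir, cvir, z, 'vir', '200m')
```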
§ CHALLENGES AND FUTURE WORK: PHOTOMETRIC DATA AND ASTROPHYSICAL INFERENCES
We have inferred the galaxy stellar mass - halo mass scaling relation from a joint analysis of the abundance and weak lensing signal in this paper. The inferred relation assumes the lens galaxy properties given by the photometric redshift and stellar mass estimates from the template fitting method MIZUKI <cit.>. However, it is important to note that the presence of statistical or systematic errors in photometric redshifts can propagate into the selection of our sample, as well as the measured abundances and the weak lensing signal, in a non-linear manner. As discussed in Section <ref>, the errors in photometric redshifts are expected to positively correlate with those in the stellar masses <cit.>. Such correlated errors, even if they are only statistical, can result in a number of lower mass galaxies scattering into our stellar mass threshold and some of the high mass galaxies scattering out instead. Similar effects can also be at play at the boundaries of our redshift bins. The stellar mass bin thus does not represent a true stellar mass threshold in the presence of such errors. Moreover, such errors are also expected to affect the true average redshift of the sample, as well as the abundance measurements. The abundance measurements are further complicated by issues in the determination of the volume associated with the galaxies, owing to the quality cuts on photometric redshift applied in I20. In case such volume determination uncertainties affect galaxies at all stellar masses equally, one could correct for such issues by comparing against prior determinations of the abundance in the literature. However, in general, the selection effects are often much more nuanced than simple volume misestimates, and are not entirely straightforward to correct.
Even though we explored the effect of the photometric redshifts of source galaxies on the weak lensing signal, these measurements can also be affected by the uncertainties in the photometric redshifts of the lens galaxies. The lens galaxy redshift is used to assign the projected comoving impact parameter at which the light from background galaxies passes the lens before it reaches us. The critical surface density estimates used to convert the shear to the excess surface density also depend upon the redshift of the lens galaxies. Thus, the interpretation of the weak lensing signal can also be affected by the use of photometric redshifts for the lenses. Therefore, each of the above mentioned measurements can impact the inferred HOD parameters in a variety of ways.
Given these uncertainties, we refrain from making direct comparisons between these results and those present in the literature on the stellar mass halo mass relation. We restrict our comparison to those studies which use the same sample of galaxies and have similar assumptions in order to make a fair comparison between the results of these studies with the results we obtain. In order to enable comparison with the broader literature, in a future study, we will use a forward modelling approach and ascertain the level of systematic bias by making use of mock galaxy catalogs that have the errors in photometric redshifts as expected from the photometry from the HSC survey.
The Subaru HSC survey can map out galaxies to even higher redshifts than those considered in this study. However, beyond the median redshift of the survey we become exceedingly sensitive to potential systematic biases due to the use of photometric redshift estimates for the source galaxies. We also expect magnification bias to start to play a role by correlating the lens and source number densities, especially for galaxies that lie at the steep end of the luminosity function <cit.>. Eventually, once we have control over all the above systematics, it will become interesting to model the clustering, the lensing and the abundance of galaxies as a function of stellar mass in multiple redshift bins.
§ SUMMARY AND CONCLUSIONS
We have investigated the galaxy-dark matter connection and its evolution using samples of photometric galaxies from the HSC survey with varying thresholds of stellar mass spanning 8.6 ≤ log M_* ≤ 11.2 in the redshift ranges [0.30,0.55) and [0.55,0.80). Our results are based on the weak lensing signal measured for these samples using the Year 1 catalog of source galaxy shapes from the HSC survey, together with measurements of the abundance of galaxies. We carry out a Bayesian analysis to infer the posterior distribution of parameters that describe the halo occupation distribution of these galaxies. The key results and findings of our study are summarized as follows.
* We present high signal-to-noise ratio measurements (SNR ranging from 30-50) of the lensing signals in both redshift bins for all of our samples. We show the robustness of the measured lensing signals with multiple null tests, such as the tangential and cross components of the lensing signal around random points and the cross component around lens galaxies. We also find that the boost factors for our signals are statistically insignificant and that the biases due to the use of photometric redshifts for the source galaxies are ∼ 1% and ∼ 4% for redshift bins 1 and 2, respectively. These tests of systematics indicate that our measurements are not heavily affected by contamination from either foreground or background galaxies.
* We fit these weak lensing measurements together with the abundances of galaxies with a simple 5 parameter HOD model per sample in the context of the Planck cosmological model and show that the model provides a reasonable description of the data. We infer the posterior distribution of these parameters given the measurements.
* We show that the weak lensing measurements and the abundances on their own constrain the central HOD parameter log M_min and the scatter σ_log M in a degenerate manner. However, these degeneracy directions are different for each of the observables, and hence a combination of the two helps break the degeneracy. We also show the impact of using different abundances from the literature, and we show that the average halo masses of central galaxies are well constrained irrespective of which abundances are used.
* We find that the average halo masses of central galaxies increase with the stellar mass threshold of the subsample for both redshift bins 1 and 2. Comparison between these scaling relations at the two different redshifts shows very mild evolution, if any.
* We also compare our results with the constraints obtained by the study of I20, who jointly model the abundance and clustering of the same sample of galaxies. We show that the best fit model of I20 underestimates the observed lensing signals by about 10%-30% in the 1-halo regime for mass thresholds up to 10^10 and overestimates the lensing signal by up to 50%-60% for more massive threshold samples. Nevertheless, we find excellent agreement between the constraints on the average halo masses of central galaxies for thresholds up to 10^10, while the results from I20 overestimate these average halo masses for higher threshold samples.
* We also compare our results with the subhalo abundance matching method of M22, which uses the abundance and clustering measurements of I20 as constraints. We find that their models, which assume a monotonic relation between either V_peak or M_prog of the subhalos and the stellar mass of galaxies, are able to predict lensing signals consistent with our measurements for stellar mass thresholds up to 10^10. Both models fail to explain the lensing signal, especially within the 1-halo regime, for higher stellar mass threshold samples.
* Finally, we find that the satellite fractions predicted by our fiducial analysis are consistent with the clustering study of I20 given the statistical errors. However, we find that the models from M22 based on subhalo abundance matching predict an additional satellite fraction of up to 15% over our constraints.
The paper demonstrates the great potential of large imaging surveys such as the HSC to infer the galaxy-dark matter connection over a large range of redshifts using multiple observational probes such as the abundance of galaxies, their clustering and their galaxy-galaxy lensing signal. An accurate inference of the true underlying scaling relations between stellar mass and halo mass, however, will depend upon quantitative estimates of how the photometric redshift errors in the lens galaxy population affect the underlying stellar mass threshold samples. Assessment of the extent of such biases will be subject of our work in the near future.
§ ACKNOWLEDGEMENTS
We thank Divya Rana, Amit Kumar, Preetish K. Mishra, Susmita Adhikari, Arka Banerjee, Supranta S. Boruah and Priyanka Gawade for useful discussions and their comments on the draft version of the paper. We also thank our research advisory committee members Aseem Paranjape, Masamune Oguri and Anupreeta More for useful discussions on the current project along with comments on the draft version of this paper. NC is thankful for the financial support provided by the University Grants Commission (UGC) of India. He is also thankful to IUCAA for the amicable environment and hospitality offered to students.
We acknowledge the use of Pegasus, the high performance computing facility at IUCAA. The calculations in part were carried out on Cray XC50 at Center for Computational Astrophysics, National Astronomical Observatory of Japan. Data analysis was in part carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan.
This work was supported in part by JSPS KAKENHI Grant Numbers JP23K13145 (SI), JP19H00677, JP21H05465, JP22K03644 (S. Masaki) and JP21K13956 (DK).
TO acknowledges support from the Ministry of Science and Technology of Taiwan under Grant Nos. MOST 111-2112-M-001-061- and NSTC 112-2112-M-001-034- and the Career Development Award, Academia Sinica (AS-CDA-108-M02) for the period of 2019 to 2023.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
We also thank Instituto de Astrofisica de Andalucia (IAA-CSIC), Centro de Supercomputacion de Galicia (CESGA) and the Spanish academic and research network (RedIRIS) in Spain for hosting Uchuu DR1, DR2 and DR3 in the Skies & Universes site for cosmological simulations. The Uchuu simulations were carried out on Aterui II supercomputer at Center for Computational Astrophysics, CfCA, of National Astronomical Observatory of Japan, and the K computer at the RIKEN Advanced Institute for Computational Science. The Uchuu Data Releases efforts have made use of the skun@IAA_RedIRIS and skun6@IAA computer facilities managed by the IAA-CSIC in Spain (MICINN EU-Feder grant EQC2018-004366-P).
We have used <cit.> to create degeneracy plots and <cit.> to create triangle/corner plots.
§ DATA AVAILABILITY
The weak lensing signal measurements after applying all correction as mentioned in Section <ref> for our stellar mass threshold lens samples along with the measured covariance matrices and abundances are made available in a public github repository, <https://github.com/0Navin0/galaxy_halo_connection_in_HSC>. This repository also contains our modelling constraints from Tables <ref> and <ref> along with additional relevant plots for interested readers.
EDGE: The direct link between mass growth history and the extended stellar haloes of the faintest dwarf galaxies
Alex Goater, Justin I. Read, Noelia E. D. Noël, Matthew D. A. Orkney, Stacy Y. Kim, Martin P. Rey, Eric P. Andersson, Oscar Agertz, Andrew Pontzen, Roberta Vieliute, Dhairya Kataria, Kiah Jeneway
http://arxiv.org/abs/2307.05130v1
Ultra-faint dwarf galaxies (UFDs) are commonly found in close proximity to the Milky Way and other massive spiral galaxies. As such, their projected stellar ellipticity and extended light distributions are often thought to owe to tidal forces. In this paper, we study the projected stellar ellipticities and faint stellar outskirts of tidally isolated ultra-faints drawn from the `Engineering Dwarfs at Galaxy Formation’s Edge’ (EDGE) cosmological simulation suite. Despite their tidal isolation, our simulated dwarfs exhibit a wide range of projected ellipticities (0.03 < ε < 0.85), with many possessing anisotropic extended stellar haloes that mimic tidal tails, but owe instead to late-time accretion of lower mass companions. Furthermore, we find a strong causal relationship between ellipticity and formation time of an UFD, which is robust to a wide variation in the feedback model. We show that the distribution of projected ellipticities in our suite of simulated EDGE dwarfs matches well with that of 21 Local Group dwarf galaxies. Given the ellipticity in EDGE arises from an ex-situ accretion origin, the agreement in shape indicates the ellipticities of some observed dwarfs may also originate from a similar non-tidal scenario. The orbital parameters of these observed dwarfs further support that they are not currently tidally disrupting. If the baryonic content in these galaxies is still tidally intact, then the same may be true for their dark matter content, making these galaxies in our Local Group pristine laboratories for testing dark matter and galaxy formation models.
galaxies: dwarf; galaxies: formation; galaxies: stellar content; galaxies: structure
§ INTRODUCTION
The Local Group (hereafter LG) of galaxies offers an excellent laboratory to constrain the ΛCDM paradigm. The satellite systems orbiting the Milky Way (hereafter MW) allow us to investigate the processes and feedback effects governing galaxy formation and evolution in exquisite detail. In particular, the LG is host to the smallest galaxies known to date, the ultra-faint dwarfs (hereafter UFDs). With the faintest containing a few thousand stars and the brightest having a luminosity of ∼ 10^5 L_⊙, UFDs represent the extreme lower limit of the galaxy luminosity function <cit.>, carrying key evidence that can shed light on fundamental galactic processes <cit.>. These systems are the oldest, most chemically primitive <cit.> and most dark matter dominated <cit.> systems in the Universe and, hence, make excellent laboratories to constrain the nature of the mysterious dark matter.
The first major milestone in the search for UFDs was brought about by the advent of digital surveys, such as the Sloan Digital Sky Survey (SDSS), where searches for these faint systems in our LG were completed up to a surface brightness limit of 25.5 mag arcsec^-2 <cit.>. Thanks to deep imaging and spectroscopy with modern telescopes such as the Dark Energy Camera (DECam) <cit.>, the past decade has seen an explosion in the number of faint dwarf galaxies discovered, with 68 now known around the MW, 9 around M31 and 15 around galaxies beyond the LG. These newly discovered UFDs in our LG, with solar luminosities ≲ 10^5 L_⊙, have vastly improved our understanding of these systems <cit.>.
While UFDs offer the promise of unique constraints on galaxy formation models <cit.> and the nature of dark matter <cit.>, this is made more challenging by the potential impact of tidal forces from nearby host galaxies like the MW and M31 <cit.>. These larger systems can severely disrupt the environment of nearby smaller galaxies by stripping their dark matter content and then their stellar content, thus deforming the structure of the latter. Evidence for tides has been claimed for many nearby UFDs based on their extended light profiles <cit.>, apparent tidal features <cit.>, velocity gradients in their outermost stars <cit.> and/or constraints on their orbits <cit.>. However, the latest constraints on UFD orbits suggest that a number are tidally isolated at present <cit.>. A notable example is Tucana II, a LG UFD located at a distance of ∼58 kpc away from the MW <cit.>, which exhibits multiple features characteristic of tidal isolation <cit.>. <cit.> have recently reported member stars anisotropically distributed around Tucana II, up to nine half-light radii from its galactic centre. These member stars reach out to and even extend past our calculated tidal radius estimates of Tucana II, r_t≈ 0.76 kpc. This indicates that there are plenty of stars not of tidal origin in the region r_1/2 < R < r_t, where r_1/2 is the half-light radius.
Similar extended structures have also been discovered around Boötes I <cit.>, Ursa Major I <cit.>, Coma Berenices <cit.>, Ursa Minor <cit.>, Fornax <cit.>, Hercules <cit.>, and Sculptor <cit.>. One possible explanation for the presence of such anisotropic stars is dwarf-dwarf tidal interactions prior to infall <cit.>. However, it is interesting to ask whether such extended light profiles and apparent `tidal distortions' can occur via other means for tidally isolated systems. For example, <cit.> argue that galaxy mergers could explain the extended stellar halo around Tucana II.
<cit.> discerned how different galaxy models affect the contribution of accreted stars to dwarf galaxy haloes, finding that minor mergers hold the strongest clues for dwarf galaxy models.
Before the aforementioned discoveries, the existence of stellar haloes at these low mass scales remained inconclusive since they are thought to relate to early mergers that are less common as one goes down the mass scale of galaxies <cit.>. The mere existence of stellar haloes around dwarf galaxies helps to constrain the nature of galaxy formation and dark matter on the lowest mass scales.
In this paper, we study the ellipticity and extended light around tidally isolated UFDs drawn from the `Engineering Dwarfs at Galaxy Formation's Edge' (EDGE) project <cit.>. These tidally isolated galaxies are found to possess anisotropic and extended stellar outskirts, resembling the structure of tidal tails. The existence of these stellar haloes in the EDGE UFDs prompts us to look into their morphological origins, given that their isolation intrinsically rules out a tidal origin. Furthermore, to compare the morphologies of the full EDGE simulation suite with an observed sample of UFDs, we use a Maximum Likelihood technique that was developed to calculate the observed structural parameters of dwarf galaxies <cit.>. If the projected ellipticities of the entire EDGE simulation suite reasonably match those of observed dwarfs, then, given that the EDGE UFDs are designed to be isolated from other massive systems, it is possible that tidal features in local UFDs are not necessarily due to tides and instead originate from a non-tidal scenario.
This paper is organised as follows. In Section <ref>, we discuss the setup of the EDGE simulations, the prerequisites we place on the selection of observed data samples, as well as the methods we use to derive the EDGE structural parameters in the fashion of an observational astronomer. In Section <ref>, we describe the results obtained pertaining to the projected ellipticities of the EDGE simulations; the relationship between ellipticity and formation time, their comparison to observations, as well as the extended stellar light of the faintest galaxies. We then examine these results and look towards their implications, discussing the origin of stellar ellipticity in the EDGE UFDs. Finally, we draw conclusions in Section <ref>.
§ METHOD
§.§ EDGE Simulations
The suite of simulations examined in this work belongs to the EDGE project. These simulations were analysed using the tangos database package <cit.> and the pynbody analysis package <cit.>. A more comprehensive review of the simulations, and their underlying sub-grid physics, is found in <cit.>.
The EDGE project is designed to study isolated UFDs with halo mass 10^9 < M/M_⊙ < 5 × 10^9, in a simulated 50 Mpc void region. The simulations are initialised assuming the cosmological parameters Ω_m = 0.309, Ω_Λ = 0.691, Ω_b = 0.045 and H_0 = 67.77 km s^-1 Mpc^-1, taken from the PLANCK satellite 2013 data release <cit.>. The volume is initially simulated at a 512^3 resolution, from z = 99 to z = 0. The largest void volume in this region is then selected and resimulated at a resolution of 2048^3, with the inclusion of appropriate small-scale power in the grid. Within this resimulated region, the HOP halo finder <cit.> is implemented to find dark matter haloes at z = 0. Once a suitably isolated candidate has been confirmed, the halo is resimulated via a zoom-in simulation technique <cit.> up to redshift z = 0.
We approach a maximum spatial resolution of ∼3 pc in the hydrodynamic grid. This high spatial resolution allows for the accurate injection of energy from a supernova, thus reducing the need for mechanisms required to prevent over-cooling of the supernovae-heated gas (see e.g ).
The adaptive mesh refinement hydrodynamics code, ramses <cit.>, is used to model the evolution of both baryonic matter and dark matter. The baryonic physics model makes use of a Schmidt law <cit.> to describe star formation in cells of gas that satisfy the required temperature and density (see ). Initially, each stellar particle represents 300 M_⊙ and can be thought of as a mono-age stellar population described by a Chabrier initial mass function <cit.>.
The epoch of reionisation is modelled as a time-dependent uniform UV background at z = 8.5 <cit.>. The reader may refer to <cit.> for details on the specific implementation of this model.
§.§ Selection of observed candidates
To provide a clear comparison of the shape distribution of observed dwarfs to the shape distribution of the EDGE simulation suite, we collate a refined list of dwarf galaxies belonging to the MW, presented in Table <ref>. We place two stringent constraints when creating this sample of galaxies. The first constraint is related to the mass of the galaxy, where we only include observed dwarfs that have stellar masses within approximately one order of magnitude of the EDGE dwarfs. For the second constraint, we adopt a list of observed galaxies that are thought to be tidally undisturbed (Kim et al., in preparation), since the EDGE galaxies were specifically selected due to their isolation. This second requirement is possible to meet thanks to the latest orbits taken from Gaia EDR3 <cit.>, which <cit.> then combine with accurate photometry to determine the systemic proper motions, thus providing insight into their respective tidal interactions. It should be noted that we use the pericentre values including the influence of the LMC, to provide us with the most authentic orbital scenarios.
The metric for isolation of an UFD is categorised as a ratio between its tidal and half-light radius. However, the time variation of the tidal radius and its dependency on the mass distribution of both systems involved are usually poorly understood; hence the tidal radius remains ambiguously defined <cit.>. A solution for this uncertainty is to approximate the tidal radius as the position where the total force (from the satellite and host) matches the centrifugal acceleration needed to stay on the same orbit as the satellite. The tidal radius is given as follows <cit.>:
r_t = (m_dwarf/3M_MW)^1/3 d,
where r_t is the tidal radius, m_dwarf is the dwarf galaxy mass, M_MW is the MW mass enclosed within the dwarf orbital radius, and d is the distance between the dwarf and the galactic centre of the host system. It should be noted that Equation <ref> is only an approximation, as it assumes a point-mass approximation for the dwarf and the MW, purely radial motion between the dwarf and the MW, and stars within the dwarf moving on purely radial orbits <cit.>.
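As a worked example of this equation: plugging in an illustrative dwarf mass, an assumed enclosed Milky Way mass and a Tucana II-like distance of ~58 kpc gives a tidal radius of order the estimate quoted in the introduction. The two masses below are assumptions for illustration only, not values adopted elsewhere in this paper.

```python
# Illustrative inputs (assumed): dwarf mass, MW mass enclosed within the orbit,
# and a Galactocentric distance comparable to Tucana II.
m_dwarf = 3e6        # [Msun]
M_MW    = 4e11       # [Msun], enclosed within the dwarf's orbital radius
d       = 58.0       # [kpc]

r_t = (m_dwarf / (3.0 * M_MW)) ** (1.0 / 3.0) * d
print(f"r_t ~ {r_t:.2f} kpc")   # of order ~0.8 kpc for these assumed inputs
```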
Following <cit.>, we define dwarfs to be “tidally isolated” if they have r_t/r_1/2 > 3. In practice, even these dwarfs are likely stripped to some degree. <cit.> recently showed using the FIRE simulations that many "intact" looking satellites have tidal tails, with ∼ 66% of a total of 64 stream progenitors being mistaken for intact satellites. However, it will be challenging to detect any stripping from their extended light distributions. The only exception to this is dwarfs that interact with other dwarfs before infall to the MW. These can be on apparently benign orbits, where the galaxy shows no present-day sign of prior tidal interactions, despite having experienced significant stripping in the past <cit.>. At present, no method has been proposed for distinguishing such dwarfs from genuinely tidally isolated systems. As such, we must accept the possibility that some of our samples will be contaminated by such tidally affected systems. Nevertheless, in a ΛCDM cosmology, these are expected to be quite rare, with <cit.> finding 9 out of 212 simulated luminous dwarfs to be analogues of this scenario.
§.§ Structural parameters
To derive the structural parameters of the EDGE simulations, we employ a Maximum Likelihood technique similar to the one utilised by observational astronomers to uncover the structural parameters for dwarf galaxies in the SDSS <cit.> and PandAS <cit.> surveys[PandAS is an astronomical survey focused on the content and structure of M31 and M33 <cit.>.]. In this paper, the method is used in such a way as to treat the simulations as if they were two-dimensional observations projected onto the sky, so that the resemblance between the EDGE simulations and observational data can be accurately analysed.
Equation <ref> calculates the probability each star particle contributes to the structural parameters, given its respective position. As the stellar particles in EDGE are representative of a stellar mass on the order of 10^2 M_⊙, we can determine how much each stellar particle should contribute to the calculation of the structural parameters by weighting via the stellar mass.
l_i = [1.68^2 N_* / (2π r_1/2^2 (1-ε))] exp(-1.68 r_i/r_1/2) (m_i/M_*),
where the relation between the half-light radius and the exponential scale radius of the profile is r_1/2≈ 1.68r_e, N_* is the number of stars in the sample, M_* is the total stellar mass, m_i is individual stellar mass, ε is the ellipticity defined as ε = 1 - b/a, with b/a as the minor-to-major-axis ratio of the system, θ is the position angle of the major axis, defined as East of North, r_1/2 is the half-light radius of its assumed exponential radial profile, and r_i is the elliptical radius. Here, r_i, is related to the spatial positions x_i and y_i as follows,
r_i = ( [ (1/(1-ε)) (x_i cosθ - y_i sinθ) ]^2 + (x_i sinθ + y_i cosθ)^2 )^{1/2}
A more comprehensive mathematical approach may be found in <cit.>, where a similar exponential model is used to describe a low stellar density.
The total log-likelihood is calculated by taking the summation of all the logged individual probabilities,
logℒ = ∑_ilog l_i
We determine the most likely shape parameters (ε, θ, r_1/2) for each EDGE UFD with the emcee code <cit.>. This is a Markov Chain Monte Carlo (MCMC) method, and it is found to definitively converge upon the structural parameters of the galaxy when 50 walkers are run over a total of 600 steps. Similar to <cit.>, we place flat priors for the three parameters such that 0 ⩽ε < 1, θ is in an interval of 180 degrees, and r_1/2 > 0.
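To make the fitting step concrete, the sketch below outlines one possible implementation of the likelihood in Equation <ref> and the MCMC sampling; it is not the pipeline used for EDGE, and the particle arrays (x, y positions and stellar masses m), the initial walker ranges and the burn-in fraction are illustrative assumptions.

import numpy as np
import emcee  # assumed available (pip install emcee)

def log_likelihood(params, x, y, m):
    eps, theta, r_half = params
    # Elliptical radius r_i of each star particle (Equation above).
    major = (x * np.cos(theta) - y * np.sin(theta)) / (1.0 - eps)
    minor = x * np.sin(theta) + y * np.cos(theta)
    r = np.sqrt(major**2 + minor**2)
    n_star, m_tot = len(x), m.sum()
    # Mass-weighted exponential-profile probability of each particle, in log.
    log_l = (np.log(1.68**2 * n_star / (2.0 * np.pi * r_half**2 * (1.0 - eps)))
             - 1.68 * r / r_half + np.log(m / m_tot))
    return np.sum(log_l)

def log_prob(params, x, y, m):
    eps, theta, r_half = params
    # Flat priors: 0 <= eps < 1, theta in a 180-degree interval, r_half > 0.
    if not (0.0 <= eps < 1.0 and 0.0 <= theta < np.pi and r_half > 0.0):
        return -np.inf
    return log_likelihood(params, x, y, m)

def fit_shape(x, y, m, nwalkers=50, nsteps=600, seed=0):
    rng = np.random.default_rng(seed)
    p0 = np.column_stack([
        rng.uniform(0.0, 0.5, nwalkers),    # ellipticity
        rng.uniform(0.0, np.pi, nwalkers),  # position angle
        rng.uniform(0.1, 1.0, nwalkers),    # half-light radius (code units)
    ])
    sampler = emcee.EnsembleSampler(nwalkers, 3, log_prob, args=(x, y, m))
    sampler.run_mcmc(p0, nsteps)
    # Discard the first half as burn-in and return the flattened chain.
    return sampler.get_chain(discard=nsteps // 2, flat=True)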
§.§ Cutting on surface brightness
Since one of our main aims is to compare simulations to observations, we attempt to replicate the same methods and techniques that observational astronomers use. Therefore, during the MCMC calculation we apply a surface brightness cut to the EDGE simulations. This creates the effect that our mock observations of simulations are limited by surface brightness, just as observations are through the use of telescopes.
Inspired by the literature, we place two different surface brightness cuts on the simulated UFDs. The first cut is at 25.5 mag arcsec^-2 since this was the surface brightness limit of the SDSS telescope <cit.> used to observe several MW dwarf galaxies in Table <ref>. We place the next surface brightness cut at 30 mag arcsec^-2 to highlight what we should be able to predict with the most modern detection instruments. The latter cut is a more optimistic approach inspired by the contemporary advances of observational astrophysics in recent years, i.e. the Dark Energy Survey (DES) <cit.>, the DECam Local Volume Exploration survey (DELVE) <cit.>, and within the next few years, the Vera C. Rubin Observatory <cit.>.
§.§ Gaussian KDE fitting method
The ellipticity we calculate with our Maximum Likelihood technique is the projected ellipticity of the UFD and not the true ellipticity. Accordingly, we orient each EDGE galaxy around 100 random viewing angles to create a probability distribution function (hereafter PDF) of projected ellipticities.
All our PDFs are created with a kernel density estimator (KDE), used to convolve our data with a Gaussian kernel. A certain degree of smoothing is employed to produce the PDFs of projected ellipticity for both EDGE and observations. We use Silverman's Rule to define the degree of this smoothing so that the PDF provides a match to the underlying data. With each PDF, we include a rug plot displaying the data points from which the distributions are constructed.
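A minimal example of this PDF construction with the Gaussian KDE implementation in scipy and Silverman's rule; the ellipticity samples below are random placeholders standing in for the values measured from the 100 viewing angles.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
ellipticities = rng.uniform(0.1, 0.6, size=100)  # placeholder measurements

kde = gaussian_kde(ellipticities, bw_method="silverman")
grid = np.linspace(0.0, 1.0, 200)
pdf = kde(grid)  # smoothed PDF of projected ellipticity on the grid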
§ RESULTS
The GenetIC code <cit.> is utilised within the EDGE simulations to `genetically modify' the initial conditions for an EDGE UFD (t_form = 2.4 Gyrs) to form three unique variations at earlier (t_form = 2.8 Gyrs) and later times (t_form = 3.1 Gyrs, 3.6 Gyrs) <cit.>. We define formation time as the time when the galaxy has assembled 50% of its final mass at z=0.
<cit.> studied the mass accretion histories for these UFDs, and revealed that the later-forming variations assemble their stellar mass from late-time dry mergers. Such an assembly history leads to extremely low surface brightness and an increase in the half-light radius (i.e. an increase in the size of the galaxy). Conversely, the mass accretion history of the earlier-forming galaxy primarily consisted of stars that had formed in-situ, leading to a higher surface brightness and a decrease in the half-light radius.
§.§ EDGE projected ellipticity correlates with formation time
Figure <ref> displays the PDFs of projected ellipticity for the same fiducial UFD and three variations of this fiducial UFD at earlier and later times. The projected ellipticities here are taken at a surface brightness cut of 30 mag arcsec^-2.
A systematic shift from a lower projected ellipticity to a higher projected ellipticity is seen from the peak of the distributions. This shift in projected ellipticity correlates with the formation time of the UFDs, i.e. earlier assembly times have lower ellipticities, and later assembly times have greater ellipticities. Our findings represent the first evidence in support of a causal relationship between the time of main halo formation in an UFD and the galactic stellar ellipticity of an UFD.
To provide complete clarity, the ellipticity of the stellar content is directly related to the distribution of the stellar structure <cit.>. The distribution of the stellar content in UFDs will depend on whether star-forming gas is available. If it is, the majority of the UFD forms via in-situ star formation, and this leads to a rounder, more compact shape. However, if the gas escapes the UFD, no more star formation will occur, and the UFD will form via ex-situ mergers, leading to a fainter, more elliptical shape. Therefore, the ellipticity is determined by the method of formation of the stellar component, which is decided by the availability of gas in the galaxy. However, this availability of gas is decided by the formation time of the main halo and whether it is large enough at reionisation to retain its gas. And so, the formation time of the main halo dictates the availability of gas, which in turn decides whether the stars originate in-situ or ex-situ, i.e. where the ellipticity arises from.
Furthermore, we must now consider that this relationship is affected by how much stellar mass forms in-situ in the centre of the UFD, which is very sensitive to the choice of feedback model. A model with less feedback will produce far more in-situ stars, as less feedback implies an increase in star-forming gas. There will also be an increase in the late-time accreted component of the stellar content; however, it becomes uncertain how our relationship with ellipticity would then hold.
Figure <ref> displays the PDFs of projected ellipticity at a surface brightness cut of 30 mag arcsec^-2 for the four variations of the UFD in Figure <ref>, now implemented with a `weak-feedback' model. This model artificially limits the efficiency of the supernovae wind driving by placing numerical limits on the maximum supernovae gas temperatures and velocities (see <cit.>).
Once again, we see a clear systematic shift from a lower projected ellipticity to a higher projected ellipticity from the peaks of the distributions. This shift in projected ellipticity correlates with the formation time of the UFDs going from lower ellipticities at earlier assembly times to higher ellipticities at later assembly times. Therefore, even if our feedback model does not provide an absolute description of reality, under the regime of this `weak feedback' model, the relative ordering of ellipticity with mass growth histories stands. Our relationship is robust to the feedback physics we use, and the stars at large radii that make up these extended tails are found within two different feedback models, making them a strong prediction from our simulations.
We observe that the PDFs shown in Figure <ref> have a great deal of overlap: we calculate a 12% chance of mistaking the projected ellipticity of the earlier-forming UFD for that of the latest-forming UFD. By contrast, for the PDFs shown in Figure <ref>, this chance is only 2%.
§.§ Projected ellipticity in EDGE vs observed dwarf galaxies
The PDFs of the projected ellipticities for the EDGE simulation suite are shown in Figure <ref>. Here, we use the full suite of the EDGE simulations to make our comparison with observations. To see a tabulated summary of the 10 EDGE UFDs, please refer to <cit.>. We note that only five of these ten UFDs are unique realisations; we construct the additional five UFDs from genetic modifications applied to the original five.
The EDGE PDFs are given at surface brightness cuts of 25.5 mag arcsec^-2 (blue dashed) and 30 mag arcsec^-2 (red dotted). Also shown in Figure <ref> is the PDF of the projected ellipticities created from the observed ellipticity samples (black) in Table <ref>, along with the respective confidence intervals at 68% (1σ) and 95% (2σ) variance, calculated from the uncertainties of the observed samples.
From the newly discovered relationship between an UFD's formation time and ellipticity, we know that our more elliptical galaxies are later forming and the less elliptical galaxies are earlier forming. Studying the population of haloes in the EDGE volume, it becomes clear that haloes forming earlier and later are significantly rarer. Therefore, when choosing to weight our EDGE PDFs of projected ellipticity, we do so via their formation times as opposed to stellar mass. We assume that formation time is the dominant variable beyond mass when it comes to the ellipticity of these galaxies. To weight the EDGE PDFs by formation time, we use our probability density function of haloes in the EDGE volume to weight the contribution of each EDGE UFD to the shape distribution. We note that we only include the formation times of systems with similarly sized halo masses to those of our EDGE samples. This weighting ensures that the EDGE PDFs are not biased by formation time. The UFDs with a more common formation time will contribute more to the EDGE PDFs, and the UFDs with a rarer formation time will contribute less to the EDGE PDFs, thus creating a more accurate comparison to the PDF from our sample of observed dwarfs.
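The sketch below shows one way such a formation-time weighting could be implemented with a weighted Gaussian KDE; it is not the code used in this work, and the inputs (per-UFD ellipticity samples, formation times, and a t_form_pdf callable giving the relative frequency of a formation time among similarly sized haloes) are assumed for illustration.

import numpy as np
from scipy.stats import gaussian_kde

def weighted_shape_pdf(samples, t_forms, t_form_pdf, grid):
    """samples: {name: array of projected ellipticities};
    t_forms: {name: formation time}; t_form_pdf: frequency of that time."""
    values, weights = [], []
    for name, ell in samples.items():
        w = t_form_pdf(t_forms[name])  # weight all samples of this UFD equally
        values.append(np.asarray(ell))
        weights.append(np.full(len(ell), w))
    kde = gaussian_kde(np.concatenate(values), bw_method="silverman",
                       weights=np.concatenate(weights))
    return kde(grid)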
The comparison between EDGE PDFs presented in Figure <ref> displays the peak of the EDGE distribution shifting towards higher ellipticities at a fainter surface brightness cut of 30 mag arcsec^-2. As these galaxies are viewed out to a fainter surface brightness limit, the number of stars increases in our `observations' of the EDGE UFDs. Therefore, the increase in projected ellipticity confirms that these galaxies have an elongated stellar distribution along one axis.
It is important to highlight the inhomogeneity of the surface brightness limits at which the galaxies in our sample were observed. A number of ellipticity measurements were taken at a surface brightness limit ∼ 25.5 mag arcsec^-2, but for some galaxies observed more recently, the surface brightness limits extend up to ∼ 30 mag arcsec^-2, thanks to improved detection instruments. Following the above-mentioned relation in EDGE, if some galaxies were observed out to a fainter surface brightness limit, they would sit at higher ellipticities on the observed PDF, and this could explain the existence of the second smaller peak in the observed distribution.
The projected ellipticities of EDGE galaxies viewed at a more luminous surface brightness limit of 25.5 mag arcsec^-2 provide good agreement with a significant number of dwarf galaxies from the observed sample, with one of the peaks of the blue EDGE distribution lying well within the overlap of the larger observed peak. We also see a good agreement between the smaller peak of observed galaxies and the fainter distribution of EDGE galaxies at 30 mag arcsec^-2. These agreements point towards a relation between the shape of the EDGE UFDs and the sample of observed dwarfs up to a surface brightness limit of 30 mag arcsec^-2. As our EDGE UFDs are tidally isolated, this suggests a number of the galaxies in our sample of observed dwarfs have ellipticities that are perhaps unduly attributed to tides. This possible tidal isolation is further reinforced by the tidal radii we calculate from the large pericentres of the observed sample.
§.§.§ A cautionary note on tidal effects in dwarf galaxies
Following our discussion in Section <ref>, even though we make an effort to try to reduce the number of tidally affected galaxies in Table <ref>, there is still the possibility that tides may have some influence on observations, and even these dwarfs in our sample are likely stripped to some degree. For example, <cit.> report from their simulations that more than 50% of satellites have tidal tails at distances 50-200 kpc from their hosts. Consequently, this may be the reason why we see the PDF of observed galaxies skew to higher projected ellipticities in Figure <ref>, where we notice from the underlying rug plot that there is a handful of galaxies lying beyond the second peak in the observed distribution.
Alternatively, we also have to account for the fact that the present-day orbit does not always provide the complete history of a galaxy. Given the density of the environment that these dwarfs live in, it is plausible they experienced previous interactions with close-by dwarf galaxies such as the LMC or even with each other, which may be yet another reason for the high ellipticity population in our observed sample. Using the example of Fornax, <cit.> show that systems appearing to be tidally isolated today can have had significant galaxy-galaxy interactions in the past. Analogues of this scenario appearing in our observed sample would shift the distribution of observed galaxies to a more elliptical peak. However, as previously stated in Section <ref>, this scenario is expected to be quite rare, with <cit.> finding a <5% chance of a Fornax analogue from their sample of 212 simulated dwarfs.
§.§ Origin of extended stellar content in EDGE
Figure <ref> represents the present-day spatial distributions for the same UFD in Figure <ref>, along with its three variations with altered formation times, and their underlying surface brightness maps. The surface brightness maps were created with a kernel smoothing scheme using the py-sphviewer package <cit.>.
As these are projected spatial distributions, they are susceptible to the two-dimensional effect of appearing less elliptical than their true galactic shapes imply. To counter this, we ensured that the stellar contents of all the galaxies were observed from their most elliptical viewing angle.
The first key result that stands out when observing the stellar content in Figure <ref> is the elongation that the systems possess along one axis. This elongation coincides with the stars coloured cyan, which represent stars that come from a late-time dry accretion origin. The stars coloured grey, on the other hand, are stars that form from an in-situ origin and we see these are much more central in the stellar distribution.
While the UFDs in the bottom two panels of Figure <ref> are the most elliptical in terms of the overall shape of the galactic centre, the extended stellar distributions of the top two panels are still very clearly discerned upon inspection.
The centres of the two UFDs residing in the top panels of Figure <ref> are not as elliptical; however, these galaxies still possess this extended distribution feature, with detritus of member stars located up to ∼ 78 half-light radii from the centre of the original galaxy and ∼ 56 half-light radii from the centre of the earlier-forming galaxy. To provide a visual reference for these large half-light radii, we include a white dashed circle in our spatial distributions, for which the radial distance from the centre represents ten half-light radii. From the surface brightness maps, we see that these outer stars at these distances will become observable at a surface brightness ≳ 34 mag arcsec^-2. We note how great the radial distance to ten half-light radii is for the later-forming galaxies, implying how faint they are compared to the earlier-forming UFDs.
The original UFD in the upper left panel of Figure <ref> has an apparent elongated distribution of stars and possesses a much larger spread of stars than either of the later-forming galaxies. This original galaxy has an ex-situ mass fraction of 2.96%.
The earlier-forming UFD in the upper right panel is much rounder at the centre, which is supported by the low projected ellipticity distribution shown in Figure <ref>. The earlier-forming galaxy has an ex-situ mass fraction of 1.05%, making it the UFD with the lowest amount of ex-situ content compared to the other three.
The later-forming UFD in the bottom left panel has less of an extended distribution than the original and early-forming UFDs, however, its galactic centre is more elliptical. The later-forming galaxy has an ex-situ mass fraction of 7.31%.
Finally, the latest-forming UFD in the bottom right panel is strongly elongated, with the most elliptical centre out of the four UFDs. The latest-forming UFD has the highest content of ex-situ stars with a mass fraction of 95.8%, given that practically all stellar content originates from ex-situ dry accretions. Also, the overall stellar mass for this galaxy is an order of magnitude smaller than the other three UFDs. Justified from the peaks of the PDFs in Figure <ref>, we can definitively say that the latest-forming UFD has the most elliptical galactic centre. These PDFs consist of ellipticities taken at a surface brightness cut of 30 mag arcsec^-2, meaning they exclude the majority of the extended stellar outskirts.
We trace back the origin of the stars in the EDGE UFDs and find that those coloured cyan in Figure <ref> accrete onto the systems through late-time dry mergers and make up the extended part of the stellar distribution. The stellar ellipticities in the EDGE UFDs thus originate from these late-time accretion events of smaller haloes that had their gas quenched by reionisation before they could merge onto the main halo. The bottom two panels of Figure <ref>, showing the later-forming galaxies, exemplify this as they assemble primarily through late-time dry accretion and have extended distributions of stars along one axis around the galactic centres, giving them systematically larger ellipticities.
On the other hand, the earlier-forming UFDs form their stellar content in-situ and have less elliptical galactic centres, with systematically lower ellipticities. However, in the spatial distributions of the two earlier-forming galaxies, there is still a distribution of stellar material coloured cyan, extending even further than the elongations depicted in the later-forming galaxies. Therefore, we show that even though these galaxies formed earlier, they still have elongated stellar content originating from the late-time accretion events of smaller haloes. Such features clarify that the extended starlight in EDGE is a natal characteristic of these ultra-faint systems, and these cyan coloured stars from late-time dry accretion events are the origin of this stellar ellipticity. For the first time, we show that the accretion mechanism giving rise to the extended shapes emerges naturally in a fully cosmological context, where several dark matter sub-haloes undergo hierarchical minor mergers following ΛCDM cosmology. And the existence of these stellar haloes around the smallest galaxies in the Universe constrains ΛCDM at the smallest scales.
§ CONCLUSIONS
Within our local volume of the Universe, we find a multitude of the faintest, most dark matter dominated galaxies. Given that these UFDs are scattered around more massive systems, it is logical to assume that these systems are extremely prone to tidal effects. Thus, the large projected ellipticities of the stellar distribution and the stellar detritus of these systems can easily be attributed to tidal deformation.
Utilising the EDGE simulations, we have shown that despite their tidal isolation, our simulated dwarfs exhibit anisotropic extended stellar outskirts that masquerade as tidal tails but are instead natal, owing to the origin of a late-time dry accretion assembly. Furthermore, we revealed that UFDs with later formation times have more elliptical stellar distributions, thus establishing a novel connection between the shape of an UFD and its respective formation time. This newly discovered relationship was robust to a wide variation in the feedback model, making it a strong prediction from our simulations. The above-mentioned results extend the conclusion found in <cit.>, in which the authors discovered that UFDs with later formation times have an extremely low surface brightness and a much larger stellar size. These discoveries within the EDGE fully cosmological context are vital in assessing the extent to which the smallest galaxies in the Universe have mechanisms and features indistinguishable from more massive galaxies, such as the MW, with respective fossil records detailing their history.
We studied the projected stellar ellipticities of 10 isolated UFDs in the EDGE cosmological simulation suite by implementing a well-known observational method to calculate structural parameters. The Maximum Likelihood technique, developed and employed by observational astronomers <cit.>, allowed us to make direct comparisons concerning the projected ellipticity of our simulations, contrasted to a refined sample of observed MW dwarf galaxies in our LG.
We sampled projected ellipticities from around 100 random viewing angles for each EDGE UFD to acquire a representative distribution of orientations. Observing the EDGE UFDs out to fainter surface brightness, we noticed an increase in projected ellipticity. Therefore, if UFDs have extended stellar distributions in reality, we should expect a similar increase in the projected ellipticity of known UFDs when these galaxies are further uncovered with deeper and better-resolved spectra of their surrounding stars.
The PDFs of projected ellipticity for EDGE and observations displayed a good agreement, given that peaks of their distributions lay comfortably within the confidence intervals. This agreement implies our simulated EDGE UFDs resemble the shapes of a number of faint dwarfs belonging to the MW. As the EDGE UFDs are designed to be isolated from more massive systems, it is possible that tidal features in these observed dwarfs are not necessarily due to tides but instead originate from a non-tidal scenario. This theorised isolation of our select sample of MW dwarf galaxies is reinforced by their newfound orbital parameters in <cit.>.
If a significant number of the nearby UFDs in our LG are tidally intact, as our results may suggest, their baryonic and dark matter contents could remain uninfluenced from more massive systems in their surrounding environments. Therefore, many LG UFDs would serve as excellent natural laboratories for probing dark matter and galaxy formation physics.
§ ACKNOWLEDGEMENTS
This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The equipment was funded by BEIS capital funding via STFC capital grants ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1. DiRAC is part of the National e-Infrastructure. OA and EA acknowledge financial support from the Knut and Alice Wallenberg Foundation and the Swedish Research Council (grant 2019-04659). EA also acknowledges support from the US NSF (grant AST18-1546).
§ DATA AVAILABILITY
Data available upon reasonable request.
|
http://arxiv.org/abs/2307.07567v1 | 20230714181930 | Diverse Approximations for Monotone Submodular Maximization Problems with a Matroid Constraint | [
"Anh Viet Do",
"Mingyu Guo",
"Aneta Neumann",
"Frank Neumann"
] | cs.DS | [
"cs.DS"
] |
Diverse Approximations for Monotone Submodular Maximization Problems with a Matroid Constraint
Anh Viet Do Mingyu Guo Aneta Neumann Frank Neumann
Optimisation and Logistics, School of Computer and Mathematical Sciences, The University of Adelaide
=============================================================================================================================================================
Finding diverse solutions to optimization problems has been of practical interest for several decades, and recently enjoyed increasing attention in research. While submodular optimization has been rigorously studied in many fields, its diverse solutions extension has not. In this study, we consider the most basic variants of submodular optimization, and propose two simple greedy algorithms, which are known to be effective at maximizing monotone submodular functions. These are equipped with parameters that control the trade-off between objective and diversity. Our theoretical contribution shows their approximation guarantees in both objective value and diversity, as functions of their respective parameters. Our experimental investigation with maximum vertex coverage instances demonstrates their empirical differences in terms of objective-diversity trade-offs.
§ INTRODUCTION
Optimization research has seen rising interest in diverse solutions problems, where multiple maximally distinct solutions of high quality are sought instead of a single solution <cit.>. This class of problem is motivated by practical issues largely overlooked in traditional optimization. Having diverse solutions gives resilient backups in response to changes in the problem that render the current solution undesirable. It also gives the users the flexibility to correct for gaps between the problem models and real-world settings, typically caused by estimation errors, or aspects of the problem that cannot be formulated precisely <cit.>. Furthermore, diverse solution sets contain rich information about the problem instance by virtue of being diverse, which helps augment decision making capabilities. While there are methods to enumerate high quality solutions, having too many overwhelms the decision makers <cit.>, and a small, diverse subset can be more useful. It is also known that k-best enumeration tends to yield highly similar solutions, motivating the use of diversification mechanisms <cit.>.
The diverse solutions problem have been studied as an extension to many important and difficult problems. Some examples of fundamental problems include constraint satisfaction and optimization problems <cit.>, SAT and answer set problem <cit.>, and mixed integer programming paradigms <cit.>. More recently, the first provably fixed-parameter tractable algorithms have been proposed for diverse solutions to a number of graph-based vertex problems <cit.>, as motivated by the complexity of finding multiple high performing solutions. This inspired subsequent research on other combinatorial structures such as trees, paths <cit.>, matching <cit.>, independent sets <cit.>, and linear orders <cit.>. Furthermore, general frameworks have been proposed for diverse solutions to any combinatorial problem <cit.>. To address the need to obtain both quality and diversity, multicriteria optimization has been considered, leading to interesting results <cit.>. These are mostly applied to problems with linear objective functions and specific matroid intersection constraints.
In this work, we are interested in the diverse solutions problem in the domain of submodular optimization, which has been enjoying widespread interest. It captures the diminishing returns property that arises in many real-world problems in machine learning, signal processing <cit.>, sensor placement <cit.>, data summarization <cit.>, influence maximization <cit.>, to name a few. Moreover, its hardness (as it generalizes many fundamental NP-hard combinatorial problems) and well-structuredness (which facilitates meaningful results <cit.>) mean the problem class also sees much attention from theoretical perspectives, leading to interesting insights <cit.>. It is important to distinguish between the diverse solutions extension to submodular optimization and results diversification <cit.>, the latter of which considers diversity as a measure of a solution (i.e. a selection of results) and optimizes it along with a submodular utility function.
Our Contributions We investigate the problem of finding a given number of diverse solutions for maximizing a monotone submodular function over a matroid, with a lower bound on solutions' objective values. Matroids are a type of independence system that can be used to model constraints in many important problems, and have been studied in the submodular optimization literature <cit.>, and have recently appeared in diverse solutions research <cit.>. Among them, uniform matroids, which characterize cardinality constraints, and their extension, partition matroids, are often considered in budgeted optimization (e.g. <cit.>). We consider the distance-sum measure of diversity, which is often chosen for diverse solutions problems <cit.>. Its sole reliance on the ground set elements' representation in the solution set implies generalizability to other diversity measures such as entropy. Our contributions are as follows:
* We propose two simple greedy algorithms which are suitable to deal with the objective requirement, as greedy algorithms are known to perform well on monotone submodular maximization <cit.>. The novelty lies in the additional parameters, which adjust the trade-off between guarantees on objective values and diversity. We position our algorithms as simpler, zeroth-order (in terms of objective and independence oracles) alternatives to general frameworks for diverse solutions in recent literature, which have not been analyzed in submodular optimization context.
* We provide analyses of these algorithms in terms of their objective-diversity guarantees trade-offs. Our results are formulated as functions of their respective parameters, thus giving a general guidance on parameter selection. We also give sharpened bounds for cases with uniform matroids, as motivated by the prevalence of cardinality constraints. From these results, we point out settings that guarantee constant approximation ratios in objective, diversity, or both. Our tightness constructions also indicate certain features of matroids that make them pathological to these algorithms.
* We carry out an experimental investigation with maximum vertex coverage instances subjected to uniform and partition matroid constraints, to observe the algorithms' empirical performances in exhaustive parameter settings. The results indicate that while both algorithms produce nearly optimal solutions with reasonable diversity in many parameter settings, the simpler of the two actually provides better objective-diversity trade-offs across all problem settings. Additionally, these establish an empirical baseline for the diverse solutions problem considered in this work.
§ PRELIMINARIES
In this section, we present the problem and relevant definitions, and give some observations that are helpful in our analyses.
§.§ Problem and Definitions
A multiset is a collection that can contain duplicates (e.g. {1,1,2}). For a set A, we denote the collection of multisets of elements in A with A^*, and A^r⊆ A^* contains r-size[In this work, we use “r-size” to mean “containing r elements”.] multisets for some integer r. The problem we investigate is as follows: given integer r≥2, α∈[0,1], a (f,S,d,r,α)-instance asks for a multiset[Satisfying self-avoiding constraint requires algorithmic treatment beyond this work's scope.] of solutions in
argmax_P∈ S^r{ d(P):∀ x∈ P,f(x)≥αmax_y∈ Sf(y)}.
where the objective function f:2^V→ℝ is non-negative[The non-negativity assumption is widely used in literature to ensure proper contexts for multiplicative approximation guarantees, which this work includes. This also applies to diversity w.l.o.g.] and non-decreasing submodular, S=ℐ for some matroid M=(V,ℐ), and d is a diversity measuring function defined over (2^V)^*. We do not consider non-increasing f due to trivial instances where achieving any positive diversity[Assuming the diversity measure returns 0 on duplicate-only multisets.] necessitates degrading solutions beyond the feasibility limit. As per standard practice, we use “monotone” to mean “non-decreasing” in this paper. We call a multiset P feasible to the (f,S,d,r,α)-instance if P∈ S^r and every solution in P is an α-approximation of f over S, which is a solution x∈ S such that f(x)≥αmax_y∈ Sf(y). We also briefly give relevant definitions and assumptions.
Function f:2^V→ℝ is monotone if f(x)≤ f(y) for all x⊆ y⊆ V.
Function f:2^V→ℝ is submodular if ∀ x,y⊆ V,f(x)+f(y)≥ f(x∪ y)+f(x∩ y) or equivalently ∀ x⊆ y⊆ V,v∈ V∖ y,f(x∪{v})-f(x)≥ f(y∪{v})-f(y).
For problem (<ref>), we assume w.l.o.g. that f(∅)=0, since a multiset feasible to a (f,S,d,r,α)-instance is also feasible to the (f+f',S,d,r,α)-instance for some constant non-negative function f'. We assume for our problem that f is given as a value oracle.
For matroid theory concepts, we adopt terminologies from the well-known textbook <cit.> on the subject.
A tuple M=(V,ℐ⊆2^V) is a matroid if
* ∅∈ℐ,
* ∀ x⊆ y⊆ V, y∈ℐ ⟹ x∈ℐ,
* ∀ x,y∈ℐ, |x|<|y| ⟹ ∃ e∈ y∖ x, x∪{e}∈ℐ.
The set V is the ground set, and ℐ is the independence collection. A base of M is a maximal set in ℐ.
Given a matroid M=(V,ℐ),
* the rank function of M, r_M:2^V→ℕ, is defined as r_M(x)=max{|y|:y∈ 2^x∩ℐ}, and the rank of M is r_M=r_M(V),
* the closure function of M, cl_M:2^V→2^V, is defined as cl_M(x)={v∈ V:r_M(x∪{v})=r_M(x)},
* a loop of M is a v∈ V such that {v}∉ℐ.
To give examples, a K-rank uniform matroid over V admits the independence collection ℐ={x⊂ V:|x|≤ K} which we denote with 𝒰_V,K. A partition matroid admits the independence collection ℐ={x⊂ V:∀ i=1,…,k,|x∩ B_i|≤ d_i} for some partitioning {B_i}_i=1^k of V and their corresponding thresholds {d_i}_i=1^k. In graph theory, a graphic matroid M=(E,ℐ) defined over an undirected graph G=(V,E) is such that ℐ contains all edge sets x where G'=(V,x) has no cycle. A base of a graphic matroid is a spanning forest in the underlying graph, which itself is an object of much interest. Dual to the graphic matroid, the bond matroid M^*=(E,ℐ^*) is such that ℐ^* contains all edge sets x where G^*=(V,E∖ x) has the same number of connected components as G.
For the problem (<ref>), we assume that M is loop-free and |V|≥1, implying r_M>0. It is known that rank functions are monotone submodular, and closure functions are monotone, i.e. x⊆ y ⟹ cl_M(x)⊆ cl_M(y) <cit.>. We also assume that for a matroid, we are given an independence oracle answering whether a set is independent.
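Since the algorithms studied here access a matroid only through such an independence oracle, the Python sketch below shows what these oracles look like for the uniform and partition matroid examples above; the small instance at the end is purely illustrative.

def uniform_oracle(K):
    """Independence oracle for the K-rank uniform matroid U_{V,K}."""
    return lambda x: len(x) <= K

def partition_oracle(blocks, limits):
    """Partition matroid oracle: at most limits[i] elements from blocks[i]."""
    def is_independent(x):
        return all(len(x & B) <= d for B, d in zip(blocks, limits))
    return is_independent

# Example: two blocks {0,1,2} and {3,4,5}, at most one element from each.
oracle = partition_oracle([{0, 1, 2}, {3, 4, 5}], [1, 1])
print(oracle({0, 3}), oracle({0, 1}))  # True False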
Finally, we consider the distance-sum diversity function, which is the usual choice in literature on diverse solutions problems <cit.>. The function is defined over multisets of solutions as ss(P)=∑_x,y∈ P|xΔ y| where Δ is the symmetric difference between two sets, and its size is the Hamming distance. To be precise, each pairwise distance is counted once in an evaluation of ss.
Under this setting, the problem (<ref>) is equivalent to the dispersion problem over the ground set that is the collection of all α-approximations of f over ℐ. The dispersion problem is known to be NP-hard in the ground set's size, even with known ground sets and metric distance functions <cit.>; for our problem, the collection is neither known nor necessarily small. On the other hand, <cit.> showed that this problem admits a poly-time max{1-2/r,1/2}-approximation scheme, predicated on a poly-time top-r enumeration scheme over this collection maximizing ss. We are not aware of such a scheme for α-approximations to submodular maximization over a matroid, and we recognize this as an interesting problem in its own right. That said, it is likely that algorithms resulting from this line of ideas will have significantly larger asymptotic run-times than those of the algorithms we present in this work.
§.§ Some Useful Properties
First, we observe that the value of ss is related to the occurrences of each elements of V in the multiset. Let P be a r-size multiset of subsets of V, and for i=1,…,|V|, n_i(P)=|{x∈ P: i∈ x}|, we have
ss(P)=∑_i∈ Vn_i(P)[r-n_i(P)].
This means the function can be decomposed into disjoint subsets of V: given a partitioning {V_i}_i=1^k of V, we have ss(P)=∑_i=1^kss({x∩ V_i: x∈ P}). This property can significantly simplify analyses.
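The short Python check below illustrates the equivalence between the pairwise definition of ss and the element-count identity in Equation <ref>; the ground set and solution multiset are arbitrary toy choices.

from itertools import combinations

def ss_pairwise(P):
    """Sum of Hamming distances, each unordered pair counted once."""
    return sum(len(x ^ y) for x, y in combinations(P, 2))

def ss_counts(P, V):
    """Equivalent form: sum over elements of n_i(P) * (r - n_i(P))."""
    r = len(P)
    return sum(sum(v in x for x in P) * (r - sum(v in x for x in P)) for v in V)

V = set(range(5))
P = [{0, 1}, {1, 2}, {3}]
assert ss_pairwise(P) == ss_counts(P, V) == 8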
We would also like to bound the maximum achievable diversity in various settings. While this bound can be computed exactly and efficiently over a matroid when there is no constraint on the values of f, e.g. by using the method in <cit.>, estimating it with a formula can be useful. To this end, we define a function g:ℕ^3→ℕ with
g(a,b,c)=aq(c-q)+m(c-2q-1),
where h=min{b,a/2}, and m∈[0,a),q are integers such that ⌈ c/2⌉⌈ h⌉+⌊ c/2⌋⌊ h⌋=qa+m. This function returns the maximum ss value of a c-size multiset of at-most-b-size subsets of an a-size ground set (Theorem <ref>). Also, we let g(0,·,·)=g(·,0,·)=g(·,·,1)=0. For convenience, let δ:ℕ^2→ℕ be defined with δ(a,b)=a-2b-1; we have
∀ x∈ P,e∈ V∖ x, ss(P∖{x}∪{x∪{e}})-ss(P)=|P|-2n_e(P)-1=δ(|P|,n_e(P)).
This expression exposes the connection between g and the process of adding elements into solutions in P, which is relevant to the algorithms we consider in this work. That is, we can rewrite g using δ: g(a,b,c)=a∑_i=0^q-1δ(c,i)+mδ(c,q); this simplifies the proof of its monotonicity.
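A direct Python transcription of δ and g as defined above may help; both the δ-sum form and the closed form are computed, and the example call at the end is only illustrative.

import math

def delta(a, b):
    return a - 2 * b - 1

def g(a, b, c):
    if a == 0 or b == 0 or c <= 1:
        return 0
    h = min(b, a / 2)
    total = math.ceil(c / 2) * math.ceil(h) + math.floor(c / 2) * math.floor(h)
    q, m = divmod(total, a)
    value = a * sum(delta(c, i) for i in range(q)) + m * delta(c, q)
    assert value == a * q * (c - q) + m * (c - 2 * q - 1)  # closed form agrees
    return value

# e.g. the best achievable ss with 3 solutions of size at most 2 over 5 elements
print(g(5, 2, 3))  # 10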
Function value g(a,b,c) is monotone in a, b and c.
Let G(a,b,c) be the multiset of values of the summands δ in g(a,b,c). For any a'>a, we have |G(a,b,c)|≤|G(a',b,c)| and a bijection g':G(a,b,c)→ G'⊆ G(a',b,c) such that d≤ g'(d) for all d∈ G(a,b,c). This implies g(a,b,c)≤ g(a',b,c). For any b'>b, we have G(a,b,c)⊆ G(a,b',c), so g(a,b,c)≤ g(a,b',c). For any c'>c, we have G(a,b,c)⊆ G(a,b,c') and a bijection g':G(a,b,c)→ G'⊆ G(a,b,c') such that d=g'(d)-c'+c for all d∈ G(a,b,c), so g(a,b,c)<g(a,b,c').
Here, we include an inequality which gives an intuitive bound of a result in Section <ref>.
Given integers a,b,c≥1 and k≥0, g(⌈ ka/b⌉,k,c)≥ kg(a,b,c)/b.
Let G(a,b,c) be the multiset of values of the summands δ in g(a,b,c), we have |G(a,b,c)|≤ b|G(⌈ ka/b⌉,k,c)|/k. Let S(a,b,c) be the set derived from G(a,b,c) (i.e. all duplicates removed), we have S(⌈ ka/b⌉,k,c)⊆ S(a,b,c) and ∀ a∈ S(a,b,c)∖ S(⌈ ka/b⌉,k,c),a<min{b∈ S(⌈ ka/b⌉,k,c)}. Furthermore, every element in S(⌈ ka/b⌉,k,c) except the minimum has multiplicity in G(⌈ ka/b⌉,k,c) equals ⌈ ka/b⌉/a times its multiplicity in G(a,b,c). Therefore, the sum of elements in G(⌈ ka/b⌉,k,c) is at least k/b the sum of elements in G(a,b,c), hence the claim.
To establish an upper bound on diversity, we use the following straightforward observation from the fact that uniform matroid constraints are the least restrictive.
Given a set V, function f over 2^V, matroids M=(V,ℐ) and M'=(V,ℐ') where M is uniform and r_M≥ r_M', then the optimal value for the (f,ℐ',d,r,0)-instance cannot exceed that for the (f,ℐ,d,r,0)-instance with any r≥1, and real function d over (2^V)^*.
With this, we can use uniform matroids to formulate a simple upper bound, which is also tight for some non-uniform matroids and, surprisingly, any value of the threshold ratio α.
The optimal value for a (f,ℐ,ss,r,α)-instance for some matroid M=(V,ℐ), function f over 2^V, integer r≥1, and α∈[0,1] is at most g(|V|,r_M,r). Moreover, this bound is tight for any |V|≥1, r≥1, α∈[0,1], and matroid rank r_M∈[1,|V|], even if the matroid is non-uniform.
By Equation (<ref>), ss(P) is the sum of negative quadratic functions of n_i(P). Each of these summands is maximized at r/2, so assuming M is uniform and α=0, ss(P) is maximized if every solution in P contains h=min{r_M,|V|/2} elements, since it cannot exceed r_M elements. Therefore, the maximum ss(P) under this assumption is g(|V|,r_M,r). According to Observation <ref>, this assumption is ideal, so this bound cannot be exceeded by any feasible set P under any matroid M and α∈[0,1].
We prove tightness by construction. Given integers n≥1, s∈[1,n], r≥2, let x=(x_i)_i=1^n be the characteristic vector of subsets of V, m∈[0,s),q be integers where n=qs+m, f(x)=∑_k=0^q∑_j=1^mc_jx_ks+j where c_i is a non-negative real for all i=1,…,m, and a s-rank matroid M=(V,ℐ) where ℐ={x⊆ V:|x|≤ s∧∀ h∈[1,m],∑_i=0^sx_qi+h≤1}. We have OPT=max_x∈ℐ{f(x)}=∑_j=1^mc_j. Let P be a r-size multiset {{i}_i=(j q+1)s+1^(j q+1)s+m∪{i}_i=(j q)s+m+1^(j q)s+s}_j=1^r, we have ss(P)=g(|V|,s,r) and for all x∈ P, f(x)=OPT and x∈ℐ. Thus, P is optimal for any α∈[0,1].
In augment-type algorithms like greedy, how the feasible selection pool for a partial solution (i.e. set of elements that can be added without violating constraints) changes over the course of the algorithm influences the guaranteed quality of the final output. This insight was made evident in seminal works on greedy algorithms <cit.>, and is replicated in subsequent works on submodular optimization under more complex constraints. This is especially important in diverse solutions, as high diversity can be seen as additional restrictions on the selection pool. In the context of matroid constraint, this pool is determined by the partial solution's closure, thus we include an observation connecting closures to the upper bound on diversity.
Let M=(V,ℐ) be a matroid (may contain loops), and x∈ℐ, then for all y∈ℐ, |y∩ cl_M(x)|≤|x|. By extension, |y∩ z|≤ r_M(z) for all z⊆ V.
This is clearly the case if cl_M(x)=x. Assuming otherwise, and there is y∈ℐ where |y∩ cl_M(x)|>|x|, then by the exchange property between independent sets, there is e∈ y∩ cl_M(x)∖ x where x∪{e}∈ℐ. This implies (y∩ cl_M(x)∖ x)⊈ cl_M(x)∖ x, a contradiction.
Lemma <ref> lets us sharpen the upper bound on ss values for highly non-uniform matroids.
Given a matroid M=(V,ℐ) (may contain loops) and integer r≥1, then for any P∈ℐ^r,
ss(P)≤min_x∈ℐ{ g(|V|-|cl_M(x)|,r_M-⌊ n_x⌋,r)+g(|cl_M(x)|,⌈ n_x⌉,r)},
where n_x=min{r_M|cl_M(x)|/|V|,|x|}. There exists a matroid where equality holds.
Let P be a r-size multiset of independent sets in M, and x∈ℐ, then Lemma <ref> and the properties of matroids imply that for all y∈ P, |y∩ cl_M(x)|≤|x|. This means ss(P)≤ g(|cl_M(x)|,|x|,r)+g(|V|-|cl_M(x)|,r_M,r), which is the sum of maximum achievable ss values within cl_M(x) and V∖ cl_M(x), respectively. This bound can be sharpened by the fact that |y|≤ r_M for all y∈ P. Since ss(P) is maximized when elements in V are included in equal numbers of sets in P, a P maximizing ss must, assuming an ideal scenario where independence is not violated in any other way, minimize the gap between ∑_y∈ P|y∩ cl_M(x)|/|cl_M(x)| and ∑_y∈ P|y∖ cl_M(x)|/|V∖ cl_M(x)|. Since g can be used to characterize maximum ss in such a setting, any such P must satisfy
ss(P)≤ g(|V|-|cl_M(x)|,r_M-⌊ n_x⌋,r)+g(|cl_M(x)|,⌈ n_x⌉,r),
where n_x=min{r_M|cl_M(x)|/|V|,|x|}. Since this holds for any x∈ℐ, the claim follows.
We give two more useful inequalities regarding function g.
Let c,m≥1, {a_i}_i=1^m, {b_i}_i=1^m be non-negative integers, g(∑_i=1^ma_i,∑_i=1^mb_i,c)≥∑_i=1^mg(a_i,b_i,c).
Let {U_i}_i=1^m be disjoint sets where |U_i|=a_i for i=1,…,m, V=⋃_i=1^mU_i, A={ x⊆ V:|x|≤∑_i=1^mb_i}, B={ y⊆ V:∀ i=1,…,m,|y∩ U_i|≤ b_i}. Define A' and B' as the collections of c-size multisets of sets in A and B, respectively. Given the lack of any other constraint, Theorem <ref> implies that g(∑_i=1^ma_i,∑_i=1^mb_i,c)=max_P∈ A'ss(P). Using the decomposition of ss, we also have ∑_i=1^mg(a_i,b_i,c)=max_P∈ B'ss(P). The claim follows from A'⊇ B', which is the case as A⊇ B.
Given integers a≥2, c≥1, b∈[1,a-1), l∈[0,⌈ c/2⌉), m=⌈ c/2⌉-l, a non-increasing integer sequence (a_i)_i=1^m such that a_i∈[0,a] for all i, a_1≥ b, bm-∑_i=1^ma_i≤ l(a-b)-c and l(a-b)+a_1-b≥ c, then
a∑_i=1^lδ(c,i-1)+∑_i=l+1^l+mδ(c,i-1)a_i-l≥ g(b,b,c)+g(a-b,1,h)-h(h-c),
where h=l(a-b)+a_1-b+min{∑_i=2^ma_i-b(m-1),0}.
Let Δ=a-b, the left hand side be L, and j∈[2,m] be such that a_j-1≥ b and a_j<b, we split L=L_1+L_2 where
L_1=b∑_i=1^l+j-1δ(c,i-1)+∑_i=l+j^l+mδ(c,i-1)a_i-l and L_2=Δ∑_i=1^lδ(c,i-1)+∑_i=l+1^l+j-1δ(c,i-1)(a_i-l-b).
We have
L_1-g(b,b,c)=L_1-b∑_i=1^⌈ c/2⌉δ(c,i-1)=L_1-b∑_i=1^l+mδ(c,i-1)=∑_i=l+j^l+mδ(c,i-1)(a_i-l-b).
If ∑_i=2^ma_i<b(m-1) then let h'=⌊ h/Δ⌋, k=max{l,h'+1} and d∈[0,Δ) such that h≡ d (mod Δ), we have
L_2-g(Δ,1,h) =L_2-∑_i=1^h'δ(h,i-1)-dδ(h,h')=L_2-∑_i=1^h'δ(c,i-1)-dδ(c,h')-h(h-c)
=(Δ-d)δ(c,h')+Δ∑_i=h'+2^lδ(c,i-1)+∑_i=k+1^l+j-1δ(c,i-1)(a_i-k-b)-h(h-c)
≥[b(m-1)-∑_i=2^ma_i]δ(c,l+j-2)+∑_i=l+1^l+j-1δ(c,i-1)(a_i-l-b)-h(h-c)
≥[b(j-2)-∑_i=2^j-1a_i]δ(c,l+j-2)+∑_i=l+1^l+mδ(c,i-1)|a_i-l-b|-h(h-c)
≥∑_i=l+2^l+j-1δ(c,i-1)(b-a_i-l)+∑_i=l+1^l+j-1δ(c,i-1)(a_i-l-b)-L_1+g(b,b,c)-h(h-c)
≥δ(c,l)(a_1-b)-L_1+g(b,b,c)-h(h-c)≥ g(b,b,c)-L_1-h(h-c),
where the inequalities follow from δ being decreasing in the second parameter, and b-a_i being non-positive for all and only i∈[1,j-1]. Now, if ∑_i=2^ma_i≥ b(m-1), then ∑_i=2^j-1a_i-(j-2)b≥ (m-j+1)b-∑_i=j^ma_i and
L_2-g(Δ,1,h) =L_2-Δ∑_i=1^lδ(c,i-1)-(a_1-b)δ(c,l)-h(h-c)=∑_i=l+2^l+j-1δ(c,i-1)(a_i-l-b)-h(h-c)
≥[∑_i=2^j-1a_i-(j-2)b]δ(c,l+j-2)-h(h-c)≥[(m-j+1)b-∑_i=j^ma_i]δ(c,l+j-2)-h(h-c)
≥∑_i=l+j^l+mδ(c,i-1)(b-a_i-l)-h(h-c)=g(b,b,c)-L_1-h(h-c).
In both cases, L_1+L_2≥ g(b,b,c)+g(Δ,1,h)-h(h-c), and the claim follows.
Visualization of Function g
We plot the values of g(a,b,c) with various values of a, b and c in Figure <ref>. We set a∈[100,500], b=⌊λ a⌋ where λ∈[0.05,0.5] at step size 0.05, and c∈[10,90] at step size 10. Note that g is monotone.
Firstly, we can see that g increases proportionally in a (and b) under fixed b/a, i.e. g(a,b,c)∈Θ(a) assuming constant b/a and c. Secondly, g increases in b/a at diminishing rate, and is plateaued when b exceeds ⌈ a/2⌉ by definition. Thirdly, g increases proportionally in c^2, i.e. g(a,b,c)∈Θ(c^2) assuming constant a and b.
§ GREEDY ALGORITHMS FOR DIVERSE SOLUTIONS
We describe two different greedy algorithms to obtain an approximation to the problem (<ref>), by incrementally building solutions. They are greedy in the sense that they select, in each step, the “best” choice out of a selection pool. Here, choice refers to a solution-element pair where the element is added into the solution. The differences between the two algorithms lie in how this pool is defined, and the selection criteria. In both algorithms, the pool is controlled by a parameter, which determines a trade-off between objective values and diversity.
In the following, we claim several worst-case bounds, i.e. for all settings I (each including a problem instance and an algorithm parameter value) in a universe clear from the context, p(I)≥ q(I) for some quantities p and q of the setting (e.g. optimal value, worst-case diversity, etc.). A bound is tight if there is a setting I' where p(I')=q(I'). It is nearly tight if instead we have p(I')=q(I')+ϵ for an arbitrarily small ϵ>0 independent from other factors.
§.§ Diversifying Greedy With Common Elements
The first approach, outlined in Algorithm <ref>, is a deterministic version of a heuristic for a special case of problem (<ref>), proposed in <cit.>. The idea is to first have all solutions share common elements selected by the classical greedy algorithm, so as to efficiently obtain some objective value guarantee. Then, in the second phase (starting from line <ref>), each solution is finalized with added elements that maximize ss, which are precisely those least represented. To be specific, in each iteration, the algorithm looks at all solution-element pairs which maintain independence, and selects a pair based on criteria, the first of which maximizes diversity (line <ref>). This approach is simple and efficient, but prevents the common elements from contributing to diversity. Here, we formulate the algorithm to take the number of common elements as an input (b), which cannot exceed the rank of the matroid constraint.
We observe that since the image of ss is polynomially bounded in size, there are frequently many equivalent choices in each iteration in the second phase, motivating the use of tie-breaking rules, which are formulated lexicographically at Line <ref>. Of note is the second rule, which prioritizes solutions with the fewest remaining choices. The idea is to minimize the shrinkage of the pool among under-represented elements (the inclusion of which incurs large marginal gains in diversity) with a simple heuristic. We show that this tie-breaking rule helps guarantee a non-trivial lower bound on the ss value under a general matroid, whereas it makes no difference under a uniform matroid. The other tie-breaking rules aim to improve the minimum objective value whenever possible.
The time complexity of Algorithm <ref> is O(b|V|+r(r_M-b)(|V|-b)) in both value oracle model and independence oracle model. The algorithm may not return r bases if rr_M is sufficiently large relative to |V|. Additionally, having all solutions sharing elements can be undesirable in some applications. Note the condition n_v(P)<⌈ r/2⌉ at line <ref> ensures ss(P) never decreases during the second phase.
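For concreteness, the Python sketch below condenses Algorithm <ref>; it is not the authors' reference implementation, f and indep stand for the value and independence oracles over a ground set V (a Python set), and only the first two tie-breaking rules are modelled.

import math

def diverse_greedy_common(V, f, indep, b, r):
    # Phase 1: classical greedy builds the common part x of size (at most) b.
    x = set()
    for _ in range(b):
        pool = [v for v in V - x if indep(x | {v})]
        if not pool:
            break
        x.add(max(pool, key=lambda v: f(x | {v}) - f(x)))
    P = [set(x) for _ in range(r)]

    def count(v):
        return sum(v in s for s in P)

    # Phase 2: greedily add least-represented elements while ss cannot decrease.
    while True:
        pairs = [(i, v) for i, s in enumerate(P) for v in V - s
                 if indep(s | {v}) and count(v) < math.ceil(r / 2)]
        if not pairs:
            break
        remaining = {i: sum(1 for j, _ in pairs if j == i) for i, _ in pairs}
        # Max diversity gain = min count; tie-break on fewest remaining choices.
        i, v = min(pairs, key=lambda p: (count(p[1]), remaining[p[0]]))
        P[i].add(v)
    return P

Under a K-rank uniform matroid, indep reduces to a size check, and the second phase simply cycles through the least-represented remaining elements.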
Let 𝒜(f,S,b,r) be the collection of possible outputs from Algorithm <ref> when run with inputs f, S, b, r. We first show that in the uniform constraint case, the algorithm returns a constant diversity for each input configuration.
For any monotone submodular f over 2^V, integers K≥1, r≥2, b∈[0,K), and let α=1-e^-b/K, ∀ P∈𝒜(f,𝒰_V,K,b,r), ss(P)=g(|V|-b,K-b,r), thus Algorithm <ref> is g(|V|-b,K-b,r)/g(|V|,K,r)-approximate for the (f,𝒰_V,K,ss,r,α)-instance. Moreover, this ratio bound is tight for any |V|≥1, r≥2, K∈[1,|V|], and b∈[0,K).
For any monotone submodular f and uniform matroid M=(V,ℐ), the output of Algorithm <ref>, P, contains r (1-e^-b/K)-approximations of f over ℐ <cit.>. Thus, it is feasible to the (f,ℐ,ss,r,α)-instance. The value of ss(P) is completely defined in the second phase.
Let V'=V∖ x be the set of remaining elements in the second phase (note that x=cl_M(x) for any non-maximal x∈ℐ if M is uniform), P_t be the multiset after t steps in the second phase, which is terminated after T>0 steps, we have ss(P_0)=0. Let n_t,i=|{x∈ P_t: i∈ x}| for i∈ V, we now characterize n_T,i for all i∈ V' in order to derive ss(P_T). Since the algorithm adds one element into a solution in each step, we have n_t+1,i≤ n_t,i+1 for any i∈ V'. Let i_t∈ V be the element added at step t, we have δ_t=ss(P_t)-ss(P_t-1)=r-2n_t-1,i_t-1, so i_t∈argmin_i∈ V'n_t-1,i from the greedy step and the fact that M is uniform. Therefore, max_i∈ V'n_t,i=⌈ t/|V'|⌉ and min_i∈ V'n_t,i=⌊ t/|V'|⌋ for all t∈[0,T], implying δ_t=r-2⌊ (t-1)/|V'|⌋-1. Let h=min{K-b,|V'|/2} and H=⌈ r/2⌉⌈ h⌉+⌊ r/2⌋⌊ h⌋, we show T=H by considering two cases:
* If K-b≤|V'|/2, then T=r(K-b) since r feasible solutions cannot contain more than rK elements in total, and r-2⌊ [r(K-b)-1]/|V'|⌋-1≥ r-2⌊ r/2-1/|V'|⌋-1≥0, implying that step r(K-b) does not decrease ss. Moreover, in this case, H=r(K-b)=T, proving the claim.
* If K-b>|V'|/2, then T=H<r(K-b) since r-2⌊ (H-1)/|V'|⌋-1≥0 and r-2⌊ H/|V'|⌋-1=-1, implying that the second phase is terminated after exactly H steps.
Applying Equation (<ref>), we get ss(P)=ss(P_T)=g(|V'|,K-b,r)=g(|V|-b,K-b,r), proving the claim.
The tightness follows from a simple construction. Let s∈[1,|V|], α=1-e^-b/s, and f be any monotone submodular over 2^V such that min_x⊆ V:|x|=sf(x)≥αmax_x⊆ V:|x|=sf(x), then the multiset P^*={{((is+j) mod |V|)+1}_j=0^s-1}_i=0^r-1 contains α-approximations of f over the s-rank uniform constraint, and ss(P^*)=g(|V|,s,r).
Due to the monotonicity of g, the diversity guarantee in Theorem <ref> decreases with b. Specifically, Lemma <ref> implies this ratio bound is at least 1-b/K, which is tight in many cases. We also observe this linear relationship frequently in our experimental results (Section <ref>). This also means by setting b such that b/K is constant, the algorithm guarantees simultaneously constant approximation ratios in both objective values and diversity, independent of |V| and r. Additionally, with α=1-e^-b/K, we have 1-b/K=1+ln(1-α), giving a direct objective-diversity trade-off curve (in terms of ratios) within α∈[0,1-1/e].
For general matroids, we can infer the objective approximation guarantee from Algorithm <ref> as a function of parameter b, by using an important result in <cit.>.
Algorithm <ref> under a matroid M=(V,ℐ) outputs [1-(1-1/r_M)^min{b,k}]-approximations of f over ℐ where k=min{|z|:z∈2^V∖ℐ}-1. Additionally, these solutions are b/(2r_M)-approximations.
It suffices to show the bound for the solution x obtained after the first phase, due to f being monotone. The ratio b/(2r_M) follows from the fact that the classical greedy guarantees 1/2-approximation in maximizing a monotone submodular function over a matroid <cit.>. Let λ=1-(1-1/r_M)^min{b,k}, O=argmax_y∈ℐf(y), X={y⊆ V: x⊂ y}, 𝒥=ℐ∖ X, we have that 𝒥 is an independence system, and x is a solution obtainable by the classical greedy algorithm over 𝒥. Furthermore, by definition of 𝒥, we have min{|z|:z∈2^V∖𝒥}=min{k,b}+1. Therefore, Theorem 2.2 in <cit.> implies that f(x)≥λmax_y∈𝒥f(y). Assuming O∖ X≠∅, then max_y∈𝒥{f(y)}=max_y∈ℐf(y), so the claim follows. Otherwise, let z∈ O, then z⊃ x, and f(x)/f(z)≥|x|/|z|≥ b/r_M≥λ. Indeed, let δ_i be the marginal difference from adding the i-th element by the classical greedy algorithm over 𝒥, we have that for all v∈ x and u∈ z∖ x, x∖{v}∪{u}∈𝒥, so by the greedy selection and f being submodular, f(x∪{u})-f(x)≤δ_i for all i=1,…,b. The claim follows since
f(z) ≤ f(x)+∑_v∈ z∖ x[f(x∪{v})-f(x)] (f is submodular)
≤ f(x)+|z∖ x|δ_b≤ f(x)+(|z|-|x|)/|x| · f(x)=(|z|/|x|) f(x) (δ_i≥δ_i+1)
For any monotone f over 2^V, matroid M=(V,ℐ), and integers r≥2, b∈[0,r_M), ∀ P∈𝒜(f,ℐ,b,r), ss(P)≥ g(r_M-b-1,r_M-b-1,r)+g(m,1,r), where m=|V∖ cl_M(x)|-r_M+b+1 and x is the solution obtained in the first phase of the algorithm. Moreover, this bound is tight for any |V|≥1, r≥2, matroid rank r_M∈[1,|V|], b∈[0,r_M) and m∈[1,|V|-r_M+b+1].
Let V'=V∖ cl_M(x) be set of remaining elements in the second phase, we have |V'|≥ r_M-b. Let P_t be the multiset after t steps in the second phase, which is terminated after T≤ r(r_M-b) steps, count values n_t,i=n_i(P_t) for i∈ V, and V_t=⋃_y∈ P_tV∖ cl_M(y)⊆ V' be the set of elements that can be added to a solution in P_t, n_t=min_i∈ V_tn_t,i, we see that V_0=V' and for all t∈[0,T), δ_t=ss(P_t+1)-ss(P_t)=r-2n_t-1 from the greedy selection. Also, V_i⊇ V_j whenever i≤ j, and the greedy selection implies min_i∈ V_tn_t,i≥max_i∈ V_tn_t,i-1 at every step t, which can be shown by induction. It holds for t=0 as n_0,i=0 for all i∈ V'. Assuming this holds for t=k, the greedy selection guarantees that the property is maintained within V_k at step k+1, so it must hold within V_k+1⊆ V_k as well.
Given this invariant, we can divide the second phase into cycles: we say step t is in cycle j if n_t=j. Let t_j be the first time step of cycle j, we see that n_t_j,i=j for all i∈ V_t_j. With the tie breaking rule, at any time t', if n_t',i=n' for all i∈ V_t', then the algorithm builds up a base in the next consecutive steps, followed by adding elements with count n' until all elements in V_t' have count n'+1. Let m=|V'|-r_M+b+1, we show, by induction, that for any j≤⌊ r/m⌋ there are at least r-mj solutions in P_t_j unchanged in the second phase, i.e. equal x. This clearly holds for j=0, since no solution is changed yet. If r≥ m, this holds for j=1, since in cycle 0, a base is obtained with r_M-b elements in |V'|, leaving m-1 elements with count 0 to add to other solutions, resulting in at most m changed solutions by the start of cycle 1. Assuming this holds for j=k≤⌊ r/m⌋-1, x∈ P_t_k, so V_t_k=V'. This means V_t=V' for all t∈[0,t_j], so t_j=j|V'|. Now, starting from cycle k, the algorithm builds the next base first. Due to the tie breaking rule, the algorithm does this using a non-base solution already changed in previous cycles[If there is no such solution, then it must be that |V'|=r_M-b and m=1. In this case, the algorithm achieves ss value of g(r_M-b,r_M-b,r) which is at least g(r_M-b-1,r_M-b-1,r)+g(m,1,r) by Lemma <ref>.], meaning at most r_M-b-1 steps are needed to build the base within each cycle after 0, and that x remains in the set after this base is obtained. If building this base in cycle k only needs r_M-b-1-ϵ steps, then this non-base contains b+1+ϵ elements at step t_k, so there must be at least r-km+ϵ unchanged solutions in P_t_k. Since there are m+ϵ elements with count n_t_k afterwards, the number of unchanged solutions after cycle k is at most r-(k+1)m. With this, for all j∈[0,⌊ r/m⌋-1], t_j+1-t_j=|V'|. Therefore, there are at least l=min{⌊ r/m⌋,⌈ r/2⌉} cycles containing |V'| steps. If l=⌈ r/2⌉, the algorithm terminates after l cycles since subsequent steps incur negative changes to ss, thus achieving ss value of g(|V'|,r_M-b,r)≥ g(r_M-b-1,r_M-b-1,r)+g(m,1,r) where the inequality follows from Lemma <ref>. Otherwise, it must be that m≥2, and we also have ss(P_t_l)=|V'|∑_i=1^lδ(r,i-1). If m=2, each step in cycle l does not increase ss, so the algorithm achieves the same ss value. Therefore, we can assume m>2 and l<⌈ r/2⌉.
Let c∈[0,m) be such that r≡ c m, since t_l=l|V'| and P_t_j contains at least j bases, P_t_l must contain at least c unchanged solutions. If the algorithm obtains a first base in cycle l in r_M-b-1-ϵ steps, then P_t_l contains at least c+ϵ unchanged solutions. Since this base is built from a changed solution in P_t_l, there must be at least r_M-b-1 steps in cycle l. Furthermore, t_j-t_j-1 is non-increasing since V_t shrinks with t. Also, for any j>l, aside from the j bases built at the start of each cycle in P_t_j, the rest must contain in total at least r-j elements not in x. This means t_j≥ j(r_M-b)+r-j for all j>l, so t_j-t_l≥ j(r_M-b)-l|V'|+r-j=(j-l)(r_M-b-1)+r-lm. Let b=⌈ r/2⌉-l and x_j=t_j+l-t_j+l-1 for j∈[1,b] (x_j=0 if the algorithm terminates before cycle j), then ∑_i=1^bx_i=t_b+l-t_l≥ b(r_M-b-1)+r-lm, so the conditions of Lemma <ref> are satisfied. Therefore, applying it and the fact that each step in cycle j adds δ(r,j) to the ss value gives T=t_l+b and
ss(P_T)=|V'|∑_i=1^lδ(r,i-1)+∑_i=l+1^l+bδ(r,i-1)x_i-l≥ g(r_M-b-1,r_M-b-1,r)+g(m,1,h)-h(h-r),
where h≥ lm+∑_i=1^bx_i-b(r_M-b-1)≥ r. Since m>2, we then have g(m,1,h)-h(h-r)≥ g(m,1,r), so the claim follows.
We show tightness by construction. Let U be a (|V|-s+b-m+1)-size subset of V where s∈[1,|V|], {U_i}_i=1^b be a partitioning of U where U_i≠∅ for all i, and A={a_1,…,a_m} be a m-size subset of V∖ U. Denoting characteristic vector of x⊆ V with (x_i)_i=1^|V|, let f be defined over 2^V with f(x)=∑_i∉ Ud_ix_i+∑_i=1^bc_imax_j∈ U_ix_j where {d_i}_i∉ U and {c_i}_i=1^b are non-negative reals and min_ic_i≥max_jd_j. Finally, let M=(V,ℐ) be a matroid of rank s where ℐ={x⊆ V:∑_i=1^mx_a_i≤1∧∀ i=1,…,b,∑_j∈ U_ix_j≤1}. Algorithm <ref>, run with inputs f, ss, M, b, r achieves after the first phase a solution x where |x∩ U_i|=1 for all i=1,…,b. It then diversifies the solution set over V∖ U since cl_M(x)=U. Since each solution cannot intersect with A at more than one element, the algorithm returns P={ x∪((V∖ U)∖ A)∪{a_(π(i) m)+1}}_i=1^⌈ r/2⌉∪{ x∪{a_(π(i) m)+1}}_i=⌈ r/2⌉+1^r for some permutation π over {1,…,r}. We have
ss(P)=ss({y∖ A: y∈ P})+ss({y∩ A: y∈ P})=g(r_M-b-1,r_M-b-1,r)+g(m,1,r).
It is important to note that while the bound in Theorem <ref> can be small for any positive choice of b if the closure of the set x of common elements is large, a sufficiently large closure (e.g. |cl_M(x)|/|x|>|V|/r_M) also lowers the upper bound on maximum diversity, according to Lemma <ref>.
§.§ Simultaneous Greedy With Representation Limits
The second approach, outlined in Algorithm <ref>, is inspired by the SimultaneousGreedys algorithm proposed in <cit.>, which obtains a set of disjoint solutions, in which the best one provides an approximation guarantee. Since for our problem, all solutions need to be sufficiently good, we make crucial changes to adapt the algorithm to the task. Firstly, each element can appear in multiple solutions, the maximum number of which is given as an input (l). This simultaneously expands the selection pool for each solution in each iteration, which helps with quality, and controls the amount of representation in the output each element enjoys, which guarantees some diversity. Secondly, a single element (v^*) is allowed to be included in all solutions, so a non-trivial quality guarantee is possible, as there are instances where excluding an element ensures that the solution is arbitrarily bad. Finally, the selection criteria, especially the first one, enforce building solutions evenly, so the worst one does not fall too far behind. This allows us to derive a non-trivial lower bound on the objective value of every solution in the output.
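To make these mechanics concrete, the following Python sketch illustrates one way a simultaneous greedy procedure with a representation limit could be organized. It is only a rough approximation of Algorithm <ref>: the exact selection criteria and tie-breaking rules are collapsed into a single even-growth rule, and the names f, is_independent, V, as well as the termination condition, are our own assumptions for illustration.

def simultaneous_greedy(V, f, is_independent, r, l):
    """Maintain r solutions; besides a shared starter element v_star,
    every element may appear in at most l of the r solutions."""
    v_star = max(V, key=lambda v: f({v}))       # single element allowed in all solutions
    solutions = [{v_star} for _ in range(r)]
    counts = {v: 0 for v in V}                  # n_v(P), not counting v_star

    while True:
        candidates = []
        for i, x in enumerate(solutions):
            for v in V:
                if v in x or counts[v] >= l:    # representation limit reached
                    continue
                if not is_independent(x | {v}): # infeasible for this solution
                    continue
                gain = f(x | {v}) - f(x)
                candidates.append((len(x), -gain, i, v))
        if not candidates:
            break                               # no feasible addition anywhere
        # stand-in for the paper's selection criteria: grow the smallest
        # solution first, then prefer the largest marginal gain
        _, _, i, v = min(candidates, key=lambda c: (c[0], c[1], c[2]))
        solutions[i].add(v)
        counts[v] += 1
    return solutions

Note that this sketch makes no attempt to reproduce the paper's tie-breaking rules, which matter for the tightness constructions discussed later.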
Compared to Algorithm <ref>, this algorithm does not maximize diversity directly, but guarantees it indirectly by imposing additional constraints. Since these constraints are on elements' representation, the algorithm can be applied to the problem (<ref>) with any diversity measure that can be formulated in terms of elements' representation, such as entropy <cit.>.
The time complexity of Algorithm <ref> is O(rr_M|V|) in both value oracle model and independence oracle model. Like Algorithm <ref>, it may not return r bases if rr_M is sufficiently large and l is sufficiently small. We remark that the inclusion of the initial element v^* in all solutions is meant to deal with pathological instances; this might be avoided with a more complex heuristic. With a view to simplicity, we choose not to pursue this further in this work.
We observe that if l=r, Algorithm <ref> must return solutions obtainable by the classical greedy algorithm since if there is a v∈ V that cannot be added to a solution x due to the new constraint, then v∈ x. However, one can construct instances where it is guaranteed to achieve ss value of 0, even when restricted to uniform matroids and linear objective functions. Therefore, we only consider cases where l<r. We show the extent to which diversity is guaranteed, simply from limiting elements' representations.
Similarly, we use ℬ(f,S,r,l) to denote the collection of possible outputs from Algorithm <ref> when run with inputs f, S, r, l. Given a matroid M=(V,ℐ), for any P∈ℬ(f,ℐ,r,l) and x∈ P, let P_t be P at iteration t (P_0={{v^*}}^r), t_x,i be the iteration in which the i-th element is added to x (counting v^*), x^(i) be x right after that iteration, V_i={ v∈ V: n_v(P_t_x,i-1)≥ l}, W_i=cl_M(x^(i-1)) and U_i=V_i∪ W_i∖ x^(i-1). Intuitively, U_i contains elements that cannot be added to x at step i.
To establish objective value guarantees, we first extend Proposition 2.2 in <cit.>.
Given non-negative non-increasing (λ_i)_i=1^T and non-negative (σ_i)_i=1^T where ∑_i=1^kσ_i≤ ak for all k=1,…,T-1 and ∑_i=1^Tσ_i≤ aT+c for some a,c≥0, then ∑_i=1^Tσ_iλ_i≤(a+c/T)∑_i=1^Tλ_i.
Let σ_i'=σ_i for all i=1,…,T-1, and σ_T'=σ_T-c, we have ∑_i=1^kσ_i'≤ ak for all k=1,…,T, so Proposition 2.2 in <cit.> implies ∑_i=1^Tσ_i'λ_i≤ a∑_i=1^Tλ_i. The claim follows from λ_i being non-increasing in i, which implies
∑_i=1^Tσ_iλ_i=∑_i=1^Tσ_i'λ_i+cλ_T≤ a∑_i=1^Tλ_i+c/T∑_i=1^Tλ_i=(a+c/T)∑_i=1^Tλ_i.
This gives the following helpful lower bound.
For any Y∈(2^V)^*, if for some a,q≥0, ∑_v∈ U_in_v(Y)≤ a(i-1) for all i=1,…,|x| and ∑_v∈ U_|x|+1n_v(Y)≤ a|x|+q, then min{ a+q/|x|+|Y|,∑_y∈ Y|y|} f(x)≥∑_y∈ Yf(y).
We use ideas similar to those in <cit.>. Let δ_i=f(x^(i))-f(x^(i-1)), we have δ_i≥ f(x^(i-1)∪{v})-f(x^(i-1)) for all v∈ U_i+1∖ U_i due to greedy selection. Furthermore, since δ_i is non-increasing in i due to submodularity and greedy selection for each solution in P, we have
∑_y∈ Yf(y) ≤∑_y∈ Yf(y∪ x)≤|Y|f(x)+∑_y∈ Y∑_v∈ y∖ x[f(x∪{v})-f(x)] (properties of f)
=|Y|f(x)+∑_i=2^|x|+1∑_v∈ U_i∖ U_i-1n_v(Y)[f(x∪{v})-f(x)] (U_i⊇ U_i-1)
≤|Y|f(x)+∑_i=1^|x|[∑_v∈ U_i+1n_v(Y)-∑_v∈ U_in_v(Y)]δ_i (greedy selection)
≤(a+q/|x|+|Y|)f(x), (Lemma <ref>)
where the last inequality follows from Lemma <ref>. Finally, from v^*∈ x, submodularity and monotonicity of f imply
∑_y∈ Yf(y)≤∑_y∈ Y∑_v∈ yf({v})≤∑_y∈ Y|y|max_v∈ Vf({v})=∑_y∈ Y|y|f({v^*})≤∑_y∈ Y|y|f(x).
We observe the following bound regarding the shrinkage of the selection pool, which when combined with the above result leads to objective value guarantees.
For all i≥1, |V_i∖ x^(i-1)|≤(i-1)(r-1)/l.
The claim holds for i=1 since V_1={v^*}. Let z^(i)=x^(i)∖{v^*}, we have ∅=U_0=U_1⊆ U_2⊆…⊆ U_|x|+1=V∖ x. Since t_x,≤1=0, V_i⊇{v^*} for i≥2, and from the fact that one element is added in each iteration
|V_i∖ x^(i-1)| ≤⌊t_x,i-1-l|V_i∩ z^(i-1)|-|z^(i-1)∖ V_i|/l⌋
≤⌊(t_x,i-1-|z^(i-1)|)/l⌋ (l≥1)
≤⌊((i-1)r-1-i+2)/l⌋ (t_x,i≤(i-1)r due to first selection rule)
=⌊(i-1)(r-1)/l⌋≤(i-1)(r-1)/l.
Regarding diversity, we show the lower bound on the ss value as the algorithm progresses.
For all t≥ 0, ss(P_t)≥⌊ t/l⌋ l(r-l)+c(r-c) where c∈[0,l) such that c≡ t l.
We show this bound holds tightly for any r-size multiset Q of subsets of V where ∑_x∈ Q|x|=t and max_u∈ Vn_u(Q)≤ l (v^* does not contribute to ss value so we ignore it). This clearly holds for t=0. Assuming it holds tightly for all t∈[0,k] for some k≥0, let c∈[0,l) where c≡ k l, and Q be the r-size multiset such that ∑_x∈ Q|x|=k+1 and max_u∈ Vn_u(Q)≤ l, v∈ x for some x∈ Q and Q'=Q∖{x}∪{x∖{v}}, we have ss(Q)=ss(Q')+δ(r,n_v(Q'))=ss(Q')+r-2n_v(Q')-1, and the induction hypothesis applies to Q' since ∑_x∈ Q'|x|=k and max_u∈ Vn_u(Q')≤ l. If n_v(Q')≤ c, then
ss(Q)≥⌊ k/l⌋ l(r-l)+c(r-c)+r-2c-1=⌊ k/l⌋ l(r-l)+(c+1)(r-c-1).
If c=l-1, then the right hand side becomes ⌊(k+l)/l⌋ l(r-l)=⌊(k+1)/l⌋ l(r-l), otherwise we have ⌊(k+1)/l⌋=⌊ k/l⌋ (both following from definition of c), so the claim holds for Q. Note that in this case, the bound is tight if the equality holds in the induction hypothesis and n_v(Q')=c. Now assuming n_v(Q')>c, then k≥ n_v(Q')>0. Let Q”={x∖{v}: x∈ Q}, we have ∑_x∈ Q”|x|=k-n_v(Q')<k, and using the induction hypothesis, we get
ss(Q) =ss(Q”)+n_v(Q)[r-n_v(Q)]≥⌊(k-1)/l⌋ l(r-l)+n_v(Q)[r-n_v(Q)]+[c+l-n_v(Q')][r+n_v(Q')-c-l]
≥⌊(k-1)/l⌋ l(r-l)+l(r-l)+(c+l-l+1)(r+l-1-c-l)=⌊(k-1+l)/l⌋ l(r-l)+(c+1)(r-c-1)
≥⌊ k/l⌋ l(r-l)+(c+1)(r-c-1),
where the second inequality follows from n_v(Q')=n_v(Q)-1 and that for any 0≤ a≤ b,c≤ d≤ e, a(e-a)+d(e-d)≤ b(e-b)+c(e-c), and the last inequality follows from l≥1. Furthermore, c<n_v(Q')=n_v(Q)-1≤ l-1, so ⌊(k+1)/l⌋=⌊ k/l⌋, and the claim holds for Q, completing the induction proof.
With Lemma <ref>, <ref> and <ref>, the following objective-diversity trade-off guarantees can be inferred.
Given monotone submodular f over 2^V, integers r≥2, l∈[1,r) and K∈[1,|V|], then for all P∈ℬ(f,𝒰_V,K,r,l) and k∈[1,(r-1)K/l],
min{r-1/l+1,k}min_x∈ Pf(x)≥max_|y|≤ kf(y) and ss(P)≥ l(r-l)⌊ h/l⌋+c(r-c),
where h=min{r(K-1),l(|V|-1)} and c∈[0,l) such that c≡ h l.[The latter bound holds trivially at l=r.] Moreover, the former bound is nearly tight when K≥ r+l-1, and the latter bound is tight for any |V|≥1, r≥2, K∈[1,|V|], and l∈[1,r].
Let x be an arbitrary output solution, we see that from Lemma <ref>, for any y⊆ V where |y|≤ k, |y∩ U_i|≤(r-1)(i-1)/l for i∈[1,|x|] since W_i=∅, and we show that this also holds for i=|x|+1. If |x|=K, it holds since (r-1)|x|/l=(r-1)K/l≥ k≥|y| when r≥2 and l<r. If |x|<K then W_|x|+1=∅, so U_|x|+1=V_|x|+1∖ x. The inequality then follows directly from Lemma <ref> which asserts the upper bound on |U_|x|+1|. Combining this with Lemma <ref> and |y|≤ k yields the first inequality. For tightness, let
f(x)=max{x_1,(1-ϵ)x_r+l}+∑_i=2^r+l-1x_i-ϵ/lrx_1(∑_i=2^rx_i)(∑_i=r+1^r+l-1x_i),
observe f is monotone submodular for any ϵ∈[0,1]. Assuming w.l.o.g. that ϵ>0, v^*=1 and that the last tie-breaking prioritizes smallest labels, there must be one solution x' returned by Algorithm <ref> whose second element is in {r+1,…,r+l-1} while the rest's are contained in {2,…,r} due to the fourth tie-breaking rule. After adding the l-th element to each solution, x'^(l)={1,r+1,…,r+l-1}, meaning any element available to add to x' at this point induces a marginal gain less than 1 with ϵ>0. Since other solutions currently do not contain elements in {r+1,…,r+l}, and elements in {2,…,r} can still be added to them, inducing marginal gains 1, the algorithm prioritizes them due to the second tie-breaking rule[Note that swapping the second and third rules makes no difference at this point as all solutions have the same objective value.]. When finally adding the l+1-th element to x', n_v(P)=l for all v∈{2,…,r}, so x' enjoys no further marginal gain, resulting in f(x')=l. We see that for K≥ r+l-1, y={2,…,r+l} is feasible under K-rank uniform constraint and f(y)=r+l-1-ϵ=[(r-1)/l+1]f(x')-ϵ.
The second inequality follows from Lemma <ref> and that the Algorithm <ref> runs for exactly h=min{r(K-1),l(|V|-1)} iterations. To show tightness, let f(x)=∑_i=1^|V|c_ix_i where (x_i)_i=1^|V| is the characteristic vector of x⊆ V and (c_i)_i=1^|V| is a decreasing non-negative real sequence, Algorithm <ref> run on this instance under K-rank uniform constraint returns P such that n_1(P)=r, n_v(P)=l for v=2,…,⌊ h/l⌋, n_⌊ h/l⌋+1(P)=c where c∈[0,l) such that c≡ h l, and n_v(P)=0 for v≥⌊ h/l⌋+2. This means ss(P)=⌊ h/l⌋ l(r-l)+c(r-c).
Given monotone submodular f over 2^V, matroid M=(V,ℐ), integers r≥2 and l∈[1,r), then for all P∈ℬ(f,ℐ,r,l)
min{r-1/l+2,r_M}min_x∈ Pf(x)≥max_y∈ℐf(y) and ss(P)≥ l(r-l)(r_M-1).
Moreover, the former bound is nearly tight, and the latter bound is tight for any |V|≥1, r≥2, matroid rank r_M∈[1,|V|], and l∈[1,r].
We have for i=1,…,|x|+1, r_M(W_i)=r_M(x^(i-1))=i-1 by definition, and
r_M(U_i) ≤ r_M(V_i∖ x^(i-1))+r_M(W_i∖ x^(i-1))-r_M(V_i∩ W_i∖ x^(i-1)) (r_M is submodular)
≤|V_i∖ x^(i-1)|+i-1 (r_M is monotone)
≤[(r-1)/l+1](i-1), (Lemma <ref>)
where the last inequality follows from Lemma <ref>. This means for any y∈ℐ, |y∩ U_i|≤[(r-1)/l+1](i-1) by Lemma <ref>. Applying Lemma <ref> and |y|≤ r_M yields the first claim. For tightness, let
f(x)=max{x_1,(1-ϵ)x_l+1}+∑_i=2^lmax{x_i,x_i+l}+(1-ϵ x_1/lr)∑_i=2l+1^3lx_i+∑_i=3l+1^3l+r-1x_i-ϵ x_1/lr(∑_i=2^lmax{x_i,x_i+l})(∑_i=3l+1^3l+r-1x_i),
observe f is monotone submodular for any ϵ∈[0,1]. We construct the matroid M=(V,ℐ) where ℐ={x⊆ V:∑_i=1^l(x_i+x_i+2l)≤ l}. Assuming w.l.o.g. that ϵ>0, v^*=1, it can be the case that there is one solution x' returned by Algorithm <ref> under M whose second element is in {2,…,l} while the rest's are contained in {3l+1,…,3l+r-1} due to the fourth tie-breaking rule. After adding the l-th element to each solution, we can assume that x'^(l)={1,…,l}, meaning any element available to add to x' at this point induces a marginal gain less than 1. Since other solutions currently do not contain elements in {2,…,3l}, and elements in {3l+1,…,3l+r-1} can still be added to them, inducing marginal gains 1, the algorithm prioritizes them due to the second tie-breaking rule. Note that elements in {l+1,…,3l} are ignored in the next r-1 iterations since they induce marginal gains less than 1 with ϵ>0. When finally adding the l+1-th element to x', n_v(P)=l for all v∈{3l+1,…,3l+r-1}, and elements in {2l+1,…,3l} cannot be considered, so x' enjoys no further marginal gain, resulting in f(x')=l. We have y={l+1,…,3l+r-1}∈ℐ and f(y)=r+2l-1-ϵ=[(r-1)/l+2]f(x')-ϵ.
For the second claim, we see that since the algorithm runs for at least l(r_M-1) iterations, we have ss(P)≥ l(r-l)(r_M-1) from Lemma <ref> and that the ss value cannot decrease below l(r-l)(r_M-1) during iterations after l(r_M-1). To show tightness, let f(x)=∑_i=1^|V|c_ix_i where (x_i)_i=1^|V| is the characteristic vector of x⊆ V and (c_i)_i=1^|V| is a decreasing non-negative real sequence, M=(V,ℐ) be an s-rank matroid to be constructed where s∈[1,|V|], and P be an output of Algorithm <ref> when run with inputs f, ℐ, r, l. For matroid M, we specify ℐ={ x⊆ V: x_1+∑_i=s+1^|V|x_i≤1}, then since for all y∈ P, 1∈ y, we have y∩{s+1,…,|V|}=∅, thus ss(P)=l(r-l)(s-1).
We remark that the tightness cases in the proof of Theorem <ref> prevent Algorithm <ref> from exercising the last tie-breaking rule, which is the component that lets it improve diversity beyond the lower bound. We suspect that this bound might be overly pessimistic for instances where the image under f of the feasible set is small.
The result suggests that for uniform constraints, setting l=max{⌊ r(r_M-1)/(|V|-1)⌋,1} leads to Algorithm <ref> guaranteeing (1-1/|V|)(1-O(1/r_M)) approximation ratio in diversity, whereas l=⌊ r/2⌋ guarantees (r_M-1)/|V| approximation ratio for pathological matroid constraint. Additionally, if l/r is constant, then every output solution guarantees a constant approximation ratio in objective value.
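As a trivial illustration, the parameter choices suggested above can be computed as follows; floor division mirrors the ⌊·⌋ in the text, and the function names are ours.

def suggested_l_uniform(r, rank, n_elements):
    """l = max(floor(r * (r_M - 1) / (|V| - 1)), 1) for uniform constraints."""
    return max((r * (rank - 1)) // (n_elements - 1), 1)

def suggested_l_matroid(r):
    """l = floor(r / 2) for the pathological matroid constraint case."""
    return r // 2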
Above results only consider extreme values (e.g. optimal f value). On the other hand, by comparing the algorithm's output against an arbitrary solution set, a more nuanced picture emerges which suggests the algorithm can exploit a certain feature in the global structure of f to lessen compromise on diversity (i.e. by lowering parameter l) while maintaining objective guarantees.
Given monotone submodular f over 2^V, integers r≥2, l∈[1,r), K∈[1,|V|] and Y∈(2^V)^* such that m=max_v∈ Vn_v(Y), then for all P∈ℬ(f,𝒰_V,K,r,l)
min{m(r-1)h/l+|Y|,∑_y∈ Y|y|}min_x∈ Pf(x)≥∑_y∈ Yf(y),
where h=max{ l∑_y∈ Y|y|/[Km(r-1)],1}. This bound is nearly tight for all r≥2, l∈[1,r), size of Y and m∈[1,|Y|].
Let x be any solution in P, since n_v(Y)≤ m for all v∈ V, ∑_v∈ U_in_v(Y)≤ m|U_i|. For all i=1,…,|x|, W_i=∅ so by Lemma <ref>, ∑_v∈ U_in_v(Y)≤ m(r-1)(i-1)/l. If |x|<K, the same inequality holds for i=|x|+1, otherwise the inequality holds for i=|x|+1 iff h=1, so thus far the claim holds according to Lemma <ref>. Assuming h>1, then h'=∑_y∈ Y|y|-Km(r-1)/l>0, Lemma <ref> implies [m(r-1)/l+h'/K+|Y|]f(x)≥∑_y∈ Yf(y). Rewriting h' in terms of h gives the desired expression.
For tightness, we fix k≥ m≥1, let w=⌈ k/m⌉ and
f(x)=max{ x_1,(1-ϵ/k)z_r+l}+∑_i=2^rx_i+∑_i=r+1^r+l-1z_i-ϵ x_1/klr(∑_i=2^rx_i)(∑_i=r+1^r+l-1z_i),
where z_i=max{x_jl+i}_j=0^w-1, f is monotone submodular for any ϵ∈[0,k]. Assuming w.l.o.g. that ϵ>0, v^*=1 and that the last tie-breaker prioritizes smallest labels, there must be one solution x' returned by Algorithm <ref> whose second element is in {r+1,…,r+l-1} while the rest's are contained in {2,…,r} due to the fourth tie-breaking rule. After adding the l-th element to each solution, x'^(l)={1,r+1,…,r+l-1}, meaning any element available to add to x' at this point induces a marginal gain less than 1 with ϵ>0. Since other solutions currently do not contain elements in {r+1,…,r+wl}, and elements in {2,…,r} can still be added to them, inducing marginal gains 1, the algorithm prioritizes them due to the second tie-breaking rule. When finally adding the l+1-th element to x', n_v(P)=l for all v∈{2,…,r}, so x' enjoys no further marginal gain, resulting in f(x')=l. For i=0,…,k-1, let y_i={(i w)l+j}_j=r+1^r+l, and let Y={ y_i∪{2,…,r}}_i=0^m-1∪{ y_i}_i=m^k-1, we have m=max_v∈ Vn_v(Y) and ∑_y∈ Yf(y)=m(r-1)+k(l-ϵ/k)=[m(r-1)/l+k]f(x')-ϵ=∑_y∈ Y|y|-ϵ. Thus the bound is nearly tight for K≥ kl^2/[m(r-1)]+l, in which case |x'|≥ l when the algorithm is run under K-rank uniform constraint.
If there is Y∈(2^V)^k for some k≥1 where max_v∈ Vn_v(Y)<l∑_y∈ Y|y|/[K(r-1)] and ∑_y∈ Yf(y)/k≥αmax_|y|≤ Kf(y), then Algorithm <ref> under K-rank uniform constraint returns α/2-approximations with parameter l. If there is a set of k disjoint α-approximations, then Algorithm <ref> returns α/2-approximations at any l∈[(r-1)/k,r).
Given monotone submodular f over 2^V, integers r≥2, l∈[1,r), matroid M=(V,ℐ) and Y∈ℐ^* such that m=max_v∈ Vn_v(Y), then for all P∈ℬ(f,ℐ,r,l)
min{m(r-1)/l+2|Y|,∑_y∈ Y|y|}min_x∈ Pf(x)≥∑_y∈ Yf(y).
This bound is nearly tight for all r≥2, l∈[1,r), size of Y and m∈[1,|Y|].
For any x∈ P, we see from Lemma <ref> that for all y∈ Y, |y∩ W_i|≤ i-1 for i=1,…,|x|+1. Therefore, ∑_v∈ W_in_v(Y)≤|Y|(i-1) for i=1,…,|x|+1, meaning ∑_v∈ U_in_v(Y)≤[m(r-1)/l+|Y|](i-1). This yields the claim as implied by Lemma <ref>.
For tightness, we fix k≥1 and m∈[1,k], let w=⌈ k/m⌉ and
f(x) =max{ x_1,(1-ϵ/k)z_l+1}+∑_i=2^lmax{x_i,z_i+l}+(1-ϵ x_1/klr)∑_i=(w+1)l+1^(w+2)lz_i
+∑_i=(w+2)l+1^(w+2)l+r-1x_i-ϵ x_1/klr(∑_i=2^lmax{x_i,z_i+l})(∑_i=(2w+1)l+1^(2w+1)l+r-1x_i),
where z_i=max{x_jl+i}_j=0^w-1, f is monotone submodular for any ϵ∈[0,k]. We construct the matroid M=(V,ℐ) where ℐ={x⊆ V:∑_i=1^lx_i+∑_i=1^wlx_(w+1)l+i≤ l}. Assuming w.l.o.g. that ϵ>0, v^*=1, it can be the case that there is one solution x' returned by Algorithm <ref> under M whose second element is in {2,…,l} while the rest's are contained in {(2w+1)l+1,…,(2w+1)l+r-1} due to the fourth tie-breaking rule. After adding the l-th element to each solution, we can assume that x'^(l)={1,…,l}, meaning any element available to add to x' at this point induces a marginal gain less than 1. Since other solutions currently do not contain elements in {2,…,(2w+1)l}, and elements in {(2w+1)l+1,…,(2w+1)l+r-1} can still be added to them, inducing marginal gains 1, the algorithm prioritizes them due to the second tie-breaking rule. Note that elements in {l+1,…,(2w+1)l} are ignored in the next r-1 iterations since they induce marginal gains less than 1 with ϵ>0. When finally adding the l+1-th element to x', n_v(P)=l for all v∈{(2w+1)l+1,…,(2w+1)l+r-1}, and elements in {(w+1)l+1,…,(2w+1)l} cannot be considered, so x' enjoys no further marginal gain, resulting in f(x')=l. For i=0,…,k-1, let y_i={(i w)l+j,[(i w)+w]l+j}_j=l+1^2l, we have y_i∪{(2w+1)l+1,…,(2w+1)l+r-1}∈ℐ. Let Y={ y_i∪{(2w+1)l+1,…,(2w+1)l+r-1}}_i=0^m-1∪{ y_i}_i=m^k-1, we have m=max_v∈ Vn_v(Y), and ∑_y∈ Yf(y)=m(r-1)+k(2l-ϵ/k)=[m(r-1)/l+2k]f(x')-ϵ=∑_y∈ Y|y|-ϵ.
Given a matroid M=(V,ℐ), if there is Y∈ℐ^k for some k≥1 where max_v∈ Vn_v(Y)≤ lk/(r-1) and ∑_y∈ Yf(y)/k≥αmax_y∈ℐf(y), then Algorithm <ref> under matroid constraint M returns α/3-approximations with parameter l. If there is a set of k disjoint α-approximations, then Algorithm <ref> returns α/3-approximations at any l∈[(r-1)/k,r) and α/(2+1/k)-approximations at l=r-1.
Going further, these bounds can be strictly improved when the number of disjoint optimal solutions exceeds certain thresholds. In particular, we show that in such cases, Algorithm <ref> guarantees objective values identical to those from the classical greedy when maximizing monotone submodular functions under the same constraints <cit.>. For a function f and a solution set S let D(f,S,α) be the largest number of disjoint non-empty α-approximations of f over S, and for a solution x, let i_x be its size before it stops being improved by the algorithm.
Given monotone submodular f over 2^V, integers r≥2, l∈[1,r), and matroid M=(V,ℐ), then for all P∈ℬ(f,ℐ,r,l), given x∈ P where |x|>1 and D(f,ℐ,α)>⌊η(r-1)/l⌋ for some α, then
* f(x)≥α[1-(1-1/|x|)^|x|]max_y∈ℐf(y) if M is uniform and η=|x|-1,
* f(x)≥αmax_y∈ℐf(y)/2 if M is non-uniform and η=i_x.
If D(f,ℐ,α)=⌊η(r-1)/l⌋, the bound does not necessarily hold in either case.
Let Y be the set of such α-approximations. In uniform case Lemma <ref> and the pigeonhole principle imply that there must be a solution y∈ Y where y∩ V_i∖ x^(i-1)=∅ for all i=1,…,|x|. Since M is uniform, W_i∖ x^(i-1)=∅ for all i=1,…,|x|, so no element in y is discarded when building x, meaning every greedy marginal improvement to x is at least as good as improvement by adding any element in y at corresponding steps. This means classical greedy improvement arguments hold, giving
f(y∪ x^(i))-f(x^(i))≤∑_v∈ y[f(x^(i)∪{v})-f(x^(i))]≤1/|y|[f(x^(i+1))-f(x^(i))].
From this, we see the distance between f(y) and f value of partial x is multiplied by at most 1-1/|y| in each step, leading to the final bound f(x)≥[1-(1-1/|y|)^|x|]f(y). If |y|>|x| and there is no 1-size solution in Y, |V|≥(|x|-1)(r+l-1)/l+⌊(|x|-1)(r-1)/l⌋+1. As |x|>1, this means V has enough elements for the algorithm to make x (|x|+1)-size, so either |y|≤|x| or Y contains a 1-size solution. The former case implies the ratio is lower bounded by 1-(1-1/|x|)^|x|, while the latter gives f(x)≥αmax_z∈ℐf(z), both satisfying the first inequality.
In non-uniform case, we have y∩ V_i_x+1=∅, and |y∩ W_i|≤ i-1 for all i=1,…,|x|+1 by Lemma <ref>. So by Lemma <ref>, f(x)=f(x^(i_x))≥ f(y)/2, yielding the second inequality.
We show the negative claim by construction. For K-rank uniform constraint, let h=⌊(K-1)(r+l-1)/l⌋, λ=1-1/K,
f^*(x)=1-λ^-s_x+λ^-s_x/K∑_i=0^h-Kx_K+i+(λ^-s_x/K-ϵ x_1)∑_i=1^K-1z_h+i-ϵ∑_i=2^K-1x_i∑_i=0^h-Kx_K+i,
where z_i=max{x_i+j(K-1)}_j=0^h-K and s_x=∑_i=1^K-1x_i, and f(x)=min{f^*(x),1}, f is monotone submodular with sufficiently small positive ϵ (ϵ<λ^K/(hK^2) suffices). Assuming K>2, f({1})=1/K=max_vf({v}), so we can assume v^*=1. For the second element, we see that f({1,i})=1-λ^2 for i=2,…,h and f({1,i})≤1-λ^2-ϵ for i>h, so we can assume Algorithm <ref> adds elements in {K,…,h} as second elements to r-1 solutions, while one solution has its second element in {2,…,K-1}. Let z be this solution, the next K-3 elements added to z must be in {2,…,K-1} while the other solutions have subsequent elements in {K,…,h} with ϵ>0 due to the second tie-breaking rule. When adding the last element to z, elements in {K,…,h} cannot be considered since they must first be added to other solutions due to the second tie-breaking rule until there are l solutions containing each of them, and there are only enough of them to finalize up to r-1 solutions. Thus, f(z)=1-λ^K-ϵ and |z|=K. For i=0,…,h-K, let y_i={K+i}∪{h+i(K-1)+j}_j=1^K-1, y_i are disjoint and f(y_i)=1>f(z)/(1-λ^K), so the bound does not hold. Note that we cannot get more than h-K+1 disjoint solutions to this instance with objective values 1.
For non-uniform case, let K>2, h=⌊ K(r+l-1)/l⌋, q=h+K⌊ K(r-1)/l⌋,
f(x)=max{x_1,(1-ϵ)z_q+1,K-1}+∑_i=2^Kmax{x_i,z_q+i}+∑_i=K^hx_i+∑_i=1^Kz_h+i-ϵ x_1/2hK(∑_i=2^Kmax{x_i,z_q+i}+∑_i=1^Kz_h+i)∑_i=K^hx_i,
where z_i=max{x_i+j(K-1)}_j=0^h-K, f is monotone submodular with sufficiently small positive ϵ (ϵ<1/h suffices). Finally, let ℐ={ x⊆ V:∑_i=1^Kx_i+∑_i=h+1^qx_i≤ K}. Again, we can assume v^*=1, and r-1 solutions have their second elements in {K+1,…,h} while one solution z has it in {2,…,K}; the same goes for their next K-3 elements due to the second tie-breaking rule and ϵ>0. We see that the algorithm can add an element in {2,…,K} to z^(K-1), followed by adding elements in {K+1,…,h} to other solutions until each is contained in l solutions. At this point, the matroid constraint prevent elements in {K,…,q} from being added to z, so no further improvement occurs, leading to f(z)=K=i_z. For i=0,…,h-K-1, let y_i={K+i+1}∪{h+iK+j,q+iK+j}_j=1^K, y_i are disjoint, y_i∈ℐ and f(y_i)=2K+1-ϵ>2f(z) given that ϵ<1, so the bound does not hold. Note that we cannot get more than h-K disjoint solutions to this instance with objective values greater than 2K.
We include a simple observation relating maximum diversity and the number of disjoint approximations.
Let k=D(f,S,α) for some function f, solution set S⊆2^V, threshold α, then the maximum ss value to a (f,S,r,α)-instance is at most g(h,s,r) where h=min{r(s-1)+k,|V|} and s=max_x∈ S|x|.
§ EXPERIMENTAL INVESTIGATION
To observe how these algorithms perform on concrete instances, we experiment with the maximum vertex coverage problem: given a graph G=(V,E), find a set x∈ℐ for some matroid M=(V,ℐ) that maximizes |x∪{v∈ V:∃ u∈ x,{u,v}∈ E}|, which is monotone submodular. For the benchmark instances, we use the complements of frb30-15-1, frb30-15-2, frb35-17-1, frb40-19-1 from the standard benchmark suite BHOSLIB created using the Model RB <cit.>, containing 450, 450, 595, 760 vertices respectively, and 17827, 17874, 27856, 41314 edges respectively; these are available at <cit.> and licensed under CC BY-SA. The experiments were run on a hexa-core 2.2 GHz Intel i7 CPU with 16 GB of RAM.
We use four matroid constraints in the experiments, including 2 uniform matroids and 2 partition matroids. As mentioned, partition matroids admit independence collections of the form ℐ={x⊆ V:∀ i=1,…,k,|x∩ V_i|≤ b_i} for some partitioning {V_i}_i=1^k of V and integers {b_i}_i=1^k. These are useful in modeling group-based budget constraints <cit.>. For uniform matroids, we set the ranks to {10,15}, and denote them with U10 and U15, respectively (the numbers represent the ranks). For partition matroids, we group consecutive vertices sorted by degrees into 10 partitions, i.e. V_i contains the (|V|(i-1)/10+1)-th to (|V|i/10)-th smallest-degree vertices. This is to force the solutions to include a limited number of high-degree vertices, creating scenarios where the greedy algorithm would obtain very different solutions from the ones it would return under uniform constraints. In case 10 does not divide |V|, we set |V_i|=⌊|V|/10⌋ for i=2,…,10. For the first partition matroid (denoted P10), we set b_i to 1 for all i, while for the second (denoted P15), we assign 6 to b_1 and 1 to the rest.
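The pieces of this setup are straightforward to express in code. The sketch below shows the coverage objective, a uniform-matroid oracle, a degree-sorted partition-matroid oracle, and the ss diversity value; the adjacency-dictionary graph representation is an assumption, and the closed form used for ss (∑_v n_v(P)(r−n_v(P))) is our reading of the per-addition increment r−2n−1 used in the analysis rather than a definition quoted verbatim from the paper.

from collections import Counter

def coverage(adj, x):
    """Objective |x ∪ N(x)|: vertices covered by vertex set x.
    adj maps every vertex to the set of its neighbours."""
    covered = set(x)
    for v in x:
        covered |= adj[v]
    return len(covered)

def uniform_oracle(k):
    """Uniform matroid of rank k (U10: k=10, U15: k=15)."""
    return lambda x: len(x) <= k

def partition_oracle(vertices_by_degree, budgets):
    """Partition matroid over groups of consecutive degree-sorted vertices;
    group i may contribute at most budgets[i] vertices (P10: all 1s, P15: [6,1,...,1])."""
    k = len(budgets)
    size = max(len(vertices_by_degree) // k, 1)
    group = {v: min(idx // size, k - 1) for idx, v in enumerate(vertices_by_degree)}
    def ok(x):
        used = Counter(group[v] for v in x)
        return all(used[i] <= b for i, b in enumerate(budgets))
    return ok

def ss(P, r):
    """Diversity of a list of solutions P, assuming ss(P) = sum_v n_v(P) * (r - n_v(P))."""
    n = Counter(v for x in P for v in x)
    return sum(c * (r - c) for c in n.values())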
For each of the 16 instances, we run with r∈{20,100}, Algorithm <ref> with all parameter values b∈[0,r_M], and Algorithm <ref> with all parameter values l∈[1,r]. For both algorithms, the last tie-breaking is done by selecting the first choice (in increasing order of vertex labels). Therefore, there is no randomization, so each algorithm is run once on each instance with each parameter value.
To contextualize the results, we obtain a best known coverage for each instance using the built-in integer linear programming solver in MATLAB. Furthermore, the upper bounds on ss values are given by ∑_ig(|V_i|,b_i,r) since ss can be decomposed over disjoint subsets (in the case of a uniform matroid, V_i=V and b_i=r_M). Note that this bound applies to every threshold ratio α∈[0,1]; the actual optimal value might very well be much smaller, especially for α close to 1. We choose not to normalize our results against actual optimal values because
* solving exactly the problem (<ref>) is prohibitively costly, and
* this is exacerbated by the large number of α values for each of which an optimal value needs to be obtained (i.e. the number of distinct minimum objective values from the algorithms on each instance).
The results are shown in Figure <ref> and <ref>, which visualize, for each graph-constraint-parameter combination, the mean and minimum objective values in the output, and the ss value. We see that the objective values in the outputs are high (within 5% gap of the optimal) for r=20, and predictably degrade for r=100 (about 30%), although the mean values stay within 10% in most settings. Notably, Algorithm <ref> produces higher minimum objective values than Algorithm <ref> does in most settings, and smaller gaps between mean values and minimum values. More importantly, Algorithm <ref> achieves significantly higher ss values in most settings, thus yielding better objective-diversity trade-offs than Algorithm <ref>. This indicates benefits of limiting common elements by controlling their representation in the output, as they do not contribute to diversity.
Interestingly, increasing the output size r only seems to affect the objective values, as the relative diversity values are virtually the same across all settings. We suspect that this might change for more complex matroids. Incidentally, the impacts of r on objective values from Algorithm <ref> seem minimal outside of edge cases (i.e. l=1).
§ CONCLUSION
The diverse solutions problem is a challenging extension to optimization problems that is of practical and theoretical interest. In this work, we considered the problem of finding diverse solutions to maximizing monotone submodular functions over a matroid. To address the difficulty in finding multiple high-quality solutions, we exploited submodularity with two simple greedy algorithms, equipped with objective-diversity trade-off adjusting parameters. Theoretical guarantees for these algorithms were given in both objective values and diversity, as functions of their respective parameters. Our experimental investigation with maximum vertex coverage instances demonstrates strong empirical performance of these algorithms despite their simplicity.
§ ON ALGORITHM <REF> UNDER MATROID INTERSECTION
We can generalize Theorem <ref> and <ref> to the matroid intersection setting by extending the arguments. A set x⊆ V satisfies the matroid intersection constraint defined by matroids M_i=(V,ℐ_i) for i=1,…,k if x∈⋂_i=1^kℐ_i. Let W_i,j=cl_M_j(x^(i-1)), U_i,j=V_i∪ W_i,j∖ x^(i-1), U_i=⋃_j=1^kU_i,j and r_j(·)=r_M_j(·). We remark that matroid intersection is downward-closed, meaning all subsets of a feasible set are feasible.
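Since feasibility under matroid intersection is just simultaneous independence, an oracle for ⋂_jℐ_j can be assembled from the individual oracles; a one-line sketch, reusing the is_independent convention from the earlier sketches:

def intersection_oracle(oracles):
    """x is feasible for the matroid intersection iff x is independent in every M_j."""
    return lambda x: all(is_indep(x) for is_indep in oracles)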
Given monotone submodular f over 2^V, k≥1 matroids M_i=(V,ℐ_i) such that ℐ=⋂_jℐ_j⊃{∅}, integers r≥2 and l∈[1,r), Algorithm <ref> run with inputs f, ℐ, r, l outputs P such that
min{r-1/l+k+1,max_z∈ℐ|z|}min_x∈ Pf(x)≥max_y∈ℐf(y).
This bound is nearly tight for all r≥2, l∈[1,r), and k≥1.
For any x∈ P, we have that since y∈ℐ_j for all j=1,…,k, we can use Lemma <ref> for each of the k matroids, and Lemma <ref> to derive for all i=1,…,|x|+1
|y∩ U_i| ≤|y∩ V_i∖ x^(i-1)|+∑_j=1^k|y∩ W_i,j∖ x^(i-1)|≤|y∩ V_i∖ x^(i-1)|+∑_j=1^kr_j(W_i,j∖ x^(i-1))≤[(r-1)/l+k](i-1)
Applying Lemma <ref> yields the claim.
For tightness, let
f(x) =max{ x_1,(1-ϵ x_l+1/k)}+∑_i=2^lmax{x_i,x_i+l}+(1-ϵ x_1/klr)∑_i=2l+1^(k+2)lx_i
+∑_i=(k+2)l+1^(k+2)l+r-1x_i-ϵ x_1/klr(∑_i=2^lmax{x_i,x_i+l})(∑_i=3l+1^3l+r-1x_i),
observe f is monotone submodular for any ϵ∈[0,1]. For i=1,…,k, let matroid M_i=(V,ℐ_i) be where ℐ_i={x⊆ V:∑_j=1^l(x_j+x_j+(i+1)l)≤ l}. Assuming w.l.o.g. that ϵ>0, v^*=1, it can be the case that there is one solution x' returned by Algorithm <ref> under M whose second element is in {2,…,l} while the rest's are contained in {(k+2)l+1,…,(k+2)l+r-1} due to the fourth tie-breaking rule. After adding the l-th element to each solution, we can assume that x'^(l)={1,…,l}, meaning any element available to add to x' at this point induces a marginal gain less than 1. Since other solutions currently do not contain elements in {2,…,(k+2)l}, and elements in {(k+2)l+1,…,(k+2)l+r-1} can still be added to them, inducing marginal gains 1, the algorithm prioritizes them due to the second tie-breaking rule. Note that elements in {l+1,…,(k+2)l} are ignored in the next r-1 iterations since they induce marginal gains less than 1 with ϵ>0. When finally adding the l+1-th element to x', n_v(P)=l for all v∈{(k+2)l+1,…,(k+2)l+r-1}, and elements in {2l+1,…,(k+2)l} cannot be considered, so x' enjoys no further marginal gain, resulting in f(x')=l. We have y={l+1,…,(k+2)l+r-1}∈⋂_i=1^kℐ_k and f(y)=r+l-1+k(l-ϵ/k)=[(r-1)/l+k+1]f(x')-ϵ.
Given monotone submodular f over 2^V, integers r≥2, l∈[1,r), k≥1 matroids M_i=(V,ℐ_i) where ℐ=⋂_jℐ_j⊃{∅}, and Y∈ℐ^* such that m=max_v∈ Vn_v(Y), Algorithm <ref>, run with inputs f, ℐ, r, l outputs P such that
min{m(r-1)/l+(k+1)|Y|,∑_y∈ Y|y|}min_x∈ Pf(x)≥∑_y∈ Yf(y).
This bound is nearly tight for all r≥2, l∈[1,r), k≥1, size of Y and m∈[1,|Y|].
For any x∈ P, we have |V_i∖ x^(i-1)|≤ (r-1)(i-1)/l for i=1,…,|x|+1, so by the definition of m, ∑_v∈ V_i∖ x^(i-1)n(Y)≤ m(r-1)(i-1)/l. From the proof of Theorem <ref>, we have for i=1,…,|x|+1,
∑_j=1^k∑_v∈ U_in(Y)≤∑_v∈ V_i∖ x^(i-1)n(Y)+∑_y∈ Y∑_j=1^k|y∩ W_i,j∖ x^(i-1)|≤[m(r-1)/l+k|Y|](i-1).
Applying Lemma <ref> yields the claim.
For tightness, we fix q≥1 and m∈[1,q], let w=⌈ q/m⌉, o=[w(k+1)+1]l and
f(x) =max{ x_1,(1-ϵ z_l+1/qk)}+∑_i=2^lmax{x_i,z_i+l}+(1-ϵ x_1/qklr)∑_i=(w+1)l+1^(w+2)ls_i
+∑_i=o+1^o+r-1x_i-ϵ x_1/qklr(∑_i=2^lmax{x_i,z_i+l})(∑_i=o+1^o+r-1x_i),
where z_i=max{x_jl+i}_j=0^w-1 and s_i=∑_j=0^k-1z_wlj+i, f is monotone submodular for any ϵ∈[0,q]. For i=1,…,k, let matroid M_i=(V,ℐ_i) be where ℐ_i={x⊆ V:∑_j=1^lx_j+∑_j=1^wlx_j+(iw+1)l≤ l}. Assuming w.l.o.g. that ϵ>0, v^*=1, it can be the case that there is one solution x' returned by Algorithm <ref> under M whose second element is in {2,…,l} while the rest's are contained in {o+1,…,o+r-1} due to the fourth tie-breaking rule. After adding the l-th element to each solution, we can assume that x'^(l)={1,…,l}, meaning any element available to add to x' at this point induces a marginal gain less than 1. Since other solutions currently do not contain elements in {2,…,o}, and elements in {o+1,…,o+r-1} can still be added to them, inducing marginal gains 1, the algorithm prioritizes them due to the second tie-breaking rule. Note that elements in {l+1,…,o} are ignored in the next r-1 iterations since they induce marginal gains less than 1 with ϵ>0. When finally adding the l+1-th element to x', n_v(P)=l for all v∈{o+1,…,o+r-1}, and elements in {(w+1)l+1,…,o} cannot be considered, so x' enjoys no further marginal gain, resulting in f(x')=l. For i=0,…,q-1, let y_i=⋃_h=1^k{(i w)l+j,[(i w)+hw]l+j}_j=l+1^2l, we have y_i∪{o+1,…,o+r-1}∈⋂_i=1^kℐ_k. Let Y={ y_i∪{o+1,…,o+r-1}}_i=0^m-1∪{ y_i}_i=m^q-1, we have m=max_v∈ Vn_v(Y), and ∑_y∈ Yf(y)=m(r-1)+ql+qk(l-ϵ/(qk))=[m(r-1)/l+(k+1)q]f(x')-ϵ.
§ ACKNOWLEDGEMENTS
This work was supported by the Australian Research Council through grants DP190103894 and FT200100536.
|
http://arxiv.org/abs/2307.03952v2 | 20230708110202 | Is ChatGPT a Good Personality Recognizer? A Preliminary Study | [
"Yu Ji",
"Wen Wu",
"Hong Zheng",
"Yi Hu",
"Xi Chen",
"Liang He"
] | cs.CL | [
"cs.CL"
] |
Is ChatGPT a Good Personality Recognizer? A Preliminary Study

Yu Ji (1,2) [orcid=0000-0001-6048-9184]
[email protected]
Wen Wu (2,3) [orcid=0000-0002-2132-5993] (Corresponding author)
[email protected]
Hong Zheng (4)
Yi Hu (3)
Xi Chen (3)
Liang He (1,2)

(1) Institute of AI Education, East China Normal University, Shanghai, China
(2) School of Computer Science and Technology, East China Normal University, Shanghai, China
(3) Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
(4) Shanghai Changning Mental Health Center, Shanghai, China
In recent years, personality has been regarded as a valuable personal factor and incorporated into numerous tasks such as sentiment analysis and product recommendation. This has led to widespread attention to the text-based personality recognition task, which aims to identify an individual's personality based on given text. Considering that ChatGPT has recently exhibited remarkable abilities on various natural language processing tasks, we provide a preliminary evaluation of ChatGPT on the text-based personality recognition task for generating effective personality data. Concretely, we employ a variety of prompting strategies to explore ChatGPT's ability to recognize personality from given text, especially the level-oriented prompting strategy we designed to guide ChatGPT in analyzing given text at a specified level. The experimental results on two representative real-world datasets reveal that ChatGPT with zero-shot chain-of-thought prompting exhibits impressive personality recognition ability and is capable of providing natural language explanations through text-based logical reasoning. Furthermore, by employing the level-oriented prompting strategy to optimize zero-shot chain-of-thought prompting, the performance gap between ChatGPT and the corresponding state-of-the-art model is narrowed even further. However, we observe that ChatGPT shows unfairness towards certain sensitive demographic attributes such as gender and age. Additionally, we discover that eliciting the personality recognition ability of ChatGPT helps improve its performance on personality-related downstream tasks such as sentiment classification and stress prediction.
ChatGPT, Personality Recognition, Chain-of-Thought Prompting Strategy, Level-Oriented Prompting Strategy, Natural Language Explanation, Unfairness
August 12, 2023
===================
§ INTRODUCTION
As one of the basic individual characteristics, personality describes the relatively stable pattern of an individual w.r.t. her/his behavior, thought, and emotion <cit.>. In recent years, an increasing number of researchers have considered personality as a valuable factor and incorporated it into various tasks (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), resulting in significant performance improvements. In order to automatically obtain large-scale user personality, the text-based personality recognition task is designed to infer user personality based on given user-generated text <cit.>. With the rapid development of pre-trained Large Language Models (LLMs) (e.g., BERT <cit.>, RoBERTa <cit.>, GPT-3 <cit.>, PaLM <cit.>, and LLaMA <cit.>), more and more LLM-based methods have been proposed for the text-based personality detection task and have achieved remarkable performance improvements <cit.>.
More recently, ChatGPT[https://chat.openai.com/] has attracted a considerable amount of attention with its impressive general language processing ability <cit.>, sparking exploration into its capability boundaries <cit.>. Several works have provided a preliminary evaluation of ChatGPT on various common tasks such as machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>. Therefore, in this work, we are interested in evaluating the performance of ChatGPT on text-based personality recognition task for generating effective personality data. We also would like to see whether eliciting the personality recognition ability of ChatGPT contributes to improving its performance on other downstream tasks. Concretely, we raise the following Research Questions (RQs):
RQ1: How do different prompting strategies affect ChatGPT's ability to identify personality?
RQ2: How unfair is ChatGPT when serving as a personality recognizer on various sensitive demographic attributes?
RQ3: Does the personality inferred by ChatGPT help improve its performance on other downstream tasks?
To answer these research questions, we conduct experiments on two representative text-based personality recognition datasets (i.e., Essays and PAN) to compare the performance of ChatGPT, traditional neural network (e.g., Recurrent Neural Network (RNN)), fine-tuned RoBERTa, and corresponding State-Of-The-Art (SOTA) model. Specifically, we adopt three classic prompting strategies to elicit the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot Chain-of-Thought (CoT) prompting, and one-shot prompting. Furthermore, considering that researchers typically analyze texts at different levels (e.g., word level, sentence level, and document level) to obtain valuable text information <cit.>, we design zero-shot level-oriented CoT prompting to guide ChatGPT in analyzing given text at a specified level, thereby gaining a more targeted understanding of given text and recognizing personality more precisely. According to the experimental results, our findings can be summarized as follows:
(1) Among the three classic prompting strategies, zero-shot CoT prompting can better elicit ChatGPT's ability to predict personality based on given text, resulting in its optimal overall performance on the two datasets, although there is still a certain gap in performance compared to the SOTA model. Additionally, ChatGPT with zero-shot CoT prompting could generate more natural language explanations by text-based logical reasoning, enhancing the interpretability of the prediction results. Furthermore, with the assistance of zero-shot level-oriented CoT prompting, ChatGPT could perform more targeted text analysis, enabling it to complete more accurate personality prediction.
(2) ChatGPT exhibits unfairness to some sensitive demographic attributes on text-based personality recognition task. Based on ChatGPT's analysis, the woman group is more likely to have high levels of Openness, Conscientiousness, and Agreeableness when compared to the man group. Besides, relative to the younger group, the elderly group has a higher likelihood to have low Openness.
(3) The personality inferred by ChatGPT could enhance its performance on sentiment classification task and stress prediction task, which may provide new insights for other personality-related tasks (e.g., machine translation and product recommendation).
In the following sections, we first introduce related work regarding personality recognition in Section <ref>. After that, we present the details of our experimental design and analyze the experimental results in Section <ref>. Finally, we conclude the paper and indicate some future directions in Section <ref>.
§ BACKGROUND AND RELATED WORK
Big-Five Factor (BFF) model and Myers-Briggs Type Indicator (MBTI) are two most popular personality assessment models <cit.>. To be specific, BFF model describes personality based on five traits: Openness (O), Conscientiousness (C), Extraversion (E), Agreeableness (A), and Neuroticism (N) <cit.>. Table <ref> shows the propensities of individuals under different personality traits and levels. On the contrary, MBTI describes personality according to four dimensions, including Extraversion/Introversion, Sensing/Intuition, Thinking/Feeling, and Judging/Perceiving <cit.>. Compared to BFF model, MBTI still faces controversy within the academic community <cit.>. Hence, we adopt BFF model to describe individuals' personalities in this paper.
In recent years, an increasing number of researchers regarded Big-Five personality as a valuable personal factor and incorporated it into their models, resulting in significant performance improvements on various tasks <cit.>. For example, Wu et al. <cit.> adopted users' Big-Five personalities to personalize the recommendation diversity being tailored to the users' diversity needs. Ban et al. <cit.> utilized learners' Big-Five personalities to model the individual differences for better predicting the learners' knowledge levels. This has sparked researchers' interest in efficiently acquiring Big-Five personalities.
The conventional approach to identify an individual's Big-Five personality is via personality questionnaires (e.g., NEO-FFI questionnaire <cit.>, BFI-44 <cit.>, BFI-10 <cit.>, and BFMS <cit.>). These personality questionnaires are typically carefully designed by psychology experts and require individuals to rate their behaviors using Likert scales, which is time-consuming and labor-intensive <cit.>. In order to apply Big-Five personality on a large scale across various domains (e.g., machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, and mental health analysis <cit.>), researchers attempted to implicitly obtain Big-Five personality from various User-Generated Content (UGC), including text <cit.>, handwriting <cit.>, speech <cit.>, electroencephalography (EEG) <cit.>, and so on. Due to substantial evidence from psychological research demonstrating the correlation between user-generated texts and users' Big-Five personalities <cit.>, researchers made an extensive exploration of text-based personality recognition. However, the related methods normally regarded text-based personality recognition task as a special case of text classification. Most of them utilized machine learning algorithms to build personality recognizers with text features such as Linguistic Inquiry and Word Count (LIWC) <cit.> and Structured Programming for Linguistic Cue Extraction (SPLICE) <cit.>. Furthermore, with the rapid development of deep learning, more and more methods using deep neural networks are proposed to solve text-based personality recognition task, as deep neural networks could extract high-order text features from user-generated text automatically <cit.>. For example, Majumder et al. <cit.> designed a deep convolutional neural network with Word2Vec embeddings <cit.> for personality detection. Xue et al. <cit.> presented a two-level hierarchical neural network to learn the deep semantic representations of users' posts for recognizing users' Big-Five personalities. Lynn et al. <cit.> utilized message-level attention to learn the relative weight of users' posts for assessing users' Big-Five personalities. Zhu et al. <cit.> learned post embeddings by contrastive graph transformer network for personality detection. Zhu et al. <cit.> proposed a lexical psycholinguistic knowledge-guided graph neural network to enrich the semantics of users' posts with the personality lexicons. Recently, the remarkable performance enhancements achieved by LLMs in numerous Nature Language Processing (NLP) tasks <cit.> prompted researchers to explore the utilization of LLMs in text-based personality prediction task <cit.>. For example, Mehta et al. <cit.> performed extensive experiments with BERT to arrive at the optimal configuration for personality detection. Ren et al. <cit.> leveraged BERT to generate sentence-level embedding for personality recognition, while a sentiment dictionary is used to consider sentiment information in the process of personality prediction.
Lately, the release of ChatGPT has drawn increasingly great attention due to the incredible general language processing ability of ChatGPT. Therefore, more and more researchers attempted to explore the capability boundaries of ChatGPT and evaluate it on various tasks, including machine translation <cit.>, product recommendation <cit.>, sentiment analysis <cit.>, mental health analysis <cit.>, and so on. Hence, in this work, we are interested in exploring the personality recognition ability of ChatGPT through different prompting strategies for obtaining effective personality data.
§ EXPERIMENTS
§.§ Datasets
We adopt two well-known publicly available datasets in our experiments for text-based Big-Five personality recognition task:
(1) Essays <cit.>: This stream-of-consciousness dataset consists of 2,467 essays written by psychology students, and the Big-Five personality levels (i.e., low and high levels) of the students were acquired through standardized self-report questionnaire.
(2) PAN[https://pan.webis.de/clef15/pan15-web/author-profiling.html]: This dataset comes from the PAN2015 data science competition, which consists of four language sub-datasets (i.e., Dutch, English, Italian, and Spanish). In this work, we choose the English sub-dataset, which contains 294 users' tweets and their Big-Five personality scores. The Big-Five personality scores of the users were obtained by BFI-10 questionnaire <cit.>. Note that, similar to <cit.>, for each of the five personality traits, we adopt the corresponding mean value to convert personality scores into two personality levels (i.e., low and high levels). To be specific, personality score below the corresponding mean value is converted into the low level, while personality score equal to or above the corresponding mean value is converted into the high level.
Similar to <cit.>, we randomly split Essays and PAN datasets into training, validation, and testing sets in the proportion of 8:1:1. The statistics of the two datasets are summarized in Figure <ref>.
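For reference, the mean-split conversion used for PAN and the 8:1:1 random split can be sketched as follows; the random seed and the use of scikit-learn are our own choices, not details reported in the paper.

import numpy as np
from sklearn.model_selection import train_test_split

def scores_to_levels(scores):
    """PAN: a trait score below the trait mean becomes Low, otherwise High."""
    scores = np.asarray(scores, dtype=float)
    return np.where(scores < scores.mean(), "Low", "High")

def split_8_1_1(samples, seed=42):
    """Random 8:1:1 train/validation/test split used for both datasets."""
    train, rest = train_test_split(samples, test_size=0.2, random_state=seed)
    valid, test = train_test_split(rest, test_size=0.5, random_state=seed)
    return train, valid, test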
§.§ Prompting Strategies
We employ three classic prompting strategies to explore the personality recognition ability of ChatGPT, including zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. The reason for using one-shot prompting alone is that ChatGPT has a limitation on the length of input. Considering that the texts in both Essays and PAN datasets are normally long (i.e., the average lengths of texts in Essays and PAN datasets are 749 and 1,405 respectively), we only provide one demonstration example in the input (i.e., one-shot prompting) without offering more demonstration examples (e.g., two-shot prompting). In addition, inspired by existing NLP research mining valuable text information at different levels (e.g., word level, sentence level, and document level) <cit.>, we design level-oriented prompting strategy to guide ChatGPT in analyzing text at a specified level. Concretely, we combine the level-oriented prompting strategy with zero-shot CoT prompting to construct zero-shot level-oriented CoT prompting. The reason for constructing zero-shot level-oriented CoT prompting based on zero-shot CoT prompting is that ChatGPT with zero-shot CoT prompting has better overall performance on the two datasets when compared to zero-shot prompting and one-shot prompting (see Section <ref>). Hence, we would like to see whether the level-oriented prompting strategy could further enhance the effectiveness of zero-shot CoT prompting. Note that, the four prompting strategies require ChatGPT to simultaneously output the person's levels of five personality traits (i.e., O, C, E, A, and N) based on given text.
(1) Zero-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level:
(2) Zero-Shot CoT prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
(3) One-Shot prompting
Analyze the person-generated text, determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Example Text]"
Level: [Openness Level of Example Text] Openness, [Conscientiousness Level of Example Text] Conscientiousness, [Extraversion Level of Example Text] Extraversion, [Agreeableness Level of Example Text] Agreeableness, [Neuroticism Level of Example Text] Neuroticism
Text: "[Text]"
Level:
Note that, to minimize the variance resulting from the sampling of demonstration examples, we randomly select three demonstration examples for conducting experiments and reporting the average performance.
(4) Zero-Shot Level-Oriented CoT prompting
We modify zero-shot CoT prompting as follows to construct zero-shot level-oriented CoT prompting, where [Specified Level] can be set to word level, sentence level, or document level.
Analyze the person-generated text from [Specified Level], determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High.
Text: "[Text]"
Level: Let's think step by step:
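Since all four strategies ask ChatGPT to return Low or High for the five traits in one response, a small parser is needed to map free-form answers (especially the CoT ones) back to labels. The sketch below is one possible implementation and is not taken from the paper; responses that yield no parsable level correspond to the cases handled by the Majority fallback described later.

import re

TRAITS = ["Openness", "Conscientiousness", "Extraversion", "Agreeableness", "Neuroticism"]

def parse_levels(response: str) -> dict:
    """Extract a Low/High level per trait from answers such as
    'High Openness, Low Conscientiousness, ...' or 'Openness: High ...'."""
    levels = {}
    for trait in TRAITS:
        match = re.search(
            rf"(Low|High)\s+{trait}|{trait}\s*[:\-]?\s*(Low|High)",
            response,
            re.IGNORECASE,
        )
        if match:
            levels[trait] = (match.group(1) or match.group(2)).capitalize()
    return levels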
§.§ Baselines
Based on our literature research, we choose the following representative models as baselines:
(1) RNN <cit.>: uses RNN to generate text representation for recognizing Big-Five personality. In addition, the pre-trained GloVe model <cit.> is used to initialize the word embeddings.
(2) RoBERTa <cit.>: fine-tunes the pre-trained RoBERTa-Base model and feeds the representation of [CLS] to a linear layer for personality classification (a fine-tuning sketch is given after this list).
(3) HPMN (BERT) <cit.>: is one of the SOTA personality prediction models, which uses the personality lexicons to incorporate relevant external knowledge for enhancing the semantic meaning of the person-generated text. Its performance on Essays and PAN datasets is quoted from the original paper.
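As a rough sketch of baseline (2), the snippet below fine-tunes RoBERTa-Base with Hugging Face Transformers; the classification head of RobertaForSequenceClassification sits on the first (<s>, i.e. [CLS]-style) token. Treating each trait as an independent binary classifier, the number of epochs, and the data handling are our own assumptions; the 512-token limit, batch size 32, and learning rate 5e-5 follow the implementation details reported below.

import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import RobertaTokenizerFast, RobertaForSequenceClassification

def fine_tune_roberta(texts, labels, epochs=3, lr=5e-5, batch_size=32):
    """Fine-tune roberta-base as a binary (Low/High) classifier for one trait."""
    tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
    model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
    enc = tokenizer(texts, padding="max_length", truncation=True,
                    max_length=512, return_tensors="pt")
    data = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
    loader = DataLoader(data, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, y in loader:
            out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
    return model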
§.§ Evaluation Metrics
It can be observed from Figure <ref> that Essays and PAN datasets maintain class balance across most of the five personality traits. Therefore, we use Accuracy (the higher the better) <cit.> as the evaluation metric, which is used to measure the personality classification performance. Besides, to make a more intuitive comparison, we adopt Accuracy Improvement Percentage (AIP) to measure the accuracy improvement percentage of ChatGPT against the SOTA model (i.e., HPMN (BERT)), which is calculated as:
AIP=(Accuracy_testmodel-Accuracy_SOTA)/Accuracy_SOTA×100%
where Accuracy_SOTA and Accuracy_testmodel denote the accuracy of the SOTA model and the test model such as ChatGPT with zero-shot prompting.
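Both metrics are simple to compute; the helper names below are ours. As a quick sanity check, plugging in the PAN averages reported later (57.3% for ChatGPT with zero-shot prompting vs. 67.5% for the SOTA model) reproduces the -15.1% AIP quoted in the results.

def accuracy(y_true, y_pred):
    """Fraction of test samples whose predicted level matches the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def aip(acc_test_model, acc_sota):
    """Accuracy Improvement Percentage of a test model against the SOTA model."""
    return (acc_test_model - acc_sota) / acc_sota * 100

print(round(aip(0.573, 0.675), 1))  # -15.1, ChatGPT_ZS vs. HPMN (BERT) on PAN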
§.§ Implementation Details
For the usage of ChatGPT, we adopt the representative version of ChatGPT (i.e., gpt-3.5-turbo). In addition, we set the temperature to 0 for producing more deterministic and focused responses. For RNN and fine-tuned RoBERTa, we limit each text to no more than 512 words (padding when the text length is less than 512 and truncating when it is greater than 512). Besides, for RNN, the dimension of the hidden state, the batch size, and the learning rate are set to 128, 32, and 1e-3 respectively. For fine-tuned RoBERTa, the batch size and the learning rate are set to 32 and 5e-5 respectively.
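For illustration, a query under zero-shot CoT prompting with these settings might be issued as below, using the openai Python package (v1 interface); the paper does not show its client code, so the function and variable names here are ours and response parsing is omitted.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ZERO_SHOT_COT = (
    "Analyze the person-generated text, determine the person's levels of "
    "Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. "
    "Only return Low or High.\n"
    'Text: "{text}"\n'
    "Level: Let's think step by step:"
)

def recognize_personality(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # version used in the paper
        temperature=0,           # deterministic, focused responses
        messages=[{"role": "user", "content": ZERO_SHOT_COT.format(text=text)}],
    )
    return response.choices[0].message.content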
§.§ Overall Performance (RQ1)
Considering that ChatGPT may refuse personality recognition for various reasons[One unexpected response of ChatGPT: "Unfortunately, there is not enough information in the provided text to accurately determine the person's levels of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism."], we adopt the Majority approach to obtain the prediction results when encountering such rare situations. Specifically, for each personality trait, we regard the majority personality level in the training set as the personality level of each sample in the testing set. The experimental results on Essays and PAN datasets are shown in Table <ref> and Table <ref>. Concretely, ChatGPT_ZS, ChatGPT_CoT, and ChatGPT_OS represent ChatGPT with zero-shot prompting, zero-shot CoT prompting, and one-shot prompting. In addition, ChatGPT_CoT_W, ChatGPT_CoT_S, and ChatGPT_CoT_D denote ChatGPT with zero-shot level-oriented CoT prompting, where [Specified Level] is set to word level, sentence level, and document level respectively.
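The Majority fallback itself is a one-liner; a possible sketch (the helper name is ours):

from collections import Counter

def majority_level(train_levels):
    """Fallback when ChatGPT declines to answer: predict the personality level
    that occurs most often in the training set for the trait."""
    return Counter(train_levels).most_common(1)[0][0]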
Results of zero-shot prompting. As shown in Table <ref> and Table <ref>, ChatGPT_ZS has better performance than the traditional neural network RNN on both Essays and PAN datasets. For example, relative to RNN, ChatGPT_ZS increases its average classification accuracy from 50.3% to 57.4% on Essays dataset. Furthermore, ChatGPT_ZS not only performs comparably to fine-tuned RoBERTa on Essays dataset (e.g., 57.4% vs. 57.3% in terms of average classification accuracy) but also outperforms fine-tuned RoBERTa on PAN dataset (e.g., 57.3% vs. 55.3% w.r.t. average classification accuracy). Therefore, ChatGPT_ZS exhibits incredible text-based personality recognition ability under zero-shot setting. Since the SOTA model is a task-specific fully-supervised model with complex architecture for personality recognition task, the performance of ChatGPT_ZS falls far behind that of the SOTA model on the two datasets (e.g., 57.3% vs. 67.5% w.r.t. average classification accuracy on PAN dataset). However, another interesting observation is that compared with Essays dataset (i.e., the relatively large-scale dataset), ChatGPT_ZS shows a relatively higher AIP on PAN dataset (i.e., the relatively small-scale dataset). For example, the AIP of ChatGPT_ZS against the SOTA model on Essays and PAN datasets are -29.0% and -15.1% respectively. Furthermore, ChatGPT_ZS even surpasses the SOTA model when predicting personality trait A on PAN dataset (i.e., 70.0% vs. 66.3%). The possible reason is that PAN dataset provides relatively fewer training data for the fully-supervised SOTA model, preventing it from fully learning the differences in personality levels. In contrast, ChatGPT_ZS does not require training data and relies solely on its existing knowledge under zero-shot setting, narrowing the performance gap between ChatGPT_ZS and the SOTA model.
Results of zero-shot CoT prompting. Table <ref> and Table <ref> reveal that zero-shot CoT prompting could effectively enhance ChatGPT's ability on the text-based personality recognition task. For example, ChatGPT_CoT increases its average classification accuracy from 57.3% to 60.7% on the PAN dataset when compared with ChatGPT_ZS. As for the reason, with the help of zero-shot CoT prompting, ChatGPT_CoT can perform more complex logical reasoning, so as to accurately complete the personality prediction task. Besides, ChatGPT_ZS only provides final prediction results (see Figure <ref>), while ChatGPT_CoT can provide additional natural language explanations for its prediction results in most cases (see Figure <ref>). The natural language explanations generated by ChatGPT_CoT not only enhance users' trust in the prediction results but also enable developers to obtain a better understanding of the knowledge deficiencies in ChatGPT. To gain a deeper insight into the natural language explanations generated by ChatGPT_CoT, we categorize them into three types: (1) None: no explanation or a refusal to perform personality recognition; (2) Original Content: only the original text is provided as the explanation; (3) Logical Reasoning: logical reasoning based on the original text. Figure <ref> shows examples of the three types of natural language explanations for the prediction of personality trait O, and Figure <ref> illustrates the distribution of the three types of natural language explanations on different datasets and personality traits. As depicted in Figure <ref>, on both the Essays and PAN datasets, ChatGPT_CoT provides more natural language explanations of the logical reasoning type for the prediction of personality trait O, while offering more natural language explanations of the original content type when identifying personality trait N. With regard to possible reasons, personality trait O reflects whether a person is creative/open-minded (with high level) or reflective/conventional (with low level) <cit.>, which may not be directly presented in person-generated text. Hence, the prediction of personality trait O requires ChatGPT to engage in more logical reasoning for a deeper analysis of the given text. For example, as shown in Figure <ref>, based on the given text, ChatGPT_CoT infers that the person's text is mostly focused on concrete details and experiences, with little indication of abstract or imaginative thinking. Therefore, ChatGPT_CoT predicts that the person has low O. On the contrary, personality trait N reflects whether a person is emotionally stable (with low level) or emotionally unstable (with high level) <cit.>. Since individuals normally express their negative emotions (e.g., anxiety) directly in their texts, it is relatively easier for ChatGPT_CoT to predict personality trait N based on the original text without logical reasoning. For example, one natural language explanation of the original content type generated by ChatGPT_CoT for predicting personality trait N is “mentions feeling stressed, tense, and worried about health problems and homework overload". Furthermore, as demonstrated in Figure <ref>, compared with the Essays dataset, ChatGPT_CoT provides relatively more natural language explanations of the logical reasoning type for personality recognition on the PAN dataset.
The possible reason is that the Essays dataset consists of stream-of-consciousness essays written by psychology students under professional guidance, while the PAN dataset is composed of tweets written freely by various internet users. Hence, compared with the texts in the Essays dataset, the texts in the PAN dataset generally contain relatively less valuable information, which increases the difficulty of text-based personality prediction on the PAN dataset. Therefore, compared to the Essays dataset, ChatGPT_CoT needs to perform more logical reasoning to accurately accomplish the personality recognition task on the PAN dataset.
Results of one-shot prompting. From Table <ref> and Table <ref>, it can be observed that by providing a demonstration example, ChatGPT's performance has improved on Essays dataset but largely declined on PAN dataset. To be specific, ChatGPT_OS increases its average classification accuracy from 57.4% to 58.2% on Essays dataset when compared with ChatGPT_ZS. However, relative to ChatGPT_ZS, ChatGPT_OS decreases its average classification accuracy from 57.3% to 49.3% on PAN dataset. Regarding possible reasons, on the one hand, as mentioned above, the texts in Essays dataset generally contain more valuable information when compared to PAN dataset. Hence, there is a higher probability of selecting samples containing more invalid information from PAN dataset than from Essays dataset, thereby affecting ChatGPT_OS's learning of the relationship between text and Big-Five personality on PAN dataset. On the other hand, the persons in Essays dataset are all psychology students, while the persons in PAN dataset are various internet users from different age groups (from 18 years old to over 50 years old). Hence, without the corresponding demographic attributes (e.g., age) provided, the demonstration example selected from the training set of PAN dataset may not assist ChatGPT_OS in predicting the personalities of certain groups. For instance, if the demonstration example is generated by a young person, the association between text and personality that ChatGPT_OS learns from this demonstration example may not be helpful in predicting the personality of an old person.
Results of zero-shot level-oriented prompting. Table <ref> and Table <ref> demonstrate that guiding ChatGPT_CoT to analyze the given text from a specified level helps ChatGPT analyze the text in a more targeted manner and complete the personality prediction task more precisely. For example, by guiding ChatGPT_CoT_D to analyze the given text from the document level, its performance on the Essays dataset can rival the performance of ChatGPT_OS (58.3% vs. 58.2% w.r.t. average classification accuracy). Similarly, on the PAN dataset, when ChatGPT_CoT_S is guided to analyze the given text from the sentence level, its average classification accuracy improves notably compared to ChatGPT_CoT, rising from 57.3% to 62.7%. We believe the possible reason is that the texts in the Essays dataset were written within a limited time frame, making them more suitable for an overall analysis at the document level. On the other hand, the texts in the PAN dataset are composed of tweets posted at different times. Hence, it is more appropriate to analyze the given text in the PAN dataset from the sentence level, which helps mine the diverse individual information reflected in different tweets. This discovery not only helps optimize existing promptings for text analysis but also offers new insights into eliciting various abilities of LLMs in a fine-grained manner.
§.§ Fairness of ChatGPT on Personality Recognition (RQ2)
Considering that LLMs may be unfair to certain groups due to social bias in their large pre-training corpora <cit.>, we further investigate the fairness of ChatGPT on the personality prediction task across different groups. To be specific, we adopt ChatGPT_CoT with different demographic attributes for personality prediction on the PAN dataset, as the PAN dataset provides various demographic attributes, including gender and age (see Table <ref>). Concretely, we modify the zero-shot CoT prompting as follows to provide ChatGPT with the specific demographic attribute corresponding to the given text:
Analyze the person-generated text, determine the person's level of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High. Note that, the person is [Corresponding Attribute].
Text: "[Text]"
Level: Let's think step by step:
Please refer to Table <ref> for the setting of [Corresponding Attribute]. For example, [Corresponding Attribute] is set to aged between 18 and 24 when the age of the corresponding person is between 18 and 24 years old. To be specific, ChatGPT_CoT_gender and ChatGPT_CoT_age represent ChatGPT with the modified zero-shot CoT promptings, which incorporates gender and age information respectively.
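For illustration, the sketch below assembles the modified prompting and queries ChatGPT. The template strings follow the prompting shown above, and the gpt-3.5-turbo model with temperature 0 follows the implementation details; the use of the legacy openai Python client (pre-1.0 ChatCompletion interface) and the helper's name are our assumptions.

```python
import openai  # assumes the (pre-1.0) OpenAI Python client is installed and the API key is configured

PROMPT_TEMPLATE = (
    "Analyze the person-generated text, determine the person's level of Openness, "
    "Conscientiousness, Extraversion, Agreeableness, and Neuroticism. Only return Low or High. "
    "Note that, the person is {attribute}.\n"
    'Text: "{text}"\n'
    "Level: Let's think step by step:"
)

def query_personality(text, attribute="aged between 18 and 24"):
    prompt = PROMPT_TEMPLATE.format(attribute=attribute, text=text)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic, focused responses, as in the implementation details
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```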
It is apparent from Figure <ref> that the incorporation of demographic attributes impairs the personality prediction ability of ChatGPT_CoT to some extent, especially the integration of age information. For example, relative to ChatGPT_CoT, ChatGPT_CoT_gender and ChatGPT_CoT_age decrease their average accuracy from 55.5% to 55.2% and 54.0% respectively. We speculate that this phenomenon may be due to ChatGPT's biases towards certain groups, which leads to unfair treatment of those groups. In order to better observe ChatGPT's biases on personality prediction task, we first obtain the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups. We then visualize the proportion of low and high levels in those prediction results. Concretely, Figure <ref> and Figure <ref> show the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_gender towards woman and man groups respectively. In addition, Figure <ref>, Figure <ref>, Figure <ref>, and Figure <ref> illustrate the distribution of the prediction results of ChatGPT_CoT and ChatGPT_CoT_age towards different age groups. Take Figure <ref> as an example, the figure represents that among the 174 women in PAN dataset, 51% of them have high O (i.e., ground truth). However, ChatGPT_CoT classifies 74.8% of the 174 women as high O, while ChatGPT_CoT_gender classifies 82.3% of the 174 women as high O. In contrast, as shown in Figure <ref>, among the 174 men in PAN dataset, 47.6% of them have low O (i.e., ground truth). However, ChatGPT_CoT classifies 29.9% of the 174 men as low O, while ChatGPT_CoT_gender classifies 32.0% of the 174 men as low O. In summary, after adding gender information, ChatGPT_CoT_gender classifies more women as high O and classifies more men as low O. This phenomenon suggests that ChatGPT considers women to be more likely to belong to high O when compared to men. In order to make a more intuitive comparison of the prediction results of ChatGPT_CoT, ChatGPT_CoT_gender, and ChatGPT_CoT_age towards different groups, we further visualize the changes of the proportion of high level in the prediction results of ChatGPT_CoT_gender/ ChatGPT_CoT_age relative to ChatGPT_CoT (see Figure <ref>). For example, as displayed in Figure <ref>, for 174 women in PAN dataset, the proportion of women with high A in the prediction results of ChatGPT_CoT_gender has increased by 8.1% when compared to ChatGPT_CoT. Based on Figure <ref>, the biases of ChatGPT towards certain groups can be summarized as follows:
(1) Relative to the man group, the woman group is more likely to exhibit high levels of personality traits O, C, and A.
(2) The older an individual is, the greater the likelihood of her/his personality traits O being low level.
However, these findings are not entirely consistent with existing research. For example, some studies suggest that the woman group is more likely to exhibit high levels of personality traits A and N compared to the man group, whereas gender differences in the other personality traits (i.e., O, C, and E) have been either inconsistent or of negligible magnitude <cit.>. Possible reasons for this could be that, on the one hand, ChatGPT's biases are influenced by the biases of the annotators, which may not be representative. On the other hand, these findings are discovered based solely on the PAN dataset, limiting their generalization to some extent. Nevertheless, this phenomenon serves as a cautionary reminder for researchers to consider fairness when utilizing ChatGPT for personality prediction.
§.§ ChatGPT's Personality Recognition Ability on Downstream Task (RQ3)
We apply the personality data generated by ChatGPT to other downstream tasks for validating the effectiveness of ChatGPT's personality recognition ability. Concretely, we choose sentiment classification task and stress prediction task as the downstream tasks, because existing psychological research indicates that there is a correlation between Big-Five personality and sentiment expression <cit.> as well as stress vulnerability <cit.>. For each task, to make a more comprehensive assessment of the impact of personality data generated by ChatGPT, we first adopt ChatGPT_CoT and fine-tuned RoBERTa to generate the corresponding Big-Five personality based on given text respectively. We then use a basic prompting to elicit the task-related ability (i.e., sentiment classification ability and stress prediction ability) of ChatGPT. Finally, we modify the basic prompting by incorporating different Big-Five personalities and observe the task-related ability of ChatGPT with different modified basic promptings.
To be specific, for the sentiment classification task, we adopt a subset of the Yelp-2 dataset <cit.> for conducting experiments. The reason for not utilizing the complete Yelp-2 dataset is to take into account the cost of using ChatGPT's API. Concretely, we randomly select 500 positive samples and 500 negative samples from the testing set of the Yelp-2 dataset to construct the subset. For the stress prediction task, we choose the Dreaddit dataset, whose testing set consists of 715 samples (369 positive samples and 346 negative samples). Specifically, considering that the texts in the PAN, Yelp-2, and Dreaddit datasets are all web posts, we use the fine-tuned RoBERTa trained on the PAN dataset to generate personality data. Besides, since both tasks are binary classification tasks, we adopt Accuracy (the higher the better) as the evaluation metric. In addition, the basic promptings used for the sentiment classification task and the stress prediction task are proposed by <cit.> and <cit.>. Please refer to Table <ref> for the details of the unmodified/modified basic promptings.
The experimental results are illustrated in Figure <ref>. Note that, ChatGPT_basic represents ChatGPT with the basic prompting, while ChatGPT_basic_PC and ChatGPT_basic_PR denotes ChatGPT with the modified basic promptings, which incorporates the personality data generated by ChatGPT_CoT and fine-tuned RoBERTa respectively. It can be observed that after incorporating the personality data predicted by ChatGPT_CoT, there is an improvement in ChatGPT's performance on both sentiment classification task and stress prediction task. For example, ChatGPT_basic_PC increases its classification accuracy from 96.6% to 97.6% on sentiment classification task when compared to ChatGPT_basic. While for stress prediction task, ChatGPT_basic_PC increases its classification accuracy from 71.3% to 73.0% when compared to ChatGPT_basic. This proves the effectiveness of the personality data generated by ChatGPT_CoT. With an understanding of individuals' Big-Five personalities, ChatGPT can analyze their sentiment expression and stress condition in a more personalized manner. Another interesting finding is that the personality data generated by fine-tuned RoBERTa can help improve the performance of ChatGPT in sentiment classification tasks, but it actually decreases ChatGPT's performance in stress prediction task. We believe that the possible reason for this is that fine-tuned RoBERTa trained on PAN dataset does not generalize well, which results in the poor performance of personality prediction on Dreaddit dataset. In contrast, ChatGPT relies solely on zero-shot CoT prompting to elicit its personality prediction ability and does not require training data, thus exhibiting stronger generalization performance on different datasets.
§ CONCLUSION AND FUTURE DIRECTIONS
In this work, we evaluate the personality recognition ability of ChatGPT with different prompting strategies, and compare its performance with RNN, fine-tuned RoBERTa, and corresponding SOTA model on two representative text-based personality identification datasets. With the elicitation of zero-shot CoT prompting, ChatGPT exhibits impressive personality recognition ability and has strong interpretability for its prediction results. In addition, we find that guiding ChatGPT to analyze text at a specified level helps improve its ability to predict personality, which proves the effectiveness of level-oriented prompting strategy. Moreover, we discover that ChatGPT exhibits unfairness to some sensitive demographic attributes, leading to unfair treatment of some specific groups when predicting personality. Besides, we apply the personality data inferred by ChatGPT in other downstream tasks and achieve performance improvement to some extent. This proves that ChatGPT's personality prediction ability is effective and has high generalization performance.
As for future work, on the one hand, we would like to apply level-oriented prompting strategy to more NLP tasks for observing its effectiveness in mining text information. On the other hand, with the continuous emergence of various LLMs, we are interested in exploring the construction of domain-specific LLMs based on psychological data in order to enhance the personality recognition ability of LLMs.
§ ACKNOWLEDGMENT
This work is funded by Science and Technology Commission of Shanghai Municipality, China (under project No. 21511100302), National Natural Science Foundation of China (under project No. 61907016), Natural Science Foundation of Shanghai (under project No. 22ZR1419000), the Research Project of Changning District Science and Technology Committee (under project No. CNKW2022Y37), and the Medical Master's and Doctoral Innovation Talent Base Project of Changning District (under project No. RCJD2022S07). In addition, it is also supported by The Research Project of Shanghai Science and Technology Commission (20dz2260300) and The Fundamental Research Funds for the Central Universities.
§.§ CRediT Authorship Contribution Statement
Yu Ji: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Data Curation, Writing-Original Draft, Writing-Review and Editing. Wen Wu: Conceptualization, Methodology, Formal analysis, Investigation, Writing-Original Draft, Writing-Review and Editing, Supervision. Hong Zheng: Writing-Review and Editing. Yi Hu: Supervision, Writing-Review and Editing. Xi Chen: Writing-Review and Editing. Liang He: Supervision, Writing-Review and Editing.
§.§ Ethical Approval
Not applicable.
§.§ Data Availability
Data will be made available on request.
§.§ Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
http://arxiv.org/abs/2307.05297v1 | 20230711144224 | Ab initio Self-consistent GW Calculations in Non-Equilibrium Devices: Auger Recombination and Electron-Electron Scattering | [
"Leonard Deuschle",
"Jonathan Backman",
"Mathieu Luisier",
"Jiang Cao"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"quant-ph"
] |
Ab initio Self-consistent GW Calculations in Non-Equilibrium Devices: Auger Recombination and Electron-Electron Scattering
Leonard Deuschle1,
Jonathan Backman, Mathieu Luisier, and
Jiang Cao
Integrated Systems Laboratory,
ETH Zurich,
Switzerland
Corresponding Author: [email protected]
August 12, 2023
==============================================================================================================================================================================
We present first-principles quantum transport simulations of single-walled carbon nanotubes based on the NEGF method and including carrier-carrier interactions within the self-consistent GW approximation. Motivated by the characteristic enhancement of the interaction between charge carriers in one-dimensional systems, we show that the developed framework can predict Auger recombination, hot carrier relaxation, and impact ionization in this type of nanostructure. Using the computed scattering rates, we infer the inverse electron-hole pair lifetimes for different Auger processes in several device configurations.
CNT, Auger recombination, NEGF, GW
§ INTRODUCTION
Experimental investigations on semiconducting single-walled carbon nanotubes (SWCNs) have revealed the strong role of electron-electron (e-e) interactions in these devices <cit.>. First, the measured bandgap is greater than the one predicted by Density Functional Theory (DFT). Second, non-radiative Auger recombination <cit.> and avalanche-type processes <cit.> are observed, which are clear signatures that strong e-e interactions may occur. These findings have led to theoretical studies of SWCNs that go beyond one-electron models <cit.>. However, a complete ab initio treatment of the e-e interactions in non-equilibrium quantum transport simulations remains missing. Self-consistently treating e-e scattering within the Non-equilibrium Green's function (NEGF) formalism and the GW approximation (scGW) is indeed a formidable challenge. It has only been successfully addressed in small molecules due to the high computational burden associated with such calculations <cit.>. In this work, we present a fully ab initio scGW simulation study of (8,0)-SWCN devices. We demonstrate that our method naturally accounts for Auger processes under different doping and bias configurations of the SWCN as well as under non-equilibrium conditions. To the best of our knowledge this is the first ab initio study of e-e scattering in non-equilibrium devices.
§ METHOD
§.§ Hamiltonian Generation
We first perform a DFT calculation of the (8,0)-SWCN unit cell with VASP <cit.> using GGA-PBE pseudo-potentials,
assuming that the nanotube transport direction is periodic. We then use wannier90 <cit.> to transform the plane-wave basis into a set of maximally localized Wannier functions (MLWF) and upscale the Hamiltonian to the size of the device of interest. The resulting SWCN has a diameter of 6.29 Å and a length of roughly 12 nm, and is made of 896 atoms with a single orbital per atom (Fig. <ref>a-b).
§.§ Non-Equilibrium scGW
The following equation must be solved to obtain the retarded Green's function G^R
(E-H-Σ^R_B(E)- Σ^R_GW(E)) · G^R(E) = I,
where E is the energy vector, H is the Hamiltonian matrix and Σ^R_B and Σ^R_GW are the retarded boundary and GW self-energies, respectively. I denotes the identity matrix.
The lesser and greater Green's function G^≶ are given by
G^≶(E) = G^R(E)·Σ^≶(E) ·G^R^†(E).
Here, Σ^≶ denotes the sum of the lesser and greater boundary and GW self-energies.
The retarded, lesser, and greater GW self-energies are given by
Σ^≶_GW,ij = i ∫ dE' G^≶_ij(E')W^≶_ij(E-E') ,
Σ_GW,ij^R = i∫ dE' [ G^R_ij(E')W^<_ij(E-E') + G^<_ij(E')W^R_ij(E-E') + G^R_ij(E')W^R_ij(E-E') ],
where W is the screened interaction. The self-energy matrix elements between orbital index i of an atom situated at position 𝐑_i and another one j at 𝐑_j in Eqs. (<ref>) and (<ref>) are found by performing the convolution in an element-wise fashion.
Note that all energy convolutions are explicitly calculated in the energy space, not through Fourier transform.
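As a numerical illustration of these element-wise energy convolutions, the sketch below evaluates Σ^< on a uniform energy grid. It is a toy discretization (our own, not the production implementation): it assumes the grid starts at E = 0 so that E - E' maps back onto the same grid, and it simply drops contributions that fall outside the grid. The greater component and the retarded self-energy follow the same pattern with the corresponding G and W components.

```python
import numpy as np

def lesser_gw_self_energy(G_less, W_less, dE):
    """Sigma^<_ij(E) = i * Integral dE' G^<_ij(E') W^<_ij(E - E'), evaluated element-wise.

    G_less, W_less: complex arrays of shape (nE, norb, norb) sampled on the same
    uniform energy grid with spacing dE (grid assumed to start at E = 0).
    """
    nE = G_less.shape[0]
    Sigma = np.zeros_like(G_less)
    for n in range(nE):            # target energy index, E_n
        for k in range(nE):        # integration variable, E'_k
            m = n - k              # index of E_n - E'_k on the same grid
            if 0 <= m < nE:
                Sigma[n] += G_less[k] * W_less[m]   # element-wise (i,j) product
    return 1j * dE * Sigma
```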
The retarded, lesser, and greater screened interactions are calculated using
W^R(E) = [ I - VP^R(E) ]^-1 V,
W^≶(E) = W^R(E)P^≶(E)W^R^†(E).
The equations for the components of the irreducible polarization P in the energy domain read
P^≶_ij = - i ∫ dE' G^≶_ij(E')G^≷_ji(E'-E) ,
P_ij^R = -i ∫ dE' [ G^R_ij(E')G^<_ji(E'-E) + G^<_ij(E')G^A_ji(E'-E) ].
To compute Eq. (<ref>) the bare Coulomb matrix elements are needed.
The real-space representation of the MLWF is used to calculate the Coulomb matrix elements V'_ij:
V'_ij = ∫∫ d𝐫d𝐫'|ϕ_i(𝐫)|^2|ϕ_j(𝐫')|^2/|𝐫 - 𝐫'|.
In Eq. (<ref>), the ϕ_i's denote a single MLWF in a real-space basis.
The approach outlined in <cit.> is used: the electrostatic potential induced by one of the two wavefunction densities is first computed, and this potential, weighted by the second density, is then integrated over space. The matrix must be upscaled to the device structure. Since the wavefunctions obey lattice periodicity, off-diagonal blocks can be computed by shifting one wavefunction into a different cell and then computing the resulting integral (Fig. <ref>c).
To correct for the truncation scheme used in the scGW, the Coulomb matrix is embedded in a dielectric environment V=V' / ϵ.
The value of ϵ is chosen such that the equilibrium nanotube bandgap matches the one obtained from a VASP DFT + G_0W_0 calculation.
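The sketch below illustrates the Coulomb integral above with a direct double sum over a coarse real-space grid, using normalized Gaussians as stand-ins for the MLWF densities and dividing by the dielectric constant as described. Grid size, widths, and the softening of the 1/|r-r'| kernel at the grid scale are our own toy choices; the actual implementation relies on the electrostatic-potential-based evaluation referenced in the text.

```python
import numpy as np

n = 12                                             # coarse toy grid
axis = np.linspace(-4.0, 4.0, n)                   # Angstrom (illustrative)
coords = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)
dx = axis[1] - axis[0]
dV = dx**3

def wf_density(center, width=1.0):
    """|phi(r)|^2 modeled as a normalized Gaussian, standing in for an MLWF density."""
    rho = np.exp(-np.sum((coords - np.asarray(center))**2, axis=1) / width**2)
    return rho / (rho.sum() * dV)

def coulomb_element(rho_i, rho_j, eps_env=1.1):
    """V_ij = (1/eps_env) * double integral of rho_i(r) rho_j(r') / |r - r'| over r and r'."""
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kernel = 1.0 / np.maximum(dist, 0.5 * dx)      # soften the singularity at the grid scale
    return (dV**2) * (rho_i @ kernel @ rho_j) / eps_env

V_01 = coulomb_element(wf_density([0.0, 0.0, 0.0]), wf_density([1.4, 0.0, 0.0]))
```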
To fulfill the conservation laws, the scGW is performed within the self-consistent Born approximation (SCBA) (Fig. <ref>d). Only the diagonal elements of the polarization and of the GW self-energy are kept in this study to minimize the computational burden. However, we have verified that the inclusion of off-diagonal elements does not qualitatively alter our results. Once the convergence of SCBA is reached, several device observables (i.e., local density-of-states (LDOS), charge densities, and spectral current) can be computed. Additionally, it was confirmed that current conservation is satisfied for the converged solution along the transport direction of the structure.
§.§ e-e Scattering Rates
In the NEGF formalism, the energy-resolved in- (ℛ_e-e^in) and out-scattering (ℛ_e-e^out) rates are computed with
ℛ_e-e^in (E) = 1/2πħtr{Σ^<(E)G^>(E)}
ℛ_e-e^out (E) = 1/2πħtr{Σ^>(E)G^<(E)}.
To infer the inverse annihilation lifetimes of excited electron-hole pairs in an investigated Auger process, all quantities in Eqs. (<ref>)-(<ref>) are integrated over the relevant energy window and over the spatial dimension over which the process occurs. They are then normalized by the number of available electron-hole pairs in the considered energy range.
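A minimal sketch of this post-processing step is shown below. The array layout, the unit convention (energies in eV), and the assumption that the spatial selection has already been folded into the supplied matrices are ours for illustration.

```python
import numpy as np

HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

def scattering_rates(Sigma_less, Sigma_greater, G_less, G_greater):
    """Energy-resolved in-/out-scattering rates; inputs have shape (nE, norb, norb)."""
    r_in = np.einsum("eij,eji->e", Sigma_less, G_greater) / (2.0 * np.pi * HBAR_EV_S)
    r_out = np.einsum("eij,eji->e", Sigma_greater, G_less) / (2.0 * np.pi * HBAR_EV_S)
    return r_in, r_out

def inverse_pair_lifetime(r_in, r_out, energies, window, n_pairs):
    """Net transition rate within an energy window, normalized by the number of e-h pairs."""
    mask = (energies >= window[0]) & (energies <= window[1])
    net_rate = np.trapz((r_out - r_in)[mask].real, energies[mask])
    return net_rate / n_pairs
```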
§ RESULTS
A bandstructure comparison between DFT and MLWF is reported in Fig. <ref>. Excellent agreement is observed for the valence bands as well as the first 4 conduction bands, which were set as targets. Using ϵ=1.1, the bandgap as extracted from our scGW LDOS at equilibrium condition matches the one obtained with VASP G_0W_0 at 1.92 eV, as shown in Fig. <ref>. Figures <ref>b-c indicate that the increase in bandgap after scGW is accompanied by a change in the shape of the transmission function due to the GW self-energy. We then apply an external potential and drive the SWCN out-of-equilibrium. The Fermi levels are adjusted relative to the valence (VB) and conduction band (CB) edges to create PN-, NN- and PP-like structures. Figure <ref> reports the position- and energy-resolved current density for the SWCN at different biases and contact doping profiles. In the PN-like structure (Fig. <ref>), the electrical current is injected into the CB from the right contact. In the middle of the device carriers recombine with the available hole states in the valence band (Auger recombination), that come from the left contact. In the NN case (Fig. <ref>), high-energy electrons collide with low-energy electrons on the right side, exchanging energy during this process and giving rise to thermalization effects. Finally, in the PP case, depicted in Fig. <ref>, impact ionization can be observed due to the large bias applied. Electrons in the valence band release kinetic energy and excite other electrons to empty states in the conduction band.
The in- and out-scattering rates observed in the PN-like structure are further investigated in Figure <ref>. The red shaded area in the conduction band depicts all out-scattering electrons in the energy range. The blue shade above represents the in-scattered electrons inside the conduction band. Subtracting the blue area from the red yields the total number of transitions from conduction to valence band (Auger recombination). This number is then normalized by the total number of available electron-hole pairs.
The calculated Auger recombination (AR), impact ionization (II), and inverse electron-hole pair lifetimes from our simulations are summarized in Tab. <ref>.
The value in the AR case shows good agreement with time-resolved fluorescence measurements <cit.>.
§ CONCLUSIONS
We have shown that our recently developed ab initio scGW method can accurately model e-e interactions in SWCNs driven out-of-equilibrium. A very wide range of scenarios can be investigated, from Auger recombination to hot carrier relaxation and impact ionization. This method can be readily applied to any (quasi-)1D system, e.g., nanowires and nanoribbons. As the next step, off-diagonal elements will be added and the numerics will be improved to treat larger device structures.
§ ACKNOWLEDGMENT
This work was supported by the Swiss National Science Foundation (SNSF) under grant n^∘ 209358 (QuaTrEx). We acknowledge support from CSCS under project s1119. J.C. acknowledges funding from the European Union under the Marie Skłodowska-Curie grant No. 885893.
|
http://arxiv.org/abs/2307.04844v1 | 20230710183121 | THE K-RING OF E_6/Spin(10) | [
"Sudeep Podder",
"Parameswaran Sankaran"
] | math.KT | [
"math.KT",
"55N15, 19L99"
] |
|
http://arxiv.org/abs/2307.07218v1 | 20230714082125 | Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech Prompts | [
"Ziyue Jiang",
"Jinglin Liu",
"Yi Ren",
"Jinzheng He",
"Chen Zhang",
"Zhenhui Ye",
"Pengfei Wei",
"Chunfeng Wang",
"Xiang Yin",
"Zejun Ma",
"Zhou Zhao"
] | eess.AS | [
"eess.AS",
"cs.SD"
] |
Mega-TTS 2: Zero-Shot Text-to-Speech with Arbitrary Length Speech Prompts
==========================================================================
Zero-shot text-to-speech aims at synthesizing voices with unseen speech prompts. Previous large-scale multispeaker TTS models have successfully achieved this goal with an enrolled recording of less than 10 seconds. However, most of them are designed to utilize only short speech prompts. The limited information in short speech prompts significantly hinders the performance of fine-grained identity imitation. In this paper, we introduce Mega-TTS 2, a generic zero-shot multispeaker TTS model that is capable of synthesizing speech for unseen speakers with arbitrary-length prompts. Specifically, we 1) design a multi-reference timbre encoder to extract timbre information from multiple reference speeches; and 2) train a prosody language model with arbitrary-length speech prompts. With these designs, our model is suitable for prompts of different lengths, which extends the upper bound of speech quality for zero-shot text-to-speech. Besides arbitrary-length prompts, we introduce arbitrary-source prompts, which leverage the probabilities derived from multiple prosody language model outputs to produce expressive and controlled prosody. Furthermore, we propose a phoneme-level auto-regressive duration model to introduce in-context learning capabilities to duration modeling. Experiments demonstrate that our method could not only synthesize identity-preserving speech with a short prompt of an unseen speaker but also achieve improved performance with longer speech prompts. Audio samples can be found in <https://mega-tts.github.io/mega2_demo/>.
§ INTRODUCTION
In recent years, there has been remarkable progress in the development of text-to-speech (TTS) technology <cit.>. With the advancement in this field, advanced TTS systems are capable of synthesizing high-quality voices of single or multiple speakers <cit.>. However, the performance of these systems relies heavily on the quality and quantity of the data utilized during the training and fine-tuning phases. To reduce such a reliance, existing works leverage large-scale generative models trained on datasets at the scale of tens of thousands of hours to perform zero-shot TTS <cit.>. These powerful models can effectively synthesize speech given only a single speech prompt, eliminating the need for extensive data preparation.
However, previous works of zero-shot TTS are typically designed for short speech prompts. The information in the short speech prompt is insufficient to guide the zero-shot TTS systems to imitate the speaking style of a natural person perfectly. As shown in Table <ref>, the speaking style consists of various elements such as identity, pronunciation, and prosody, each requiring a different degree of prompt information. Given the limited information contained in short speech prompts, the ability to accurately mimic a speaker's speaking style is hindered. Motivated by this observation, we aim to improve current zero-shot text-to-speech models to support arbitrary-length speech prompts. Given an arbitrary-length speech clip as the prompt, our objective is to extract as much information as possible.
This paper introduces Mega-TTS 2, a zero-shot multi-speaker TTS system capable of handling arbitrary-length speech prompts. We begin by disentangling the speech into content, timbre, and prosody information. With these representations, we make the following contributions:
* To extend the supported prompt length, we train a language model that generates compressed discrete prosody codes auto-regressively with arbitrary-length speech prompts.
* To capture fine-grained timbre information, we design a multi-reference timbre encoder, which extracts timbre information from multiple reference speeches.
* To further improve the naturalness of duration modeling, we propose a phoneme-level auto-regressive duration model. By incorporating in-context learning capabilities into duration modeling, we enhance the overall naturalness of the generated speech.
* In addition to the arbitrary-length prompts, we propose the arbitrary-source prompts technology. Specifically, it is possible to generate prosody codes by utilizing prosody prompts from other speakers while maintaining the target speaker's timbre. By interpolating the probabilities from multiple prosody language models' outputs, expressive speech can be generated in a controlled manner.
To evaluate the zero-shot performance of Mega-TTS 2, we conduct experiments on LibriSpeech test-clean <cit.> datasets. Our results demonstrate that Mega-TTS 2 outperforms state-of-the-art zero-shot TTS systems <cit.> in terms of speaker similarity and speech naturalness when utilizing a one-sentence speech prompt. Notably, when the length of the prompt is further extended, the performance of our model is significantly improved. These findings highlight the potential benefits of incorporating longer speech prompts in zero-shot TTS systems.
§ BACKGROUND
In this section, we briefly introduce the application of in-context learning (ICL) in text-to-speech (TTS).
Recently, large language models <cit.> have demonstrated the ability to learn from a few prompts in the context, which is known as in-context learning <cit.>. Inspired by this, many methods <cit.> are proposed to use in-context learning in TTS and have achieved remarkable results in zero-shot TTS. In these methods, the text prompt often provides the content of the audio to be synthesized, while the acoustic prompt provides information about the timbre and prosody.
VALL-E <cit.> proposes the first neural codec language model for text-to-speech, exhibiting strong in-context learning for zero-shot speech generation. VALL-E X <cit.> extends VALL-E by introducing a language ID to support cross-lingual TTS and speech-to-speech translation. VIOLA <cit.> further extends VALL-E to all speech-processing tasks, including speech recognition, synthesis, and translation. NaturalSpeech 2 <cit.> uses continuous
vectors instead of discrete neural codec tokens and introduces in-context learning to latent diffusion model <cit.>. More recently, Mega-TTS <cit.> decomposes speech into several attributes and models each of them with appropriate inductive biases and in-context learning.
Although these methods have achieved promising performance, they generally use a single short acoustic prompt to provide prosody and timbre information, which hinders the model from obtaining more fine-grained prosody and timbre information from the acoustic prompt. In this work, we train a prosody language model with arbitrary-length speech prompts and propose an auto-regressive duration model with in-context learning to improve the prosody modeling. Furthermore, we design a multi-reference timbre encoder to extract more fine-grained timbre from speech prompts.
§ METHOD
In this section, we introduce the architecture of Mega-TTS 2, which is designed to synthesize speech for unseen speakers using arbitrary-length prompts. The overall architecture of our system is shown in Figure <ref>. To begin with, we adopt the VQ-GAN based TTS architecture as the foundation of our system, which consists of a content encoder, a timbre encoder, and a VQ prosody encoder following <cit.>. Next, in order to support in-context learning with arbitrary-length speech prompts, we introduce a prosody information language model (PLM) that can utilize arbitrary-length speech prompts and a multi-reference timbre encoder (MRTE) to extract fine-grained timbre information from multiple reference speeches. We further propose a phoneme-level auto-regressive duration model (ADM), which enhances the duration modeling by incorporating in-context learning capabilities. Additionally, we propose a prosody interpolation technique to improve the expressiveness of speech from the speaker who has a relatively flat speaking tone. In the subsequent subsections, we will provide detailed explanations of the design and training process of these components.
§.§ Utilizing Prompts of Arbitrary Length
Multi-reference timbre encoder. Our objective is to extract as much timbre information as possible from arbitrary-length speech prompts. Previous zero-shot TTS models introduce the timbre encoder with temporal average pooling before the output layer (e.g., Meta-stylespeech <cit.>, Grad-stylespeech <cit.>, and Mega-TTS <cit.>). However, these approaches assume that timbre is a global representation and ignore other useful time-variant timbre information. For instance, speakers can change the timbre by using different speaking techniques when they aim to express specific semantic meanings[The perception of timbre, including the frequency spectrum detail and envelope, is determined by the physical characteristics of sound <cit.>.]. To address this issue, we introduce a multi-reference timbre encoder (MRTE) that captures fine-grained timbre information from multiple reference speeches. As shown in Figure <ref>, our MRTE consists of a mel encoder, a mel-to-phoneme attention module, and a length regulator <cit.>. First, we encode the mel-spectrogram into acoustic hidden states H_mel. Subsequently, to extract semantically relevant timbre information from speech prompts, we introduce a mel-to-phoneme attention module. This module takes the phoneme-level content hidden states H_content as the query, and H_mel as both the key and the value. We also use a global timbre encoder (GE) to extract the time-invariant timbre information and concatenate it to the phoneme-level hidden states. Finally, we expand the phoneme-level hidden states to match the length of the target mel-spectrogram using the length regulator. The output spectrogram-level hidden states H_CT contain the content and fine-grained timbre information.
Training PLM with arbitrary-length prompts.
Unlike previous models that are trained with 3 to 10 seconds prompts, our model aims at capturing the speaker's prosody habits from prompts of any length effectively. During the training process, we concatenate all sentences from the same speaker along the time axis. In each batch, the maximum number of mel-spectrogram frames is set to 32,000. If the total number of speech frames from a single speaker is less than 32,000, we will include speech samples from other speakers in this batch and incorporate corresponding attention masks into PLM. We did not specifically define the speech prompt; instead, we train the language model directly using the concatenated speech samples through the teacher-forcing technique with the cross-entropy loss. The PLM is a decoder-only transformer-based architecture, which uses prosody codes 𝐮 from speech prompts and H_CT as the condition. The autoregressive prosody prediction process of PLM can be formulated as:
p(𝐮| H_CT; θ)=∏_t=0^T p(𝐮_t|𝐮_<t, H_CT; θ),
where θ is the parameter of our PLM. This training technique enables the model to capture the useful prosody-level information contained in the arbitrary-length prompt. Therefore, in the inference stage, users can flexibly improve the generation quality by enlarging the prompt length without further fine-tuning steps.
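A minimal sketch of the teacher-forcing objective over such speaker-concatenated sequences is given below. The PLM call signature, the way the mask is applied to the loss, and the assumption that cross-speaker attention masking is handled inside the model are ours; only the overall next-code cross-entropy training follows the description above.

```python
import torch
import torch.nn.functional as F

def plm_teacher_forcing_loss(plm, prosody_codes, cond_hidden, valid_mask):
    """Next-code cross-entropy for the prosody language model.

    plm          : decoder-only Transformer returning logits of shape (B, T, vocab);
                   assumed to apply the cross-speaker attention masks internally.
    prosody_codes: LongTensor (B, T) of discrete prosody codes u.
    cond_hidden  : conditioning hidden states H_CT, FloatTensor (B, T, D).
    valid_mask   : BoolTensor (B, T); False for positions filled with other speakers' frames.
    """
    logits = plm(prosody_codes[:, :-1], cond_hidden[:, :-1])
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        prosody_codes[:, 1:].reshape(-1),
        reduction="none",
    )
    weights = valid_mask[:, 1:].reshape(-1).float()
    return (loss * weights).sum() / weights.sum().clamp(min=1.0)
```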
§.§ Auto-Regressive Duration Model
Duration modeling has long been an important topic in speech synthesis. Previous models like FastSpeech 2 <cit.> train a non-autoregressive duration model to produce duration information and use a length regulator to expand phoneme sequence to the length of mel-spectrogram. Considering the duration information in prompts for zero-shot TTS, we propose a phoneme-level auto-regressive duration model (ADM) to enhance the duration modeling by incorporating strong in-context learning capabilities of auto-regressive models. The overall architecture of ADM keeps the same as PLM, but we use mean squared error (MSE) loss instead.
§.§ Prosody Interpolation
Figure: The illustration of prosody interpolation.
In the practical application of zero-shot TTS, we want to explore a new setting in which the prosodic style of the target speaker can be controlled or replaced while ensuring good adaptation to the target timbre. Under this setting, we can not only stay faithful to the prosodic style of the target speaker but also trade off faithfulness against diversity/expressiveness. This can be achieved by interpolating the probabilities from multiple PLM outputs obtained with prompts from multiple speakers. [Please note that although the PLM here absorbs multi-speaker voices, our timbre encoder only absorbs speech(es) from one target speaker.]
For example, suppose our target speaker has a relatively flat speaking tone, but we want to generate rhythmic prosody with this speaker's timbre. The solution is: 1) first extract the rhythmic prosody latent 𝐮_rhy from expressive speeches of other speakers and the flat prosody latent 𝐮_flat from the speech prompt. Then, 2) utilize two language models to separately decode the target speech with the prosodic prompts 𝐮_rhy and 𝐮_flat. These language models share the same parameters. In every step of the decoding process, the probability distributions of the two language models are interpolated with the weight γ. The prosody interpolation procedure can be formulated as:
p̃(𝐮) = ∏_t=0^T ( γ· p(𝐮_t|𝐮_<t, 𝐮_flat) + (1-γ) · p(𝐮_t|𝐮_<t, 𝐮_rhy) ),
With our prosody interpolation technique, users can freely enhance the expressiveness of the generated speech or choose to preserve the original rhythm.
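A decoding-step sketch of this interpolation is shown below. The PLM call signature and the greedy (top-1) selection are assumptions for illustration; only the weighted mixing of the two next-code distributions follows the equation above.

```python
import torch

@torch.no_grad()
def decode_with_prosody_interpolation(plm, u_flat, u_rhy, cond, gamma=0.5, max_len=400):
    """Auto-regressively decode prosody codes from two interpolated PLM distributions.

    plm is assumed to map (code sequence, condition) to logits of shape (B, T, vocab);
    u_flat / u_rhy are LongTensors (1, T_prompt) holding the two prosody prompts.
    """
    generated = torch.zeros(1, 0, dtype=torch.long)
    for _ in range(max_len):
        p_flat = torch.softmax(plm(torch.cat([u_flat, generated], dim=1), cond)[:, -1], dim=-1)
        p_rhy = torch.softmax(plm(torch.cat([u_rhy, generated], dim=1), cond)[:, -1], dim=-1)
        p_mix = gamma * p_flat + (1.0 - gamma) * p_rhy      # weighted interpolation
        next_code = p_mix.argmax(dim=-1, keepdim=True)      # top-1 sampling, as used at inference
        generated = torch.cat([generated, next_code], dim=1)
    return generated
```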
§ EXPERIMENTS
§.§ Experimental setup
Training datasets. The datasets used to train Mega-TTS 2 are listed in Table <ref>. We use 38k hours of multi-domain speeches in English and Chinese in total. To balance the English and Chinese corpora, we only use the first 10M speech clips in the LibriLight dataset. We follow VALL-E <cit.> to obtain the transcriptions of the LibriLight dataset. Besides, since the speech in GigaSpeech and WenetSpeech does not have speaker identities and multiple speakers may appear in a speech clip, we process the datasets with an open-source automatic speaker diarization model[<https://huggingface.co/pyannote/speaker-diarization>] <cit.>. We also extract the phoneme-level alignments with the external alignment tool[<https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner>].
Model configuration.
Our Mega-TTS 2 consists of three encoders, a prosody large language model, a mel decoder, and a discriminator. The prosody encoder, timbre encoder, and mel generator consist of 5 convolutional blocks with 512 hidden size and 5 convolution 1D kernel size. The content encoder is an 8-layer Transformer <cit.> with 2 attention heads, 512 embedding dimensions, 1024 1D convolution filter size, and 5 convolution 1D kernel size. The GAN discriminator follows the architecture proposed in <cit.>. The PLM model is a decoder-only architecture that contains 24 Transformer layers, 2048 hidden size, and 1 convolution 1D kernel size, which has 1.2B parameters. The duration predictor is an 8-layer decoder-only Transformer model with 512 hidden size.
Training and inference.
In the first training stage, we train the VQ-GAN TTS model on 8 NVIDIA A100 GPUs, with a batch size of 48 sentences on each GPU. And in the second stage, we train the PLM and ADM on 8 NVIDIA A100 GPUs, with a batch size of 5,000 phoneme tokens on each GPU. We use the Adam optimizer with β_1 = 0.9, β_2 = 0.98, ϵ = 10^-9 and follow the same learning rate schedule in <cit.>. It takes 600k steps for the VQ-GAN TTS model's training and 300K steps for the PLM and ADM's training until convergence. The predicted mel-spectrograms are transformed into audio samples using pre-trained HiFi-GAN V1[<https://github.com/jik876/hifi-gan>] <cit.>. In the inference stage, we use the top-1 sampling scheme to sample the result speeches.
Objective metrics. We evaluate the word error rate (WER) and speaker similarity (SIM) for zero-shot TTS. In terms of the cosine speaker similarity, we use the WavLM model <cit.> finetuned for speaker verification[<https://huggingface.co/microsoft/wavlm-base-plus-sv>] to compute the cosine speaker similarity score between the ground-truth speech and the synthesized speech. The similarity score is in the range of [-1,1], where a larger value indicates a higher similarity of input samples. We also evaluate the word error rate (WER) for cross-lingual TTS. We use the ASR system from the released HuBERT-Large model <cit.> to transcribe the generated speech into text. Then, the WER between the transcribed text and the original target text is measured. We use all samples in the test set for the objective evaluation.
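For reference, the speaker-similarity metric described above can be computed as in the sketch below, which follows the standard usage of the WavLM speaker-verification checkpoint in the transformers library; the helper name and the 16 kHz mono-waveform assumption are ours.

```python
import torch
from transformers import AutoFeatureExtractor, WavLMForXVector

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base-plus-sv")
model = WavLMForXVector.from_pretrained("microsoft/wavlm-base-plus-sv").eval()

def speaker_similarity(wav_ref, wav_syn, sr=16000):
    """Cosine similarity in [-1, 1] between speaker embeddings of two 1-D waveforms."""
    inputs = extractor([wav_ref, wav_syn], sampling_rate=sr, return_tensors="pt", padding=True)
    with torch.no_grad():
        emb = model(**inputs).embeddings
    emb = torch.nn.functional.normalize(emb, dim=-1)
    return float(torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=-1))
```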
Subjective metrics. We conduct the MOS (mean opinion score) evaluation on the test set to measure the audio naturalness via Amazon Mechanical Turk. We keep the text content and prompt speech consistent among different models to exclude other interference factors. We randomly choose 50 samples from the test set of each dataset for the subjective evaluation and each audio is listened to by at least 20 testers. We analyze the MOS in three aspects: MOS-Q (Quality: clarity, high-frequency, and original timbre reconstruction), MOS-P (Prosody: naturalness of pitch, and duration), and MOS-S (Speaker similarity). We also analyze the CMOS in terms of audio quality and speech prosody. We tell the tester to focus on one corresponding aspect and ignore the other aspect when scoring.
§.§ Results of Zero-Shot Speech Synthesis
In this subsection, we evaluate our model with various lengths of speech prompts and compare our model with several zero-shot TTS baselines to demonstrate the effectiveness of expanding the length of speech prompts.
Baselines.
We compare the zero-shot speech synthesis performance of Mega-TTS 2 with three baseline systems, including: 1)YourTTS <cit.>, a powerful zero-shot TTS model trained on 1k hours of speech dataset. We use the official code and released checkpoint[<https://github.com/Edresson/YourTTS>]; 2) VALL-E <cit.>, a large-scale zero-shot TTS model using the audio codec model to generate discrete speech codes and LLM to generate them. For VALL-E, we directly download the first 16 utterances from the VALL-E demo page. The audio samples consist of 8 samples from LibriSpeech and 8 samples from VCTK[VALL-E does not release its code officially. The unofficial implementations and our implementation are deficient, which would make us difficult to fairly compare our system with VALL-E.]. 3) Mega-TTS<cit.>, a large-scale zero-shot TTS model that decomposes speech into several
attributes and models each of them with appropriate inductive biases and in-context learning. For each sample synthesis, we randomly choose another utterance of the same speaker and crop a 3-seconds speech segment as the speech prompt.
Analysis.
As shown in Table <ref>, when we only use 1 sentence as the prosody prompt and duration prompt, and 1 second of speech as the timbre prompt, our method achieves a higher speech quality but lower speaker similarity and prosody naturalness than Mega-TTS and VALL-E. However, when the length of speech prompt increases, the speaker similarity and prosody naturalness of Mega-TTS 2 significantly increase, proving the effectiveness of expanding the length of prompts.
§.§ Results of Ablation Studies
We conduct ablation studies to demonstrate the effectiveness of the designs in Mega-TTS 2, including the auto-regressive duration model (ADM) and the multi-reference timbre encoder (MRTE). In these experiments, all models use 50 sentences as the prompts for prosody and duration, and 10 seconds of speech as the prompt for timbre. The results are presented in Table <ref>. We can see that MOS-P drops when replacing our ADM with the duration predictor proposed in FastSpeech 2 <cit.>, indicating that the in-context learning capability of ADM can enhance the prosody modeling of zero-shot TTS systems. Additionally, when we substitute the MRTE with the speaker encoder proposed in Grad-stylespeech <cit.>, MOS-S drops to 3.93, demonstrating the effectiveness of the proposed MRTE.
§ CONCLUSIONS
In this paper, we present Mega-TTS 2, the first large-scale zero-shot TTS model that efficiently utilizes speech prompts of arbitrary lengths. Through our proposed prosody language model with arbitrary-length speech prompts, multi-reference timbre encoder, and duration model with in-context learning, Mega-TTS 2 not only achieves state-of-the-art performance with given short speech prompts but also produces better results with longer speech prompts. Furthermore, we also propose the arbitrary-source prompts algorithm to generate prosody codes by interpolating the
probabilities from multiple prosody language models’ outputs, while maintaining the target speaker’s timbre. For future work, we will explore more efficient models such as RVQ-VAE <cit.> to enhance the reconstruction quality and attempt to disentangle the acoustic environments or background noises in speech prompts. We will also expand the dataset size to 1,000k hours of speech to enable more powerful zero-shot capability.
|
http://arxiv.org/abs/2307.04027v1 | 20230708182951 | Slow-roll inflation and growth of perturbations in Kaniadakis Cosmology | [
"Gaetano Lambiase",
"Giuseppe Gaetano Luciano",
"Ahmad Sheykhi"
] | gr-qc | [
"gr-qc",
"hep-ph",
"hep-th"
] | |
http://arxiv.org/abs/2307.04654v1 | 20230710155406 | Poles and zeros of electromagnetic quantities in photonic systems | [
"Felix Binkowski",
"Fridtjof Betz",
"Rémi Colom",
"Patrice Genevet",
"Sven Burger"
] | physics.optics | [
"physics.optics",
"cond-mat.mes-hall",
"physics.comp-ph"
] |
Université Côte d’Azur, CNRS, CRHEA, 06560 Valbonne, France
Université Côte d’Azur, CNRS, CRHEA, 06560 Valbonne, France
Physics Department, Colorado School of Mines, Golden, Colorado 80401, USA
We present an approach to investigate poles and zeros in resonant photonic systems.
The theory is based on contour integration of electromagnetic quantities
and allows to compute the zeros, to extract their sensitivities with respect to geometrical or other parameters,
and to perform modal expansions in the complex frequency plane.
The approach is demonstrated using an example from the field of nanophotonics, an illuminated
metasurface, where the emergence of reflection zeros due to the underlying resonance poles is explored.
Poles and zeros of electromagnetic quantities in photonic systems
Sven Burger
August 12, 2023
=================================================================
In the field of photonics, light-matter interactions can be tuned by exploiting resonance phenomena.
Examples include tailoring quantum entanglement with atoms and photons in cavities <cit.>,
probing single molecules with ultrahigh sensitivity <cit.>, and
realizing efficient single-photon sources <cit.>.
While electromagnetic observables are measured at real-valued excitation frequencies,
the concept of resonances intrinsically considers
the complex frequency plane <cit.>.
Resonance frequencies are complex-valued as the systems exhibit losses, e.g.,
due to interaction with the environment.
Excitation of the systems close to the resonance frequencies,
which are the poles of the electromagnetic field, leads to highly increased field values.
An important figure of merit is the quality factor of a resonance,
which is defined as the scaled ratio of real and imaginary part of the corresponding pole,
and which can represent the relation between stored
and dissipated electromagnetic field energy of the resonance <cit.>.
Resonances can also serve as a basis for the expansion of electromagnetic quantities.
Although most nanophotonic systems support many resonances, often only a few
resonances are sufficient to determine the optical
response in the real-valued frequency range of interest <cit.>.
The design of photonic components
can be greatly simplified by determining the complex frequency response of the photonic systems.
Controlling the relative locations of complex-valued poles and zeros of the scattering
matrix or of the transmission or reflection coefficients
can serve as a basis to tailor the corresponding optical response.
This kind of approach has long been used to design electronic systems <cit.>.
For example, all pass filters, i.e., systems whose response amplitude remains
constant when the excitation frequency is varied, have poles
and zeros which are complex conjugated <cit.>.
Other examples are minimum-phase systems, where the zeros have to be restricted
to the lower part of the complex plane <cit.>.
In photonics, recently,
it has been shown that a 2π-phase gradient of the reflection or
transmission output channel of a metasurface can be realized when a pair of pole and zero
is separated by the real axis in the complex frequency plane <cit.>.
Moreover, the zeros can have arbitrarily small imaginary parts,
i.e., the analysis of the locations of the zeros is extremely relevant
to design the response of the photonic systems at real frequencies.
Total absorption of light or perfect coherent absorption occurs when zeros of the
scattering matrix are on the real axis <cit.>.
Reflection zeros are also exploited for phase-sensitive detection with
nanophotonic cavities in biosensing applications <cit.>.
While in many electronic systems the determination of poles and zeros of the transfer matrix
may be done analytically, this is often not possible for photonic structures, such as metasurfaces.
To compute reflection and transmission zeros of scattering matrices of specific systems,
it has been proposed to solve Maxwell's equations as an eigenproblem with
appropriately modified boundary conditions <cit.>.
In this work, we present an approach for the study of poles and zeros in arbitrary photonic systems.
The theory is based on contour integration of electromagnetic quantities
which allows to extract also sensitivities of the poles and zeros,
i.e., their evolution in the complex frequency plane depending on chosen parameters can be analyzed.
A numerical realization is used to demonstrate the approach.
The poles and the reflection zeros of a
metasurface and their sensitivities with respect to geometrical parameters are computed.
Furthermore, a modal expansion in the complex frequency plane is introduced to investigate
the appearance of the reflection zeros through the interference of modal contributions.
Singularities and contour integration.—In the steady-state regime,
light scattering in a material system can be described
by the time-harmonic Maxwell's equation in second-order form,
∇×μ^-1 ∇×𝐄 -
ω_0^2ϵ𝐄 =
iω_0𝐉,
where 𝐄(𝐫,ω_0) ∈ℂ^3 is the electric field,
𝐉(𝐫)∈ℂ^3 is
the electric current density describing a light source,
ϵ(𝐫,ω_0) and μ(𝐫,ω_0) are the
permittivity and permeability tensors, respectively,
𝐫∈ℝ^3 is the position, and ω_0∈ℝ is the angular frequency.
Electromagnetic quantities Q(𝐄(𝐫,ω_0)) ∈ℂ, such as
reflection or transmission coefficients,
are typically measured experimentally at real excitation frequencies ω_0.
However,
to obtain deeper insights into light-matter interactions,
an investigation of the optical response for complex
frequencies ω∈ℂ is essential.
For this, we consider the analytical continuation of
Q(𝐄(𝐫,ω_0)) into the complex frequency plane,
which we denote by q(ω) ∈ℂ as a short notation of q(𝐄(𝐫,ω)).
Figure <ref>(a) shows an example from the field of nanophotonics,
a dielectric metasurface <cit.>. Illumination of the metasurface by a plane wave
with the optical frequency ω_0 yields a physical observable Q(ω_0).
The singularities of its analytical continuation q(ω)
and the singularities of q(ω)^-1 are of special interest and
can be used to investigate the properties of the metasurface.
The singularities of q(ω) are the
poles ω^pole of the physical quantity q(ω).
The associated electric fields are so-called resonances or quasinormal modes.
The singularities of q(ω)^-1 are the zeros ω^zero of q(ω).
The associated electric fields lead to q(ω^zero) = 0.
Figure <ref>(b) sketches the complex frequency plane with exemplary locations of a pole and a zero.
By using Cauchy's integral theorem for a contour C which
encloses one simple pole ω^pole
and (or) one simple zero ω^zero of the quantity q(ω), as sketched in Fig. <ref>(b),
ω^pole and ω^zero are given by
ω^pole = ∮_C ω q(ω) dω / ∮_C q(ω) dω    and    ω^zero = ∮_C ω q(ω)^-1 dω / ∮_C q(ω)^-1 dω,
respectively.
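As a minimal numerical sketch of this relation (not part of the referenced software; test function, contour, and quadrature are purely illustrative assumptions), the pole and the zero of a rational function can be recovered by evaluating the two contour integrals with the trapezoidal rule on a circle:

import numpy as np

# Illustrative test function with one simple pole and one simple zero inside C.
omega_pole_exact = 1.0 - 0.05j
omega_zero_exact = 1.2 - 0.01j
q = lambda w: (w - omega_zero_exact) / (w - omega_pole_exact)

# Circular integration contour C, discretized with the trapezoidal rule.
center, radius, N = 1.1 - 0.03j, 0.3, 64
theta = 2 * np.pi * np.arange(N) / N
w = center + radius * np.exp(1j * theta)    # integration points on C
dw = 1j * radius * np.exp(1j * theta)       # d omega / d theta

def contour_ratio(f):
    """Evaluate (contour integral of omega*f) / (contour integral of f) on C."""
    return np.sum(w * f(w) * dw) / np.sum(f(w) * dw)

print("pole:", contour_ratio(q))                     # approx. 1.0 - 0.05j
print("zero:", contour_ratio(lambda x: 1.0 / q(x)))  # approx. 1.2 - 0.01j

The common quadrature prefactors cancel in the ratio, so the weights need not be normalized; for smooth periodic integrands the trapezoidal rule converges rapidly in N.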
The locations of M poles ω^pole_m
inside a contour C are given by the eigenvalues ω_m of the
generalized eigenproblem <cit.>
H^< X = H X Ω,
where Ω = diag(ω_1,…,ω_M)
is a diagonal matrix containing the eigenvalues, the columns of the
matrix X ∈ℂ^M× M are the eigenvectors, and
H = [ s_0 ⋯ s_{M-1}; ⋮ ⋱ ⋮; s_{M-1} ⋯ s_{2M-2} ],    H^< = [ s_1 ⋯ s_M; ⋮ ⋱ ⋮; s_M ⋯ s_{2M-1} ]
are Hankel matrices with the contour-integral-based coefficients
s_k = (1/2πi) ∮_C ω^k q(ω) dω.
The zeros ω^zero_m inside the contour are also given in this way, except that
the quantity q(ω)^-1 is considered for the coefficients instead of q(ω).
Note that this type of approach has inspired a family of numerical methods
to reliably evaluate all zeros and poles in a given bounded domain.
The methods are an active area of research in numerical mathematics, where, e.g., numerical stability,
error bounds, and adaptive subdivision schemes are investigated <cit.>.
To compute poles and zeros, the coefficients of the Hankel
matrices can be approximated by numerical integration <cit.>,
where the quantity of interest q(ω) is calculated
by computing 𝐄(𝐫,ω) for complex frequencies
on the integration contour C.
The electric field 𝐄(𝐫,ω) can be obtained by numerically
solving Maxwell's equation given in Eq. (<ref>).
The quantity q(ω)^-1 is immediately available by inverting the scalar quantity q(ω).
Computing the different contour integrals for each of the coefficients requires no additional computational
effort since the quantity q(ω) needs to
be calculated only once for each of the integration points.
The integrands differ only in the weight functions ω^k.
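The following sketch illustrates the procedure for a test function with two prescribed poles (all values are invented for illustration): the moments s_k are computed by the trapezoidal rule on a circular contour, the two Hankel matrices are assembled, and the generalized eigenvalues are obtained with a standard eigensolver.

import numpy as np
from scipy.linalg import eigvals

# Illustrative test function: two simple poles plus a smooth background.
poles = np.array([0.95 - 0.04j, 1.15 - 0.08j])
residues = np.array([0.7 + 0.1j, -0.3 + 0.2j])
q = lambda w: sum(a / (w - p) for a, p in zip(residues, poles)) + 0.5

# Trapezoidal rule on a circular contour C enclosing both poles.
center, radius, N = 1.05 - 0.05j, 0.35, 128
theta = 2 * np.pi * np.arange(N) / N
w = center + radius * np.exp(1j * theta)
dw = 1j * radius * np.exp(1j * theta) * (2 * np.pi / N)

M = 2  # number of poles sought inside C
s = [np.sum(w**k * q(w) * dw) / (2j * np.pi) for k in range(2 * M)]  # moments s_k

H      = np.array([[s[i + j]     for j in range(M)] for i in range(M)])  # Hankel matrix
H_less = np.array([[s[i + j + 1] for j in range(M)] for i in range(M)])  # shifted Hankel matrix

print(np.sort_complex(eigvals(H_less, H)))  # generalized eigenvalues, approx. the poles

Replacing q by q^-1 in the moments yields the zeros in the same way.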
Information on the numerical realization can be found in Ref. <cit.>.
Further, the data publication <cit.> contains software
for reproducing the results of this work based on an interface to the finite-element-based
Maxwell solver JCMsuite.
Poles and reflection zeros of a metasurface.—We apply this
approach to determine the poles ω^pole_m and reflection zeros
ω^zero_m of the metasurface sketched in Fig. <ref>(a).
Figure <ref>(a) shows
the geometry of the nanostructures forming the metasurface,
including the parameters chosen for the numerical simulation.
The metasurface is illuminated by a plane wave at normal incidence from above.
For the investigation of the reflected electric field, we consider the Fourier transform
of 𝐄(𝐫,ω_0) <cit.>. Due to sub-wavelength periodicity,
the resulting upward propagating Fourier spectrum consists of only one term,
the zero-order diffraction coefficient Q(ω_0).
Solving the generalized eigenproblem given by Eq. (<ref>) with
the analytical continuation q(ω) of Q(ω_0)
gives the poles ω^pole_m and the reflection
zeros ω^zero_m of the illuminated metasurface.
We emphasize that Eq. (<ref>) provides an expression
for both poles and reflection zeros, and that
its numerical implementation does not pose any difficulties.
Figure <ref>(b) shows the integration contour C and the computed poles and zeros.
The contour-integral-based coefficients of the Hankel matrices in Eq. (<ref>)
allow one to apply the approach of direct differentiation <cit.>.
When the Fourier coefficients q(ω̂_k) are calculated
at the integration points ω̂_k on the contour C,
their sensitivities ∂ q/ ∂ p with respect to geometry, material, or source
parameters p can also be evaluated without significant additional computational effort.
The sensitivities of the zeros can be extracted in the same way
as those of the poles <cit.>.
Figure <ref>(c) sketches the sensitivities ∂ω^pole_1/ ∂ p_k
and ∂ω^zero_1/ ∂ p_k with respect
to the upper radius p_1 and the height p_2 of the silicon cones of the metasurface.
With 64 integration points, it is possible to compute poles,
zeros, and their sensitivities with high accuracy; see Table <ref>.
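To illustrate the idea behind the sensitivity extraction in the simplest setting (a single pole inside the contour), note that the contour-integral expression then reduces to ω^pole = s_1/s_0, so that the quotient rule gives the parameter derivative of the pole once ∂q/∂p is known at the integration points. The sketch below uses an invented test function whose pole depends linearly on a parameter p; it only mirrors the direct-differentiation idea and is not the actual finite-element computation.

import numpy as np

# Illustrative parameter-dependent test function with a single pole at omega0 + c*p.
omega0, c, p = 1.0 - 0.05j, 0.4 + 0.1j, 0.2
pole  = lambda p: omega0 + c * p
q     = lambda w, p: 1.0 / (w - pole(p))
dq_dp = lambda w, p: c / (w - pole(p))**2   # derivative of q with respect to p

# Trapezoidal rule on a circular contour enclosing the pole.
center, radius, N = pole(p), 0.3, 64
theta = 2 * np.pi * np.arange(N) / N
w = center + radius * np.exp(1j * theta)
dw = 1j * radius * np.exp(1j * theta)

s0, s1   = np.sum(q(w, p) * dw),     np.sum(w * q(w, p) * dw)
ds0, ds1 = np.sum(dq_dp(w, p) * dw), np.sum(w * dq_dp(w, p) * dw)

omega_pole = s1 / s0                        # pole location from the moment ratio
domega_dp  = (ds1 * s0 - s1 * ds0) / s0**2  # quotient rule applied to s1/s0
print(omega_pole, domega_dp)                # approx. pole(p) and approx. c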
Modal expansion in the complex frequency plane.—The residues
a_m = 1/2 π i∮_C_m q(ω)dω,
where C_m are contours enclosing the single eigenvalues ω_m from Eq. (<ref>),
can be used as a selection criterion for meaningful eigenvalues ω_m.
Eigenvalues with large a_m are prioritized, while ω_m with small a_m are likely
to be unphysical eigenvalues because either M is chosen larger than the actual number of eigenvalues
within the contour or they are not significant with respect to the quantity of interest.
Correspondingly, the choice of a specific source in Eq. (<ref>) allows one to address only a subset of the
eigenvalues of the considered physical system <cit.>.
Note that, for simple eigenvalues, the residues are
directly available, given by diag(a_1,…,a_M) = X^T H X,
where X is suitably scaled <cit.>.
Moreover, with the poles ω^pole_m
and the corresponding residues a_m, the modal expansion
of the Fourier coefficient,
q(ω) = ∑_{m=1}^M q_m(ω) + q_bg(ω),    q_m(ω) = -a_m/(ω^pole_m - ω),    q_bg(ω) = (1/2πi) ∮_C q(ξ)/(ξ - ω) dξ,
can be performed, where
q_m(ω) are Riesz-projection-based modal contributions and q_bg(ω) is the
background contribution <cit.>.
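The modal expansion can be checked numerically along the following lines (test function, poles, and contours are invented for illustration; in practice the poles would come from the Hankel eigenproblem above): the residues a_m are evaluated with small contours C_m, the background term with the Cauchy-type integral over C, and the sum of all contributions is compared with q(ω) at a frequency inside C.

import numpy as np

def cauchy_integral(f, center, radius, N=128):
    """(1/(2*pi*i)) times the contour integral of f over a circle, trapezoidal rule."""
    theta = 2 * np.pi * np.arange(N) / N
    xi = center + radius * np.exp(1j * theta)
    dxi = 1j * radius * np.exp(1j * theta) * (2 * np.pi / N)
    return np.sum(f(xi) * dxi) / (2j * np.pi)

# Illustrative test function: two simple poles plus a smooth background term.
poles = [0.95 - 0.04j, 1.15 - 0.08j]
q = lambda w: (0.7 + 0.1j) / (w - poles[0]) + (-0.3 + 0.2j) / (w - poles[1]) + 0.1 * w

center, radius = 1.05 - 0.05j, 0.35   # outer contour C
omega = 1.02 - 0.02j                  # evaluation frequency inside C

# Residues a_m from small contours C_m around the poles.
a = [cauchy_integral(q, p, 0.02) for p in poles]

# Modal contributions q_m and background contribution q_bg as defined above.
q_modal = sum(-a_m / (p - omega) for a_m, p in zip(a, poles))
q_bg = cauchy_integral(lambda xi: q(xi) / (xi - omega), center, radius)
print(q_modal + q_bg, q(omega))       # both values should agree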
Figure <ref>(a) shows the phase distribution Arg(q(ω)) of the electric field
reflected from the metasurface shown in Fig. <ref>(a).
This is obtained by evaluating the modal expansion given by Eq. (<ref>)
for the contour C shown in Fig. <ref>(b).
A phase retardation of 2π for a real frequency scan,
which is often required for the design of metasurfaces,
is obtained when a pair of pole ω^pole_m and zero ω^zero_m
is separated by the real axis <cit.>.
Figure <ref>(b) shows Arg(q_1(ω)) corresponding to the pole ω^pole_1 and
Fig. <ref>(c) shows Arg(∑_m=2^4 q_m(ω) + q_bg(ω)).
In particular, it can be observed that the zero ω^zero_1 does not appear for the
modal contribution q_1(ω), but it emerges due to interference
with the other contributions, i.e., when
∑_m=2^4 q_m(ω) + q_bg(ω)
is added to q_1(ω).
Conclusion.—We presented a theoretical formulation to determine the locations of complex-valued
singularities, including poles and zeros, of any electromagnetic quantity in photonic systems.
The zeros can be determined by contour integration,
in the same way as the poles corresponding to resonances can be computed.
We also presented modal expansions in the complex frequency plane of the phase
of the field reflected from a metasurface, where the total expansion validated the
computed reflection zeros.
The different modal contributions give insight into the emergence of the reflection zeros
by interference of various expansion terms.
Furthermore, computation of partial derivatives of the reflection zeros was demonstrated.
The approach can easily be transferred to other physical systems
supporting resonances, e.g., to quantum mechanics and acoustics.
The theory essentially relies on detecting singularities of meromorphic functions
in the complex plane.
Therefore, it can be easily extended to compute other
quantities, e.g., transmission zeros, scattering cross sections
of isolated particles, or maximal chiral response of nanoassemblies.
The real-frequency response of metasurfaces can in many cases be
significantly impacted by reflection and transmission zeros,
since these typically lie close to the real axis or can even cross
the real axis with slight parameter variations.
Therefore, a precise quantification of the sensitivities of
reflection and transmission zeros or also of other physical quantities
is essential for gradient-based optimization of
photonic metasurfaces or other resonant or non-resonant systems.
We expect that the presented theory will enable new computer-aided design approaches.
Supplementary data tables and source code for the numerical experiments
for this work can be found in the open access data publication <cit.>.
We acknowledge funding
by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)
under Germany's Excellence Strategy - The Berlin Mathematics Research
Center MATH+ (EXC-2046/1, project ID: 390685689),
by the German Federal Ministry of Education and Research
(BMBF Forschungscampus MODAL, project 05M20ZBM),
and by the European Innovation Council (EIC) project TwistedNano
(grant agreement number Pathfinder Open 2021-101046424).
[1] J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73, 565 (2001).
[2] S. Nie and S. R. Emory, Science 275, 1102 (1997).
[3] P. Senellart, G. Solomon, and A. White, Nat. Nanotechnol. 12, 1026 (2017).
[4] P. Lalanne, W. Yan, K. Vynck, C. Sauvan, and J.-P. Hugonin, Laser Photonics Rev. 12, 1700113 (2018).
[5] S. Dyatlov and M. Zworski, Mathematical Theory of Scattering Resonances (American Mathematical Society, Providence, Rhode Island, 2019).
[6] T. Wu, M. Gurioli, and P. Lalanne, ACS Photonics 8, 1522 (2021).
[7] C. Sauvan, T. Wu, R. Zarouf, E. A. Muljarov, and P. Lalanne, Opt. Express 30, 6846 (2022).
[8] A. Nicolet, G. Demésy, F. Zolla, C. Campos, J. E. Roman, and C. Geuzaine, Eur. J. Mech. A Solids 100, 104809 (2023).
[9] C. Desoer and J. Schulman, IEEE Trans. Circuits Syst. 21, 3 (1974).
[10] A. Oppenheim and G. Verghese, Signals, Systems and Inference, Global Edition (Pearson, 2017).
[11] S. Butterworth et al., Wirel. Eng. 7, 536 (1930).
[12] J. Bechhoefer, Am. J. Phys. 79, 1053 (2011).
[13] R. Colom, E. Mikheeva, K. Achouri, J. Zuniga-Perez, N. Bonod, O. J. F. Martin, S. Burger, and P. Genevet, Laser Photonics Rev. 17, 2200976 (2023).
[14] M. Hutley and D. Maystre, Opt. Commun. 19, 431 (1976).
[15] Y. D. Chong, L. Ge, H. Cao, and A. D. Stone, Phys. Rev. Lett. 105, 053901 (2010).
[16] D. Maystre, C. R. Phys. 14, 381 (2013).
[17] K. V. Sreekanth, S. Sreejith, S. Han, A. Mishra, X. Chen, H. Sun, C. T. Lim, and R. Singh, Nat. Commun. 9, 369 (2018).
[18] V. G. Kravets, A. V. Kabashin, W. L. Barnes, and A. N. Grigorenko, Chem. Rev. 118, 5912 (2018).
[19] V. Grigoriev, A. Tahri, S. Varault, B. Rolly, B. Stout, J. Wenger, and N. Bonod, Phys. Rev. A 88, 011803(R) (2013).
[20] A.-S. Bonnet-Ben Dhia, L. Chesnel, and V. Pagneux, Proc. R. Soc. A 474, 20180050 (2018).
[21] W. R. Sweeney, C. W. Hsu, and A. D. Stone, Phys. Rev. A 102, 063511 (2020).
[22] E. Mikheeva, R. Colom, K. Achouri, A. Overvig, F. Binkowski, J.-Y. Duboz, S. Cueff, S. Fan, S. Burger, A. Alu, and P. Genevet, Optica Open, preprint (2023), doi:10.1364/opticaopen.22828976.v1.
[23] A. P. Austin, P. Kravanja, and L. N. Trefethen, SIAM J. Numer. Anal. 52, 1795 (2014).
[24] L. M. Delves and J. N. Lyness, Math. Comp. 21, 543 (1967).
[25] P. Kravanja and M. Van Barel, Computing the Zeros of Analytic Functions, Lect. Notes Math. 1727 (Springer, New York, 2000).
[26] H. Chen, J. Comput. Appl. Math. 402, 113796 (2022).
[27] L. N. Trefethen and J. Weideman, SIAM Rev. 56, 385 (2014).
[28] F. Betz, F. Binkowski, and S. Burger, SoftwareX 15, 100763 (2021).
[29] F. Binkowski, F. Betz, R. Colom, P. Genevet, and S. Burger, Source code and simulation results: Poles and zeros of electromagnetic quantities in photonic systems, Zenodo (2023), doi:10.5281/zenodo.8063932.
[30] L. Novotny and B. Hecht, Principles of Nano-Optics, 2nd ed. (Cambridge University Press, Cambridge, 2012).
[31] F. Binkowski, F. Betz, M. Hammerschmidt, P.-I. Schneider, L. Zschiedrich, and S. Burger, Commun. Phys. 5, 202 (2022).
[32] L. Zschiedrich, F. Binkowski, N. Nikolay, O. Benson, G. Kewes, and S. Burger, Phys. Rev. A 98, 043806 (2018).
|
http://arxiv.org/abs/2307.04483v1 | 20230710111105 | Towards Hypersemitoric Systems | [ "Tobias Våge Henriksen", "Sonja Hohloch", "Nikolay N. Martynchuk" ] | math.SG | [ "math.SG", "37J35 53D20 70H06" ] |
Towards Hypersemitoric Systems
Tobias Våge Henriksen, Sonja Hohloch, and Nikolay N. Martynchuk
===============================================================
This survey gives a short and comprehensive introduction to a class of finite-dimensional integrable systems known as hypersemitoric systems, recently introduced by Hohloch and Palmer in connection with the solution of the problem of how to extend Hamiltonian circle actions on symplectic 4-manifolds to integrable systems with `nice' singularities.
The quadratic spherical pendulum, the Euler and Lagrange tops (for generic values of the Casimirs), coupled-angular momenta, and the coupled spin oscillator system are all examples of hypersemitoric systems.
Hypersemitoric systems are a natural generalization of so-called semitoric systems (introduced by Vũ Ngọc) which in turn generalize toric systems. Speaking in terms of bifurcations, semitoric systems are `toric systems with/after supercritical Hamiltonian-Hopf bifurcations'. Hypersemitoric systems are `semitoric systems with, among others, subcritical Hamiltonian-Hopf bifurcations'.
Whereas the symplectic geometry and spectral theory of toric and semitoric systems are by now very well developed, the theory of hypersemitoric systems is still taking shape. This short survey introduces the reader to this developing theory by presenting the necessary notions and results as well as its connections to other areas of mathematics and mathematical physics.
§ INTRODUCTION
Integrable Hamiltonian systems play an important role in the mathematical and physical sciences. For instance, within celestial mechanics, there is the Kepler problem, and, within quantum mechanics, there is the Jaynes-Cummings model, which are both integrable. Integrable systems are very special dynamical systems exhibiting regular (as opposed to chaotic) behaviour in the sense that there exists a maximal number of (independent, see Definition <ref>) integrals of motion, allowing one, at least in principle, to integrate the equations of motion.
The dynamics of a finite-dimensional integrable Hamiltonian system, defined by means of a proper momentum map (see Definition <ref>), is generically constrained to n-dimensional tori, where n is the number of degrees of freedom. These tori turn out to be Lagrangian submanifolds of the underlying symplectic manifold on which the Hamiltonian system is defined, and thus an integrable system can be seen as a singular Lagrangian torus fibration over a certain subset of ℝ^n, see in particular the papers by Mineur <cit.>, Arnol'd <cit.>, Weinstein <cit.> and Duistermaat <cit.>. This motivates one to study integrable systems using techniques from symplectic geometry.
The singular fibres of these singular Lagrangian torus fibrations reflect a non-trivial geometric or dynamical property of the underlying integrable system. The most prominent examples being the monodromy around a focus-focus point and bifurcations of Liouville tori, which we will address below.
In the context of symplectic classification of integrable systems it is known how to classify a number of different types of such (`typical') singularities: a saddle singularity (in one degree of freedom) by Dufour, Molino, and Toulet <cit.>, an elliptic singularity (in any dimension) by Eliasson <cit.>, a focus-focus singularity (in dimension 2) by <cit.>, and a parabolic singularity by Bolsinov, Guglielmi, and Kudryavtseva <cit.> and Kudryavtseva and Martynchuk <cit.>. See also the recent breakthrough results concerning symplectic classification in the real-analytic category by Kudryavtseva <cit.> and by Kudryavtseva and Oshemkov <cit.>.
In the context of global classification of integrable systems, Pelayo and Vũ Ngọc <cit.> showed that a large class of physically important systems known as semitoric systems are classified by a set of 5 invariants. This is one of the few known explicit results in the global symplectic classification of integrable systems, apart from the classical Delzant's <cit.> construction and the work of Zung <cit.> relating the semi-local (i.e. in a neighbourhood of a singular fibre) and global classification problems. We refer to Sections <ref> and <ref> for more details on semitoric systems.
What is currently missing in the literature is a detailed discussion of systems beyond semitoric type: Whereas the topological
classification of such systems is a well-developed theory going back to Duistermaat and to Fomenko and Zieschang (see e.g. Bolsinov and Fomenko <cit.> and the references therein), a more refined (e.g. symplectic) analysis is currently an open problem for in fact the majority of such systems. In particular, what is missing is a detailed analysis of a generalisation of semitoric systems additionally allowing hyperbolic-regular, hyperbolic-elliptic, and parabolic points, known as hypersemitoric systems. The latter class was introduced by Hohloch and Palmer
<cit.> in connection with the problem of extending Hamiltonian circle actions on symplectic 4-manifolds to integrable systems, which they solved within this class of systems, see Hohloch and Palmer <cit.> for details.
Hypersemitoric systems thus present a challenging platform for further study by both geometers and analysts, and this survey is devised as a quick introduction.
Nevertheless, note that the class of hypersemitoric systems does not include all possible singularities that may arise in 4-dimensional integrable systems: the underlying global S^1-action prevents the existence of hyperbolic-hyperbolic singularities; moreover, the definition of hypersemitoric systems excludes most of the `typical' degenerate S^1-invariant singularities, see Kalashnikov's <cit.> list. There exists another class of integrable systems, namely hyperbolic semitoric systems (cf. <cit.>), whose union with the semitoric systems contains the hypersemitoric systems, see Remark <ref>. The hyperbolic semitoric systems do include all `typical' degenerate S^1-invariant singularities in Kalashnikov's <cit.> list.
§.§ Organization of the paper
The rest of this paper is organized as follows: In Section <ref>, we give the definition of (Liouville) integrability, before defining toric, semitoric, and hypersemitoric systems. Moreover, we explain some important properties of integrable systems and give a short survey of the theory of atoms and molecules. In Section <ref>, we discuss semitoric systems in detail, i.e., their symplectic classification in terms of five invariants and how one may obtain a semitoric system from a toric one. Finally, we recall some important examples. In Section <ref>, we consider hypersemitoric systems: we first discuss flaps and pleats, which occur in the momentum image of hypersemitoric systems. Then we consider how one may obtain hypersemitoric systems from (semi)toric systems before we briefly explain an explicit example.
§.§ Acknowledgements
The authors are very grateful to Álvaro Pelayo and San Vũ Ngọc for useful comments and suggestions that helped to improve the original version of this work.
The first author was fully supported by the Double Doctorate Funding of the Faculty of Science and Engineering of the University of Groningen.
Moreover, all authors were partially supported by the FNRS-FWO Excellence of Science (EoS) project `Symplectic Techniques in Differential Geometry' G0H4518N.
§ DEFINITIONS, CONVENTIONS, AND BACKGROUND
In this section, we give an outline of integrability with an emphasis on integrable systems defined on 4-manifolds and admitting a global effective Hamiltonian circle action. Hypersemitoric systems are a certain class of systems of this type. We start by recalling the classical Arnol'd-Liouville-Mineur theorem, and then move from toric to semitoric to hypersemitoric systems. We also show how the theory relates to the general frameworks of monodromy and bifurcations of Liouville tori, i.e., Fomenko-Zieschang theory.
§.§ Integrable systems
Let (M, ω) be a symplectic manifold of dimension 2n. Since the symplectic form is non-degenerate, for any function f ∈ C^∞(M,ℝ), there exists a unique vector field X_f, called the Hamiltonian vector field of f, such that ι_X_fω = - df. The function f is called the Hamiltonian, and ż = X_f(z) is called a Hamiltonian system, sometimes briefly denoted by X_f. For two Hamiltonians f,g ∈ C^∞(M,ℝ), the Poisson bracket is defined by {f, g} := ω(X_f, X_g). If {f, g} = 0, then f and g are said to Poisson commute. Note that {f, g} = X_f(g). If f and g Poisson commute, then g is called a (first) integral of X_f.
A Hamiltonian system X_H on a 2n-dimensional symplectic manifold (M, ω) is said to be completely integrable (or briefly integrable) if there exist n functionally independent integrals f_1 := H, f_2,…,f_n of X_H, i.e. their gradients are almost everywhere linearly independent on M, the integrals all Poisson commute with each other, and the flows of X_f_1, …, X_f_n are complete.
A shorter notation is (M, ω, F=(f_1,…,f_n)) and F is often referred to as the momentum or integral map of the system.
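These conditions can be checked symbolically in simple examples; the following minimal sympy sketch uses the canonical bracket on ℝ^4 (with coordinates (x, y, ξ, η) and ω = dx∧dξ + dy∧dη) and the pair of functions that reappears below as the focus-focus components.

import sympy as sp

x, y, xi, eta = sp.symbols('x y xi eta', real=True)

def poisson(f, g, pos=(x, y), mom=(xi, eta)):
    """Canonical Poisson bracket {f, g} on R^4."""
    return sum(sp.diff(f, qi) * sp.diff(g, pi) - sp.diff(f, pi) * sp.diff(g, qi)
               for qi, pi in zip(pos, mom))

q1 = x * xi + y * eta     # the pair used later as focus-focus components
q2 = x * eta - y * xi

print(sp.simplify(poisson(q1, q2)))   # 0, i.e. q1 and q2 Poisson commute
# Functional independence: the differentials have rank 2 at a sample point.
jac = sp.Matrix([[sp.diff(f, v) for v in (x, y, xi, eta)] for f in (q1, q2)])
print(jac.subs({x: 1, y: 0, xi: 0, eta: 1}).rank())   # 2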
A point p∈ M is regular if the rank of DF_p is maximal and singular otherwise. A value of F is regular if all points in the preimage are regular, and singular otherwise.
Similarly, one defines what it means for a fibre F^-1(r) of F to be regular, resp., singular and for a leaf of F, i.e. a connected component of a fibre, to be regular, resp. singular.
The Arnol'd-Liouville-Mineur theorem <cit.> describes the regular leaves of the foliation generated by the momentum map of a 2n-dimensional integrable system. Each regular leaf is a Lagrangian submanifold, and if the leaf is connected and compact, then it is diffeomorphic to an n-torus T^n. Such a foliation will be called a Lagrangian torus fibration. Let r ∈ℝ^n be a regular value for the momentum mapping F, and let F^-1(r) be a connected and compact fibre, and hence diffeomorphic to T^n, and let U be a tubular neighbourhood of F^-1(r). The Arnol'd-Liouville-Mineur theorem also tells us that U is diffeomorphic to V × T^n, where V is an open set of ℝ^n. On V × T^n, there exist coordinates I_1, …, I_n, ϕ_1, …, ϕ_n, called action-angle coordinates. Here each I_i for i = 1, …, n is a function of the f_i's, whilst each ϕ_i is a standard angle coordinate on T^n. In action-angle coordinates, the symplectic form becomes ω = ∑ dϕ_i∧ dI_i. Note that, in general, action-angle coordinates only exist locally. Duistermaat <cit.> showed that there can exist obstructions to the global existence of action-angle coordinates in terms of the (Hamiltonian) monodromy and the Chern class on the topological level as well as the Lagrangian class on the symplectic level.
For us, monodromy will play an essential role so that we will recall its definition here; for more detail see <cit.>. Let F : M → B be a Lagrangian torus fibration over an n-dimensional manifold B and denote by R ⊆ B the set of the regular values of F.
Then there exists a natural covering
⋃_r ∈ R H_1(F^-1(r)) → R,
where H_1(F^-1(r)) is the first homology group of F^-1(r) with integer coefficients.
Because of this, there is a natural representation of π_1(R) into the group SL(n, ℤ) of automorphisms of the lattice
H_1(F^-1(r)) ≃ℤ^n. This representation is called the Hamiltonian monodromy of F : M → B (or of F : M → R). Thus, to any loop γ in R, one can assign an n× n integer matrix called the monodromy or the monodromy matrix along γ.
Note that Lagrangian torus fibrations are allowed to have singular points and these are precisely the points that encode essential properties of the underlying integrable system. One has in particular been interested in non-degenerate singular points, i.e. points for which the Hessians of the integrals span a Cartan subalgebra in the real symplectic Lie algebra sp(2n, ℝ) (cf. Bolsinov and Fomenko <cit.>). Locally one can describe such singularities by local normal forms (cf., among others, the works by Eliasson <cit.>, Miranda and Zung <cit.>, and Vũ Ngọc and Wacheux <cit.>): in a neighbourhood U of a non-degenerate singular point, one can find local symplectic coordinates (x_1, …, x_n, ξ_1, …, ξ_n) such that the symplectic form takes the form ω = ∑_i=1^n dx_i∧ dξ_i in U, and n functionally independent smooth integrals q_1, …, q_n : U →ℝ Poisson commuting with all f_1, …, f_n such that q_i is one of the following possible components:
* regular component: q_i = x_i,
* elliptic component: q_i = 1/2(x_i^2 + ξ_i^2),
* hyperbolic component: q_i = x_iξ_i,
* focus-focus components (exist in pairs): q_i = x_iξ_i + x_{i+1}ξ_{i+1} and q_{i+1} = x_iξ_{i+1} - x_{i+1}ξ_i.
We will eventually focus on 4-dimensional integrable systems. In that case, the following six different types of non-degenerate singular points can occur:
* rank 0: elliptic-elliptic, hyberbolic-hyperbolic, elliptic-hyperbolic and focus-focus,
* rank 1: elliptic-regular and hyperbolic-regular.
Williamson <cit.> (see also Bolsinov and Fomenko <cit.>) showed that to determine the type of a non-degenerate rank 0 singular point of a 4-dimensional integrable system (M, ω, F=(f_1, f_2)), it is sufficient to find the eigenvalues of the linearization of the Hamiltonian vector field of the linear combination c_1 f_1 + c_2 f_2 for generic c_1, c_2 ∈ℝ at this singular point since
* elliptic components have pairs of purely imaginary eigenvalues,
* hyperbolic components have pairs of purely real eigenvalues,
* focus-focus components have quadruples of complex eigenvalues with non-zero real- and imaginary parts.
Note also that, if λ is an eigenvalue of multiplicity k, then so are -λ as well as the complex conjugates of λ and -λ (cf. van der Meer <cit.>).
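A small numerical sketch of this eigenvalue test is given below. Here the eigenvalues are computed for the linearization of the Hamiltonian vector field of c_1 f_1 + c_2 f_2, i.e., for the product of the matrix of the canonical Poisson structure with its Hessian (this is the matrix whose spectrum shows the patterns listed above); the model Hessians are written down by hand from the normal form components, and the coefficients c_1, c_2 are arbitrary generic choices.

import numpy as np

# Matrix of the canonical Poisson structure in coordinates (x, y, xi, eta).
J_can = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.eye(2), np.zeros((2, 2))]])

def spectrum(hess):
    """Eigenvalues of the linearized Hamiltonian vector field J_can @ Hess."""
    return np.round(np.linalg.eigvals(J_can @ hess), 10)

c1, c2 = 1.3, 0.7   # generic coefficients of c1*f1 + c2*f2

# Elliptic-elliptic model: f1 = (x^2 + xi^2)/2, f2 = (y^2 + eta^2)/2.
H_ee = np.diag([c1, c2, c1, c2])
# Hyperbolic-hyperbolic model: f1 = x*xi, f2 = y*eta.
H_hh = np.array([[0, 0, c1, 0], [0, 0, 0, c2], [c1, 0, 0, 0], [0, c2, 0, 0]])
# Focus-focus model: f1 = x*xi + y*eta, f2 = x*eta - y*xi.
H_ff = np.array([[0, 0, c1, c2], [0, 0, -c2, c1], [c1, -c2, 0, 0], [c2, c1, 0, 0]])

print(spectrum(H_ee))   # +-i*c1, +-i*c2   (purely imaginary pairs)
print(spectrum(H_hh))   # +-c1, +-c2       (real pairs)
print(spectrum(H_ff))   # +-c1 +- i*c2     (complex quadruple)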
Concerning monodromy, we note that if Λ
is a (compact) leaf containing n singular points of which all are of focus-focus type, then it has been shown that the monodromy around Λ is given by
M = [ 1 n; 0 1 ],
see the works by Matsumoto <cit.>, Lerman and Umanskii <cit.>, Matveev <cit.>, and Zung <cit.>.
This result will be drawn on again in our discussion of semitoric and hypersemitoric systems.
§.§ Toric systems
Let us start with the `easiest' class of integrable systems:
Let (M,ω,F) be an integrable system with M compact and connected. If all integrals of (M,ω,F) generate an effective S^1-action, then the system is said to be a toric system.
Atiyah <cit.> and Guillemin and Sternberg <cit.> showed that the image of the momentum map of a toric system is a convex polytope, called the momentum polytope. Later, Delzant <cit.> showed that toric systems are classified up to isomorphism by their momentum polytope. Delzant's classification was then extended to non-compact manifolds by Karshon and Lerman <cit.>. Note that the singular points of a toric system are all non-degenerate and only contain components of elliptic or regular type.
§.§ Semitoric systems
Delzant's <cit.> classification of toric manifolds has been generalized by Pelayo and Vũ Ngọc <cit.> and by Palmer, Pelayo, and Tang <cit.> to the following class of integrable systems, called “semitoric systems”. Semitoric systems are a natural class of systems, generalizing toric systems by relaxing the assumption of periodicity on one of the integrals defining the system. Semitoric systems are closely related to so-called almost-toric systems, see for instance Symington <cit.> and Vũ Ngọc <cit.>. The notion “semitoric” is natural, and has been used in different contexts, including symplectic geometry of Hamiltonian torus actions by Karshon and Tolman <cit.>, integrable systems by Vũ Ngọc <cit.> and Pelayo and Vũ Ngọc <cit.>, partially equivariant embedding problems in toric geometry by Pelayo <cit.>, and mathematical physics by Martini and Taylor <cit.>. We refer to Pelayo <cit.> for further discussion and references.
Let (M, ω, F=(J,H)) be a 4-dimensional integrable system, where M is connected. Then (M, ω, F=(J,H)) is a semitoric system if
* J is proper and generates an effective S^1-action,
* F has only non-degenerate singularities (if any) and none of them admit hyperbolic components.
Note that, under the assumptions of Definition <ref>, Vũ Ngọc <cit.> showed that the fibres of F are connected, thus generalizing the connectivity statement from the toric case as shown by Atiyah <cit.> and Guillemin and Sternberg <cit.>.
The main difference between toric and semitoric systems is the possible appearance of focus-focus singular points. Note that if c ∈ F(M) is a focus-focus singular value, then its preimage F^-1(c) has the shape of a so-called pinched torus where the number of pinches equals the number of focus-focus points in the fibre, cf. for instance Bolsinov and Fomenko <cit.>.
Vũ Ngọc <cit.> showed that one can associate an equivalence class of polygons with the image of the momentum map of a semitoric system. But unlike in the toric case, this is not enough to classify semitoric systems. Pelayo and Vũ Ngọc <cit.> were able to classify so-called simple semitoric systems, i.e. semitoric systems for which each fibre of J contains at most one focus-focus point, by formulating the following five invariants:
* the number of focus-focus points,
* the Taylor series or singularity type invariant,
* the polygon invariant,
* the height invariant, and
* the twisting index invariant.
Palmer, Pelayo and Tang <cit.> extended the result to the non-simple case, building on the symplectic classification of multi-pinched focus-focus fibres by Pelayo and Tang <cit.>.
The five invariants will be discussed further in Section <ref>, where two examples will also be covered, namely the coupled angular momenta (Section <ref>) and an example for which the polygon takes the shape of an octagon (Section <ref>). Other important examples of semitoric systems are the spherical pendulum (cf. Dullin <cit.>) and the Jaynes-Cummings model (cf. Babelon, Cantini and Douçot <cit.>, Pelayo and Vũ Ngọc <cit.>, and Alonso, Dullin and Hohloch <cit.>).
§.§ Hypersemitoric systems
Hohloch and Palmer <cit.> considered a yet more general class of integrable systems than semitoric systems by allowing for singular points with hyperbolic components and certain degenerate singular points, namely so-called parabolic singular points: a singular point p of an integrable system (M, ω, F=(f_1,f_2)) is parabolic if there exists a neighbourhood U ⊂ M of p with (generally non-canonical) coordinates (x, y, λ, ϕ) and functions q_i = q_i(f_1,f_2) for i ∈{ 1,2} of the form
q_1 = x^2 - y^3 + λ y,    q_2 = λ.
A coordinate free definition is given in Bolsinov, Guglielmi and Kudryavtseva <cit.>.
Note that the same normal form in fact applies to parabolic orbits, which means that from the smooth point of view, there is only one type of degenerate
singularities appearing in hypersemitoric systems (for more details, see Kudryavtseva and Martynchuk <cit.>).
Parabolic points are also known under the name of cusps or cuspidal points. Moreover, parabolic points naturally appear as transition points between (families of) elliptic-regular and hyperbolic-regular points.
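The name `cusp' can be made explicit by a short symbolic computation with the normal form above: the critical set of (q_1, q_2) is the curve x = 0, λ = 3y^2, and its image in the (q_2, q_1)-plane is the semicubical cusp 27 q_1^2 = 4 q_2^3. A minimal sympy sketch:

import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
q1 = x**2 - y**3 + lam * y
q2 = lam

# The rank of the Jacobian of (q1, q2) in (x, y, lambda) drops exactly where
# dq1/dx = 0 and dq1/dy = 0, since the q2-row only involves lambda.
crit = sp.solve([sp.diff(q1, x), sp.diff(q1, y)], [x, lam], dict=True)
print(crit)                      # [{x: 0, lambda: 3*y**2}]

sol = crit[0]                    # image of the critical curve in the (q2, q1)-plane:
print(q2.subs(sol), sp.simplify(q1.subs(sol)))   # (3*y**2, 2*y**3), i.e. 27*q1**2 = 4*q2**3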
The following definition generalizes the natural notions of toric and semitoric systems we have seen earlier in this paper, and appears in recent work by Hohloch and Palmer <cit.>, following also work by Kalashnikov <cit.> as explained below.
A 4-dimensional integrable system (M, ω, F=(J,H)) is called hypersemitoric if
* J is proper and generates an effective S^1-action,
* all degenerate singular points of F (if any) are of parabolic type.
Note that the existence of a global S^1-action prevents the appearance of hyperbolic-hyperbolic singularities in a hypersemitoric system. The original motivation for introducing this class, however, comes from the result of Hohloch and Palmer <cit.> stating that any 4-dimensional Hamiltonian system X_J which generates an effective S^1-action is extendable to a hypersemitoric system (M, ω, (J,H)). Furthermore, the set of hypersemitoric systems is open in the set of 4-dimensional integrable systems with a global effective Hamiltonian circle action (see Kalashnikov <cit.>).
Dullin and Pelayo <cit.> showed that, starting with a semitoric system, one can use a subcritical Hamiltonian-Hopf bifurcation (which transforms a focus-focus point to an elliptic-elliptic point, see Sections <ref> and <ref>) to generate a flap (see Section <ref>) on said system, thus creating a hyperbolic semitoric system (cf. <cit.>). Although the name of this type of system is very similar to the name hypersemitoric, they are defined differently. Hyperbolic semitoric systems require the same conditions as hypersemitoric systems for the integral J generating a circle action. However, the set of hyperbolic singularities in hyperbolic semitoric systems is required to be non-empty, and the set of degenerate singularities is required to be isolated, not necessarily of parabolic type. Nevertheless, many hypersemitoric systems can thus be generated by performing subcritical Hamiltonian-Hopf bifurcations, together with so-called blow-ups (also known as corner chops, see for instance Hohloch and Palmer <cit.> and references therein) on the (newly generated) elliptic-elliptic points.
§.§ Topological invariants: atoms and molecules
Finally, we will recall a complete topological invariant for a generic isoenergy level of a two degree of freedom integrable system which was introduced by Fomenko and Zieschang <cit.>. This invariant is intimately linked to hyperbolic-regular and elliptic-regular points
and naturally appears in (hyper)semitoric systems as well as in systems without a global S^1-action, which in fact form a majority of known integrable systems (including the Kovalevskaya top and many other integrable cases in rigid body dynamics, various geodesic flows, billiards, etc.). We will follow the presentation of Bolsinov and Fomenko <cit.>.
Let f be a Morse function on a manifold M. Note that the leaves of f foliate the manifold. Let x ∼ y if and only if x and y are in the same leaf of f and denote by Γ := M / ∼ the space of leaves of f.
Since f is a Morse function, Γ is in fact a graph, called the Reeb graph of f on M, where singular leaves give rise to the vertices. There are two types of vertices:
* a vertex is called an end vertex if it is the end of one edge only,
* otherwise it is called an interior vertex.
Note that the end vertices of a Reeb graph correspond to local minima and maxima (thus elliptic points) of the Morse function, whilst the interior vertices correspond to saddle-points (thus hyperbolic points).
Let f : M →ℝ be a Morse function on a 2-dimensional surface M. An atom is a tubular neighbourhood denoted by P^2 of a singular fibre f^-1(c) together with the fibration f : P^2 →ℝ on this neighbourhood. The atom is orientable if the surface P^2 is orientable and non-orientable otherwise. We now give a brief overview of the so-called simple atoms, which are atoms whose singular fibres contain only one singular point and which are referred to as atom A, atom B and atom B̃. There exist many more atoms, which are defined similarly to the aforementioned ones. A more detailed exposition can be found in Bolsinov and Fomenko <cit.>.
Let us first consider atom A, which represents the case of local minima or maxima of the function f. The Reeb graph of the atom is a line segment illustrating the energy levels of f together with an arrow pointing in the direction of increasing energy, and a symbol A illustrating the extrema. Thus, there exist two atoms of type A of which the associated Reeb graphs are sketched in Figure <ref>.
One can do a similar construction for saddles. Note, however, that there exist both orientable and non-orientable saddles, and they lead to atoms of type B and B̃, respectively. One can generate such atoms by considering a cylinder and gluing a strip to one of its ends (more specifically, attaching an index-1 handle). If the strip is not twisted, this can be deformed to an orientable saddle, whilst if it is twisted, it can be deformed to a non-orientable saddle. Figure <ref> shows the Reeb graphs of these atoms.
There also exist atoms with more than one singular point in the singular fibre (cf. Bolsinov and Fomenko <cit.>). However, these atoms still form two main types: the first type consists only of atoms A, whilst the second type consists of all other atoms (which are in fact saddle atoms).
Let now (M, ω, (H,f)) be an integrable system on a symplectic 4-manifold M and let Q = {x ∈ M | H(x) = constant} be a `generic' so-called isoenergy 3-surface (see Bolsinov and Fomenko <cit.> for the exact conditions on Q). Let Q/∼ be the space of leaves, which can also be pictured as a (Reeb) graph where the vertices correspond to the singular leaves. Now, the singular leaves correspond to so-called 3-atoms, which are defined similarly to the atoms we saw before, but now the neighbourhoods are 3-dimensional. It turns out that these 3-atoms are in one-to-one correspondence with the set of 2-atoms possibly endowed with a finite number of marked points or stars – corresponding to exceptional fibres of the Seifert fibration naturally associated to a 3-atom, see Bolsinov and Fomenko <cit.>. For simplicity, 2-atoms with stars will also be referred to as 2-atoms. Thus, we will consider the graph defined by Q/∼ with the vertices corresponding to 2-atoms.
This graph is called the molecule of (M, ω, (H,f)) on Q.
A molecule contains a lot of information of the foliation of the isoenergy surface Q. But this type of molecule consists of atoms glued together so far without the knowledge of how this gluing is performed. Keeping track of the gluing gives us the final piece of information that we need to give a molecule the meaning of an invariant: the gluing is performed by the so-called gluing matrix
C_i =
[ α_i β_i; γ_i δ_i ]∈ GL(2, ℤ),    det C_i = -1.
To the gluing matrix C_i, there are two invariants assigned, namely
r_i :=
α_i/β_i mod 1 if β_i≠ 0,
∞ if β_i = 0,
and ϵ_i :=
sign β_i if β_i≠ 0,
sign α_i if β_i = 0.
These two invariants alone are not enough for our purposes, and so one more invariant has to be introduced. An edge e_i of a molecule W is called infinite, if r_i = ∞, and otherwise finite. Cutting the molecule along finite edges splits it into several connected components. The components not containing any atoms of type A are called families. Let U_k be a family. Recall that the edges of atoms are `oriented' by arrows. An edge in U_k is said to be outgoing if the arrow points from a vertex inside U_k to a vertex outside U_k. In the opposite case an edge in U_k is called incoming. If the edge joins a vertex inside U_k to another vertex inside U_k, then the edge is called interior. To each edge e_i in U_k we assign the following integer:
Θ_i :=
⌊α_i/β_i⌋ if e_i is an outgoing edge,
⌊-δ_i/β_i⌋ if e_i is an incoming edge,
-γ_i/α_i if e_i is an interior edge.
With this, we construct the third, and final, invariant we want to associate to W, namely
n_k := ∑_e_i∈ U_kΘ_i∈ℤ.
The invariants r_i, ϵ_i and n_k will be called marks. One can now endow the molecule W with the three marks defined above, and define the marked molecule as the quadruple W^* := (W, r_i, ϵ_i, n_k). Fomenko and Zieschang <cit.> showed that two integrable systems on generic isoenergy 3-surfaces are Liouville equivalent if and only if their marked molecules coincide. Marked molecules are also known as Fomenko-Zieschang invariants. The collection of such marked molecules can be thought of as a topological portrait of the system, which contains more information than for example the topological types of the individual singular leaves/fibres.
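The marks of a single edge or family can be read off a gluing matrix directly from the formulas above. The following sketch does this for a hypothetical family with one outgoing, one incoming, and one interior edge; the gluing matrices are invented for illustration (they satisfy det C = -1, and the interior edge is taken to be an infinite edge, i.e. β = 0).

from fractions import Fraction
from math import floor

def marks(alpha, beta, gamma, delta):
    """Marks r and epsilon of a gluing matrix C = [[alpha, beta], [gamma, delta]], det C = -1."""
    assert alpha * delta - beta * gamma == -1
    if beta != 0:
        return Fraction(alpha, beta) % 1, (1 if beta > 0 else -1)   # r in Q/Z, eps = sign(beta)
    return None, (1 if alpha > 0 else -1)                            # r = infinity, eps = sign(alpha)

def theta(alpha, beta, gamma, delta, role):
    """Contribution Theta of one edge of a family, depending on its role."""
    if role == "outgoing":
        return floor(Fraction(alpha, beta))
    if role == "incoming":
        return floor(Fraction(-delta, beta))
    if role == "interior":
        return Fraction(-gamma, alpha)
    raise ValueError(role)

# Hypothetical family with three edges.
edges = [((1, 2, 1, 1), "outgoing"),
         ((0, 1, 1, 0), "incoming"),
         ((1, 0, 3, -1), "interior")]
print([marks(*C) for C, _ in edges])                # r_i and eps_i of each edge
print(sum(theta(*C, role) for C, role in edges))    # the mark n_k of this family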
Since hypersemitoric systems only contain elliptic, hyperbolic-regular, focus-focus and parabolic points, but no hyperbolic-hyperbolic ones, one can show that marked loop molecules form complete local topological invariants of the torus fibration of a hypersemitoric system. In other words, the loop molecules around a given
singularity of the hypersemitoric system determine its topological type.
Note that the same is not true for general hyperbolic-hyperbolic singularities of
integrable 2 degree of freedom systems; see Bolsinov and Oshemkov <cit.>.
§ SEMITORIC SYSTEMS
In this section, we will briefly recall the construction of the five invariants of semitoric systems introduced by Pelayo and Vũ Ngọc <cit.> and their generalizations, then observe transitions from toric to semitoric systems by creating focus-focus points, and finally consider some explicit examples.
Two semitoric systems (M_1,ω_1,(J_1,H_1)) and (M_2,ω_2,(J_2,H_2)) are said to be isomorphic if there exists a symplectomorphism φ : M_1→ M_2 such that φ^*(J_2,H_2) = (J_1,f(J_1,H_1)) for some smooth function f such that ∂ f/∂ H_1 > 0. Since semitoric systems always come with a smooth, globally defined action J, this definition is basically saying that two semitoric systems are equivalent if and only if the corresponding Lagrangian fibrations are fibrewise symplectomorphic (up to possibly changing J to ± J + const).
Pelayo and Vũ Ngọc <cit.> showed that two simple semitoric systems are isomorphic if and only if all five invariants (defined below) are equal for the two systems. The simplicity assumption has been removed from the classification by Palmer, Pelayo and Tang <cit.>, but the invariants in the non-simple case are more complicated, and we do not present them here.
§.§ The five semitoric invariants
Let (M, ω, F=(J,H)) be a simple semitoric system. We will use the identification S^1 = ℝ/2πℤ in what follows.
Let us now explain each of the five invariants in more detail.
§.§.§ Number of focus-focus points
Vũ Ngọc <cit.> proved that M has a finite number of focus-focus singular points. Denoting this number by n_FF, one has thus 0 ≤ n_FF < ∞. Then n_FF forms an invariant for semitoric systems (cf. Pelayo and Vũ Ngọc <cit.>).
§.§.§ Taylor series invariant
Denote the focus-focus points of (M, ω, F=(J,H)) by m_i for 1 ≤ i ≤ n_FF. Let us now consider one focus-focus point, and denote it by m without the index, to simplify the notation. Recall from Section <ref> that there exists a neighbourhood U of m with symplectic coordinates (x,y,ξ,η) such that the quadratic parts of J and H span a Cartan subalgebra with the following basis:
q_1 = xξ + yη, q_2 = xη - yξ.
Note that the Hamiltonian flow generated by q_2 is 2π-periodic.
We now follow the exposition in Vũ Ngọc <cit.>:
Let Λ_z = F^-1(z) be a regular fibre near the singular fibre containing m. For any point A ∈Λ_z, denote by τ_1(z) the first return time of the flow generated by X_H to the X_J-orbit through A, and let τ_2(z) ∈ℝ/2πℤ be the time it takes to close up this trajectory under the flow of X_J. Vũ Ngọc <cit.> showed that, for some determination of the complex logarithm ln z, the functions
σ_1(z) := τ_1(z) + (ln z), σ_2(z) := τ_2(z) - (ln z)
extend to smooth and single-valued functions in a neighbourhood of c = F(m). Moreover, σ := σ_1 dz_1 + σ_2 dz_2 yields a closed 1-form under the identification z=(z_1, z_2) ∈ℝ^2.
Define S via dS = σ and S(c) = 0 and denote the Taylor series of S at z = c by (S)^∞. The Taylor series invariant, for all focus-focus points m_i, 1 ≤ i ≤ n_FF, is then given by the n_FF-tuple ((S_i)^∞)_i=1^n_FF.
There is another way to define the Taylor series invariant. Let γ_z^1 and γ_z^2 be a basis of the first homology group of the torus Λ_z that varies smoothly with the base point z such that γ_z^1 is a representative of the cycle corresponding to the (periodic) flow of J and γ_z^2 represents a homology cycle obtained by first moving with the flow of X_H using time
τ_1(z) and then with the flow of X_J using time τ_2(z).
Now consider the action integral
𝒜(z) := ∫_γ_z^2α,
where α is a primitive of ω on some neighbourhood of Λ_z. Then one finds for z≃(z_1,z_2) ∈ℝ^2
d𝒜(z) = τ_1(z) dz_1 + τ_2(z) dz_2.
One can in fact interpret S as a regularised action integral via
S(z) = 𝒜(z) - 𝒜(c) + (z ln z - z).
Note that the above construction involves a certain number of choices which have to be made compatibly with the construction of the polygon invariant and the twisting index invariant below. The exact dependencies are explained in detail in the forthcoming article by Alonso, Hohloch, and Palmer <cit.>.
§.§.§ Polygon invariant
Let m_1, …, m_n_FF be the focus-focus points and denote by c_1:=F(m_1), …, c_n_FF:= F(m_n_FF) their values ordered such that the first coordinate of the focus-focus values increases. Denote by B := F(M) the image of the momentum map. Vũ Ngọc <cit.> showed that the set B_r ⊆ F(M) of regular values of F coincides with the set int B ∖{c_1, …, c_n_FF}. One can render B_r simply connected by making a vertical cut from each focus-focus value c_i either upwards or downwards to the boundary of F(M).
By the Arnol'd-Liouville theorem, the momentum map induces an integral affine structure on B (which in general does not agree with the one induced by the inclusion of B into ℝ^2). Recall that affine transformations leaving a vertical line invariant arise from vertical translations composed with a matrix of the form
T^k := [ 1 0; k 1 ]
with k ∈ℤ. Now denote by l_i⊂ℝ^2 the vertical line through the focus-focus singular value c_i∈ℝ^2. This line splits ℝ^2 into two half-spaces. For k ∈ℤ, let t_l_i^k : ℝ^2→ℝ^2 be the map that leaves the left half-space invariant and shears the right half-space by T^k. We now accommodate all focus-focus singular values by setting 𝐤 := (k_1, …, k_n_FF) and defining t_𝐤 := t_l_1^k_1∘…∘ t_l_n_FF^k_n_FF.
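Concretely, one way to realize t_l_i^k as a continuous piecewise-affine map is to let T^k act relative to an origin on the line l_i, i.e., to send (x, y) to (x, y + k(x - c_1)) on the right half-plane, where c_1 is the first coordinate of the focus-focus value, and to leave the left half-plane untouched. A small sketch (polygon data invented for illustration):

import numpy as np

def t_shear(points, c, k):
    """Apply t_l^k: identity to the left of the vertical line through c = (c1, c2),
    vertical shear by T^k = [[1, 0], [k, 1]] relative to that line on the right."""
    pts = np.atleast_2d(np.asarray(points, dtype=float)).copy()
    right = pts[:, 0] > c[0]
    pts[right, 1] += k * (pts[right, 0] - c[0])
    return pts

def t_bold_k(points, centers, ks):
    """Compose the shears for all focus-focus values: t_k = t_{l_1}^{k_1} o ... o t_{l_n}^{k_n}."""
    for c, k in zip(centers, ks):
        points = t_shear(points, c, k)
    return points

c = (1.0, 0.5)                              # hypothetical focus-focus value
square = [(0, 0), (2, 0), (2, 1), (0, 1)]   # hypothetical polygon vertices
print(t_shear(square, c, k=1))              # vertices to the right of x = 1 are sheared upwards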
For each 1 ≤ i ≤ n_FF, let ϵ_i∈{-1, +1}, and denote by l_i^ϵ_i the vertical half line starting at c_i, going upwards if ϵ_i = +1, and downwards if ϵ_i = -1, and let l^ϵ := l_1^ϵ_1∪ … ∪ l_n_FF^ϵ_n_FF be the union of the lines running through all focus-focus values for a choice of ϵ := (ϵ_1, … , ϵ_n_FF). Then the set B ∖ l^ϵ is simply connected for all possible choices of ϵ_i.
Vũ Ngọc <cit.> showed that there exists a homeomorphism f:=f_ϵ : B →ℝ^2 depending on the choices of ϵ and preserving J such that f(B) is a rational convex polygon.
Restricted to B∖ l^ϵ, the homeomorphism f becomes a diffeomorphism onto its image which sends the integral affine structure of B_r ∖ l^ϵ to the integral affine structure of ℝ^2.
The map μ := f ∘ F is called a generalized toric momentum map for (M, ω, F=(J,H)) (cf. Pelayo and Vũ Ngọc <cit.>).
In order to turn the polygon Δ := μ(M) into an invariant of the underlying semitoric system one needs to get rid of the choices involved in the construction of Δ. This is done by means of a group action: consider the group 𝒢 := {T^k | k ∈ℤ} and the action of the group {-1, +1}^n_FF×𝒢 on (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) given by
((ϵ'_i)_i=1^n_FF, T^k) ·(Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) := (t_𝐮(T^k(Δ)), (l_i)_i=1^n_FF, (ϵ'_iϵ_i)_i=1^n_FF)
where 𝐮 = ((ϵ_i- ϵ'_i)/2)_i=1^n_FF. Then the polygon invariant is the orbit of (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF) under the above action (cf. Pelayo and Vũ Ngọc <cit.>).
§.§.§ Height invariant
For i ∈{1, …, n_FF}, consider the focus-focus singular points m_i and their images c_i := F(m_i) and let μ and Δ be as in Section <ref>. The height (or the volume) invariant, as introduced by Pelayo and Vũ Ngọc <cit.>, is given by the n_FF-tuple (h_1, …, h_n_FF) with
h_i := pr_2(μ(m_i)) - min_s ∈ l_i∩Δpr_2(s),
where pr_2 : ℝ^2→ℝ is the projection onto the second coordinate (in <cit.> it is explained how this height invariant corresponds to the volume of certain submanifolds, and hence it is sometimes called the volume invariant). The function h_i thus measures the distance between the focus-focus value in the polygon Δ=μ(M) and its lower boundary. Furthermore, h_i is independent of the choice of the generalized toric momentum map μ, since it can also be seen as the symplectic volume of certain level sets.
§.§.§ Twisting index invariant
Let U_i be a neighbourhood of a focus-focus singular point m_i∈ F^-1(c_i), and let V_i = F(U_i). Vũ Ngọc and Wacheux <cit.> showed that there exists a local symplectomorphism Ψ : (ℝ^4, ω_0) → (M, ω) sending the origin to m_i, and a local diffeomorphism G : ℝ^2→ℝ^2 sending 0 to F(m_i) such that F ∘Ψ = G ∘ q_i, where q_i = (q_i^1, q_i^2) is given by (<ref>). Recall that q_i^2 generates a circle action, so it must correspond to J. If necessary, after composing Ψ with either/both of the canonical transformations (x, ξ) ↦ (-x, -ξ) and (x, y, ξ, η) ↦ (-ξ, -η, x, y), one finds that G is of the form
G(q_i^1, q_i^2) = (q_i^2, G_2(q_i^1, q_i^2)),
where ∂ G_2/∂ q_i^1(0) > 0. We will extend G_2(q_i^1, q_i^2) to another Hamiltonian function G_2(H, J), such that they are equal at their restriction to U_i. Here (H, J) is a new momentum map for the semitoric system, and G_2 : ℝ^2→ℝ is some function to be discussed further below.
Recall the action integral introduced in the construction of the Taylor series invariant (see Subsection <ref>):
𝒜_i(z) := ∫_γ_i, z^2α.
Let G_i(z) := 𝒜_i(z) - 𝒜_i(c_i) for i = 1, …, n_FF.
Observe that G_i(0) is well defined and equal to zero since the actions 𝒜_i(z) are given by integrating a primitive 1-form over a loop on a Lagrangian torus Λ_z. Note that this could also have been seen by using the regularised action in (<ref>). Now, let us define the Hamiltonian function via H_i, p := G_i(J, H). Then lim_m → m_i H_i, p = 0. Note also that, by (<ref>), we get a Hamiltonian vector field
X_i, p = (τ_i^1∘ F) X_J + (τ_i^2∘ F) X_H.
This was discussed by Pelayo and Vũ Ngọc <cit.>. They called the momentum map ν := (J, H_i, p) the privileged momentum map for F = (J, H).
Now, let μ be a generalized toric momentum map. As μ preserves J, its components satisfy (μ_1, μ_2) = (J, μ_2). As μ_i, J and H_i,p are all action variables, there exists an invertible matrix A ∈ GL(2, ℤ) such that (X_J, X_μ_2) = A(X_J, X_i, p). The matrix has to be of the form
A =
[ 1 0; k_i 1 ],
hence X_μ_2 = k_i X_J + X_i, p.
Pelayo and Vũ Ngọc <cit.> showed that k_i does not depend on X_i, p or G_i. The integer k_i is called the twisting index. Note that, if k_i is the twisting index of m_i, then locally μ = T^k_iν. Also, if the polygon is transformed by some T^r, then ν does not change, whilst μ→ T^rμ.
Note that the twisting index depends on the polygon Δ. To introduce an actual invariant, similarly to Subsection <ref>,
we consider the orbit of (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF, (k_i)_i=1^n_FF) under the action of {-1, +1}^n_FF×𝒢. Specifically, with 𝐮 := (u_i)_i=1^n_FF := ((ϵ_i-ϵ_iϵ'_i)/2)_i=1^n_FF, the action is given by
((ϵ'_i)_i=1^n_FF, T^k) · (Δ, (l_i)_i=1^n_FF, (ϵ_i)_i=1^n_FF, (k_i)_i=1^n_FF)
= (t_𝐮(T^k(Δ)), (l_i)_i=1^n_FF, (ϵ'_iϵ_i)_i=1^n_FF, (k + k_i + ∑_j=1^ĩ_i u_j)_i=1^n_FF)
where we set 0=:∑_j=1^0 u_j and where ĩ_i =i or ĩ_i= i-1 depending on the choice of certain conventions.
This orbit is called the twisting index invariant (cf. Pelayo and Vũ Ngọc <cit.>).
Note that the above formula differs slightly from the original one given in Pelayo and Vũ Ngọc <cit.> by the extra term ∑_j=1^ĩ_i u_j. This term accounts for the way in which changing cut directions affects the twisting index.
Its absence in the original formula was pointed out to us by Yohann Le Floch and Joseph Palmer (for a detailed discussion, we refer to the forthcoming paper by Alonso, Hohloch, and Palmer <cit.>).
§.§ Modifications and generalizations of the five invariants
In fact, all five invariants are intimately related, and there is no need to consider them separately. Le Floch and Palmer <cit.> took three of the five semitoric invariants — the number of focus-focus points, the polygon invariant, and the height invariant — and joined them together to form a single invariant, called the marked semitoric polygon invariant. When Palmer, Pelayo and Tang <cit.> extended the classification to non-simple semitoric systems they gathered all five invariants into one big invariant, called the complete semitoric invariant.
§.§ Supercritical Hamiltonian-Hopf bifurcation
If one perturbs a toric system, one may obtain a semitoric system, in particular if an elliptic-elliptic point is transformed into a focus-focus point. Such a transformation is called a supercritical Hamiltonian-Hopf bifurcation. In
coordinate form, it can more specifically be defined as follows
(see in particular Equation (<ref>) below with a >0).
Let 𝔊 be a Lie group acting on the space of smooth real-valued functions C^∞(^n) whose action is defined by g · f(x) = f(g^-1(x)) for g ∈𝔊, f ∈ C^∞(^n) and x ∈^n. Furthermore, let [x] denote the space of polynomials on ^n, and let [x]^𝔊 be the space of 𝔊-invariant polynomials. Hilbert showed that, if 𝔊 is compact, then there exist finitely many invariant polynomials ρ_i∈[x]^𝔊 for i = 1, …, k which generate [x]^𝔊 as an algebra (cf. van der Meer <cit.>). Such invariant polynomials ρ_i are called Hilbert generators.
Let (x, y, ξ, η) be canonical coordinates on ^4 and define the following three Hilbert generators: J = x η - y ξ, X = 1/2(ξ^2 + η^2), and Y = 1/2(x^2 + y^2). When considering (hyper)semitoric systems, we will choose 𝔊 = S^1 to be given by the periodic Hamiltonian flow of X_J. Then van der Meer <cit.> showed that there exists the following equivariant normal form for a Hamiltonian-Hopf bifurcation
Ĥ_s = J + X + s Y + a Y^2,
where s, a ∈ are parameters with a ≠ 0, which we for simplicity take as a definition for this type of bifurcation.
If a > 0 the bifurcation is called supercritical, and
subcritical otherwise. Note that here the momentum map is given by (J, Ĥ_s).
Recall that the singular points in a 2-degree of freedom toric system all have only elliptic and/or regular components. If we perturb one of the integrals of a 2-degree of freedom toric system as in the above normal form, then we can make one of the elliptic-elliptic singular points turn into a focus-focus point. On the level of eigenvalues, 4 purely imaginary eigenvalues at an elliptic-elliptic point collide when the bifurcation parameter attains the value s = 0 and then
change into four complex eigenvalues (cf. van der Meer <cit.>). One can see two examples of supercritical Hamiltonian-Hopf bifurcations in Figure <ref> and Figure <ref>. The subcritical case, when the sign of a is negative, is treated in Section <ref>.
§.§ Examples
Computing the semitoric invariants explicitly for a given system has proven to be very difficult, since it requires a combination of theoretical knowledge and strong computational skills.
§.§.§ Coupled angular momenta system
Consider the manifold M := S^2× S^2 and equip it with the symplectic form ω := - (R_1ω_S^2⊕ R_2ω_S^2) where ω_S^2 is the standard symplectic form on S^2 and R_1, R_2∈^>0.
When Sadovskií and Zhilinskií <cit.> studied the so-called coupled angular momenta system, they found a focus-focus point and nontrivial monodromy. Since this system is both interesting from a physics point of view and not very complicated from a mathematical point of view, it recently became a popular subject to study.
Le Floch and Pelayo <cit.> showed that the coupled angular momenta system on M, given in Cartesian coordinates by
J(x_1,y_1,z_1,x_2,y_2,z_2) := R_1(z_1-1) + R_2(z_2+1),
H(x_1,y_1,z_1,x_2,y_2,z_2) := (1-t)z_1 + t(x_1x_2 + y_1y_2 + z_1z_2),
describes a semitoric system for all t ∈∖{t^-,t^+}, where
t^± := R_2/2R_2 + R_1∓ 2√(R_1R_2).
The system has four singular points of rank 0 which are located at the top and bottom of the spheres, i.e. when (z_1,z_2) = (± 1, ± 1). Three of the points are always elliptic-elliptic, whilst (1, -1) is a focus-focus point if t^- < t < t^+ and elliptic-elliptic if t < t^- or t > t^+. Thus, the number of focus-focus points invariant is 0 if (1, -1) is elliptic-elliptic, or 1 if (1, -1) is focus-focus.
For some values of t, the moment image is plotted in Figure <ref>.
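For concreteness, the thresholds t^± and the resulting classification of the point (1,-1) can be evaluated numerically; the following is a minimal sketch (the values of R_1, R_2, t are arbitrary illustrative choices):

import numpy as np

# Minimal sketch: classify the rank-0 point (1, -1) of the coupled angular
# momenta system from the thresholds t^- and t^+ given above.
def t_minus_plus(R1, R2):
    t_minus = R2 / (2 * R2 + R1 + 2 * np.sqrt(R1 * R2))
    t_plus = R2 / (2 * R2 + R1 - 2 * np.sqrt(R1 * R2))
    return t_minus, t_plus

R1, R2, t = 1.0, 2.0, 0.5
tm, tp = t_minus_plus(R1, R2)
kind = "focus-focus" if tm < t < tp else "elliptic-elliptic"
print(f"t^- = {tm:.4f}, t^+ = {tp:.4f}, point (1,-1) is {kind}")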
Le Floch and Pelayo <cit.> computed, for certain parameter values, the first two terms of the Taylor series, the polygon, and the height invariant for this system. The full classification was achieved by Alonso, Dullin and Hohloch <cit.>. The semitoric invariants of the coupled angular momenta system are as follows: The number of focus-focus points is either zero or one, see above. The Taylor series invariant is of the form
S(j,k) =
j arctan( R_2^2(2t - 1) - R_1R_2(t + 1) + R_1^2t/(R_1 - R_2)R_1 r_A)
+ k ln( 4 R_1^5/2 r_A^3/R_2^3/2(1 - t) t^2)
+ j^2/16 R_1^4 R_2 r_A^3( R_2^4(2t - 1)^3 - R_1R_2^3(32t^3 - 46t^2 + 17t - 1)
- 3R_1^2R_2^2t(4t^2 - 7t + 1) + R_1^3R_2(3 - 5t)^2 - R_1^4t^3)
+ jk(R_2 - R_1)/8R_1^3R_2r_A^3( R_2^2(2t - 1)^2 - 2R_1R_2t(6t - 1) + R_1^2t^2)
+ k^2/16R_1^4R_2r_A^3( R_2^4(2t - 1)^3 - R_1R_2^3(16t^3 - 42t^2 + 15t + 1)
- R_1^2R_2^2t(28t^2 - 3t -3) + R_1^3R_2t^2(13t - 3) + R_1^4t^3)
+ 𝒪(3),
where
r_A = √((R_1^2 + 4R_2^2)(t - t^-)(t^+ - t)).
The polygon and twisting index invariants are illustrated in Figure <ref>.
Set R:= R_2/R_1. Then the height invariant of the coupled angular momenta is given by
h = 2 min(R_1, R_2)
+ R_1/π t( r_A - 2 R t arctan( r_A/R - t) - 2 t arctan( r_A/R + t - 2 R t) ).
§.§.§ The (semi)toric octagon system
De Meulenaere and Hohloch <cit.> constructed a semitoric system with four focus-focus singular points. The system was created by first considering the octagon Δ obtained by chopping off the corners of the square [0, 3] × [0,3]. Since Δ turned out to be a Delzant polygon, Delzant's <cit.> construction could be used to construct a toric system which has Δ as image of the momentum map. This is done by means of symplectic reduction of ^8 (equipped with its standard symplectic structure) and yields a 4-dimensional, compact, connected, symplectic manifold (M_Δ, ω_Δ). A point on M_Δ is written as an equivalence class of the form [z] = [z_1, …, z_8] with z_i∈ for i = 1, …, 8. The (toric) momentum map F = (J, H):(M_Δ, ω_Δ) →^2 is given by
J([z_1, …, z_8]) = 1/2|z_1|^2,
H([z_1, …, z_8]) = 1/2|z_3|^2.
Denote by ℜ the real part of a complex number. By perturbing H to
H_t: = (1-2t) H + t γ ℜ( z̅_2z̅_3z̅_4z_6z_7z_8)
for 0 < γ < 1/48, De Meulenaere and Hohloch <cit.> obtained a system with momentum map (J, H_t):(M_Δ, ω_Δ) →^2 that is toric for 0 ≤ t < t^-, semitoric for t^- < t < t^+, and toric again for t^+ < t ≤ 1, where
t^- := 1/2(1 + 24 γ) and
t^+ := 1/2(1 - 24 γ).
Note that 0 < t^- < 1/2 and 1/2 < t^+ < 1. At t = 1/2, the system has two focus-focus fibres, each containing two focus-focus points, see Figure <ref>. The two fibres then have the shape of double pinched tori. Apart from one representative of the polygon invariant and the number of focus-focus point, no semitoric invariants have yet been calculated.
§.§ State of the art concerning other semitoric systems
Spread over the literature (cf. works by Babelon, Dullin, Le Floch, Pelayo, Vũ Ngọc, and others), there are various partial results concerning the computation of the semitoric invariants for certain parameter values for certain systems.
For instance, a Taylor series type invariant has been calculated by Dullin <cit.> for the spherical pendulum (which is, strictly speaking, not a semitoric system due to lack of properness).
Pelayo and Vũ Ngọc <cit.> computed the number of focus-focus points, the polygon, and the height invariant for the so-called coupled spin oscillator system. Alonso, Dullin and Hohloch <cit.> completed the set of semitoric invariants for this system by computing the Taylor series and twisting index invariant.
Both of these systems have only one focus-focus point. Hohloch and Palmer <cit.> generalized the coupled angular momenta system to a family of semitoric systems with two focus-focus points. Alonso and Hohloch <cit.> computed the polygon and height invariant for a subfamily and Alonso, Hohloch and Palmer <cit.> are currently computing its twisting index invariant.
Le Floch and Palmer <cit.> devised semitoric systems arising from Hirzebruch surfaces and computed their number of focus-focus points, the polygon invariant, and, for certain parameter values, also their height invariant.
§ HYPERSEMITORIC SYSTEMS
In this section, we give a brief overview of existing and related results concerning hypersemitoric systems. Recall that, compared to semitoric systems, a hypersemitoric system (Definition <ref>) may in addition have singular points with hyperbolic components and degenerate singular points of parabolic type.
§.§ Flaps and pleats/swallowtails
Two possibilities of how hyperbolic-regular and parabolic points occur in hypersemitoric systems are so-called flaps and pleats/swallowtails. A good exposition with examples for pleats/swallowtails can be found in Efstathiou and Sugny <cit.>, and for flaps see Efstathiou and Giacobbe <cit.>.
There are various ways to visualize flaps and pleats/swallowtails. Instead of using the image of the momentum map over which a hypersemitoric (or even more general) system gives rise to a singular fibration with possibly disconnected fibres, it makes sense to remember the branching and disconnectedness by working with the so-called bifurcation complex (also known as unfolded momentum domain).
One can either identify it with the leaf space of a system (M, ω, F=(J, H)) or describe it directly as a stratified manifold V together with a map F̃: M → V and a projection τ: V →^2 such that τ∘F̃ = F and the regular level sets of F̃ correspond to the connected components of the level sets of F. We will summarize some of their findings.
In the preimage under τ of a sufficiently small neighbourhood of a parabolic value, the bifurcation complex has two sheets: one sheet, the local base ℬ, contains regular values and a compact line segment ℒ of hyperbolic-regular values, and one sheet, the local flap ℱ, contains a line of elliptic-regular and of hyperbolic-regular values (which meet at a parabolic value) as well as regular values `between' these lines, see Figure <ref>. Both sheets intersect (or rather touch) each other along the line segment of hyperbolic-regular values including its parabolic end point. The topological boundary of ℱ consists of the line segments of elliptic-regular and hyperbolic-regular values joint at the parabolic value and a line of regular values, called the free boundary.
Flaps and pleats/swallowtails now arise as follows: Consider a system with a compact line segment ℒ of hyperbolic-regular values with parabolic end points denoted by c_1 and c_2. For i ∈{ 1,2}, let ℬ_i be their local bases and ℱ_i their local flaps.
If one glues the free boundary of ℱ_1 to the free boundary of ℱ_2, this will define a flap topology around ℒ, see Figure <ref>. If the free boundary of ℱ_1 is glued to the boundary of ℬ_2, and the free boundary of ℱ_2 is glued to the boundary of ℬ_1, this will define a pleat topology, see Figure <ref>. Efstathiou and Giacobbe <cit.> showed that the bifurcation complex in an open neighbourhood of ℒ can have either the pleat topology or the flap topology.
Efstathiou and Giacobbe <cit.> proved another interesting result:
Let p and q be coprime integers and let S^3 := { (z_1, z_2) ∈^2 : |z_1|^2 + |z_2|^2 = 1 } be the unit sphere in ^2. Consider the (free) action of _p := /p on S^3 given by (z_1, z_2) ↦(exp(2 π i / p) z_1, exp(2 π i q / p) z_2). The lens space L(p,q) := S^3 / _p is the orbit space defined by this action. Then, with ℒ as above, the type of lens space L(p, 1) topologically embedded in F^-1(ℒ) determines the monodromy of the Lagrangian fibration in a neighbourhood of ℒ up to a sign determined by the choice of orientations.
§.§ Subcritical Hamiltonian-Hopf bifurcations
Recall from Section <ref>, that a semitoric system with focus-focus points may arise via supercritical Hamiltonian-Hopf bifurcations from a toric one.
Analogously, a hypersemitoric system with flaps may arise from a semitoric one with focus-focus points via so-called subcritical Hamiltonian-Hopf bifurcations by `replacing' a focus-focus point by a (small) flap, see for instance Dullin and Pelayo <cit.>.
To be more precise, recall the normal form Ĥ_s = J+ X + s Y + a Y^2 from Equation (<ref>): If the sign of a is negative, then a focus-focus point (four complex eigenvalues) will first turn into a degenerate point (two purely imaginary eigenvalues of multiplicity 2) and then will bifurcate into an elliptic-elliptic point (four purely imaginary eigenvalues) from the value of which, lying on a flap, two lines of elliptic-regular values emanate that connect the elliptic-elliptic value to the parabolic values (cf. Section <ref>). The parabolic values are connected by a line of hyperbolic-regular values.
In Figure <ref>, an example of a semitoric system that went through a subcritical Hamiltonian-Hopf bifurcation is displayed.
§.§ Atoms, molecules, and classifications
Recall from Section <ref> the notion of a marked molecule W^*, which is a complete isoenergy invariant of a 2 degree of freedom integrable system. The topology caused by the lines of elliptic-regular and hyperbolic-regular values in flaps and pleats (swallowtails) can in particular be described by marked molecules.
Here one can consider `loop molecules' (see Figure <ref>) around the parabolic values with B-atoms describing the bifurcation of one of the two lines emanating from the cusp and A-atoms the other bifurcation.
The important result in this context is that the loop molecule around the cusp is uniquely defined and moreover `knows' what happens in its vicinity, in the sense that the loop
molecule completely determines the topology of the corresponding singular torus fibration. This result directly follows from the fact that a single parabolic orbit (more precisely, the associated
compact singular fiber, which has the form of a cuspidal torus) gives rise to only one singular torus fibration up to a fibrewise homeomorphism, see Efstathiou and Giacobbe <cit.>. We conjecture that more is true in fact and there is only one
such torus fibration up to fibrewise diffeomorphisms, cf. Kudryavtseva and Martynchuk <cit.>.
A similar topological result is known for elliptic-elliptic, elliptic-hyperbolic and focus-focus singularities of integrable systems on 4-manifolds, but not so for hyperbolic-hyperbolic singularities (having multiple hyperbolic-hyperbolic points
on a singular fiber) which are in general not determined by their loop molecules only, see for instance <cit.>. Interestingly, in the smooth case, the fibrewise classification turns out to be different also in the case of focus-focus singularities
(having multiple points on the same singular fibre), see Bolsinov and Izosimov <cit.>.
The fibres of hypersemitoric systems will be classified by means of a `labeled graph' in the forthcoming paper by Gullentops and Hohloch <cit.> which extends the special case of hyperbolic-regular fibres studied in Gullentops' thesis <cit.>.
§.§ Examples
Hypersemitoric systems were first defined in Hohloch and Palmer <cit.> who gave several examples for this class of systems. There are more examples in the paper by Gullentops and Hohloch <cit.> and Gullentops' thesis <cit.>.
§.§.§ Hypersemitoric coupled angular momenta system
Let J and H be as in the (semitoric) coupled angular momenta system, as discussed in Section <ref>. We will now modify H, such that we instead consider the following:
H̃(x_1,y_1,z_1,x_2,y_2,z_2) := H(x_1,y_1,z_1,x_2,y_2,z_2) + sz_1^2,
with parameter s ∈. Then it turns out that, in the image of the momentum map F̃ = (J,H̃) at coupling parameter t = 0.5 (for which the semitoric case s = 0 always has a focus-focus value), we can generate flaps and pleats, see Figure <ref>. Moreover, the point p_1 = (0,0,1,0,0,-1) is of focus-focus type if s_p_1^- < s < s_p_1^+, where
s_p_1^± = R_1± 2 √(R_1 R_2)/4R_2.
If s < s_p_1^- or s > s_p_1^+, then p_1 is of elliptic-elliptic type. Numerics indicates that, if R_1 < R_2, for s < s_p_1^- a flap appears, and for some s > s_p_1^+, then a pleat appears. If s ∈{s_p_1^-,s_p_1^+}, then (0,0,1,0,0,-1) is a degenerate singularity. This can be shown by a similar procedure as in Le Floch and Pelayo <cit.>. Furthermore, the point p_2 = (0,0,-1,0,0,1) is a focus-focus point if s_p_2^- < s < s_p_2^+, where
s_p_2^± = R_1± 2 √(R_1 R_2) + 2R_2/4R_2.
When s < s_p_2^-, then F̃(p_2) is an elliptic-elliptic value on the boundary of the momentum map image. For some s > s_p_2^+ we have that F̃(p_2) is an elliptic-elliptic value which joins the pleat created by p_1, see Figure <ref>.
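A quick numerical check of these thresholds (a sketch only; the parameter values below are arbitrary and not taken from the references):

import numpy as np

# Sketch: evaluate s_{p_1}^± and s_{p_2}^± from the formulas above and
# classify the points p_1, p_2 for given (R1, R2, s).
R1, R2, s = 1.0, 2.0, 0.1

sp1_m = (R1 - 2 * np.sqrt(R1 * R2)) / (4 * R2)
sp1_p = (R1 + 2 * np.sqrt(R1 * R2)) / (4 * R2)
sp2_m = (R1 - 2 * np.sqrt(R1 * R2) + 2 * R2) / (4 * R2)
sp2_p = (R1 + 2 * np.sqrt(R1 * R2) + 2 * R2) / (4 * R2)

p1 = "focus-focus" if sp1_m < s < sp1_p else "elliptic-elliptic"
p2 = "focus-focus" if sp2_m < s < sp2_p else "elliptic-elliptic"
print(f"p1 is {p1}, p2 is {p2}")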
§.§.§ The hypersemitoric octagon system
A specific family of examples can be created by taking the toric octagon system constructed in De Meulenaere and Hohloch <cit.> and, instead of perturbing it only to a semitoric system (cf. Section <ref>), more perturbation terms can be added to obtain a family of hypersemitoric systems. To be more precise, let F=(J, H) be as in Section <ref> and modify H to H_t with t = (t_1, t_2, t_3, t_4) ∈^4 via setting
H_t := (1 - 2t_1)H + ∑_i=1^4 t_iγ_i,
with
γ_1([z]) := 1/50ℜ( z̅_2z̅_3z̅_4z_6z_7z_8),
γ_2([z]) := 1/50z_5^4z_4^4,
γ_3([z]) := 1/50z_4^4z_7^4,
γ_4([z]) := 1/50z_5^4z_7^4.
Gullentops and Hohloch <cit.> proved the appearance of flaps and pleats/swallowtails and their collisions for certain values of the parameter t, see for example Figure <ref>. Moreover, they studied the shape and topology for hyperbolic-regular fibres in the system (J, H_t) and showed that, for fibres over a hyperbolic-regular value, not only double tori (`two tori stacked on top of each other' resp. a figure eight loop times S^1) are possible, but that the number of `tori stacked on top of each other' possibly appearing as fibre of a hyperbolic-regular value is bounded from above by 13.
|
http://arxiv.org/abs/2307.03999v1 | 20230708154620 | Transport properties in gapped graphene through magnetic barrier in a laser field | [
"Rachid El Aitouni",
"Miloud Mekkaoui",
"Ahmed Jellal",
"Michael Schreiber"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
[email protected]
Laboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali University, PO Box 20, 24000 El Jadida, Morocco
Canadian Quantum Research Center,
204-3002 32 Ave Vernon, BC V1T 2L7, Canada
Institut für Physik, Technische Universität, D-09107 Chemnitz, Germany
We study the transport properties of Dirac fermions in gapped graphene across a magnetic barrier irradiated by a time-oscillating laser field. We use Floquet theory and the solution of Weber's differential equation to determine the energy spectrum in the three regions composing the system. The boundary conditions and the transfer matrix approach are employed to determine explicitly the transmission probabilities of the multiple energy bands and the associated conductance. As an illustration, we focus only on the first three bands: the central band T_0 (zero photon exchange) and the two first side bands T_±1 (photon emission or absorption). It is found that the laser field activates the transmission process through photon exchange. Furthermore, we show that varying the incident angle and the energy gap strongly affects the transmission process. The conductance increases when the number of electrons crossing the barrier increases, namely when the transmission is significant.
78.67.Wj, 05.40.-a, 05.60.-k, 72.80.Vp
Keywords: Graphene, laser field, magnetic field, energy gap, transmission, Klein effect, conductance.
Transport properties in gapped graphene through magnetic barrier in a laser field
Michael Schreiber
August 12, 2023
===================================================================================
§ INTRODUCTION
Graphene is a two-dimensional carbon-based material, one atom thick, whose atoms are arranged in a hexagonal honeycomb structure <cit.>. Graphene has remarkable properties, such as a very high mobility <cit.>, electrons moving with a speed only 300 times lower than the speed of light, a good conductivity (minimal in the vicinity of the Dirac points, where fermions always pass), flexibility <cit.> and great hardness <cit.>.
Owing to these properties, graphene is becoming a widely used material in the technological industries <cit.>.
It is studied theoretically in the framework of the tight-binding model <cit.>, and as a result the energy spectrum shows a linear dispersion relation. In addition, the energy bands touch at six points <cit.>, called Dirac points K (K'), and form cones around them. Remarkably, electrons can pass easily from the valence band to the conduction band without any excitation. This lack of an excitation gap constitutes, in fact, an obstacle and a challenge for the fabrication of graphene-based devices. Consequently, to control the passage of electrons, an energy gap should be created between the two bands. Several studies have addressed this issue, for instance by deforming graphene to generate pseudo-magnetic fields that play the role of a real magnetic field <cit.>, or by stacking one graphene layer on another <cit.>.
On the other hand, fermions confined in graphene under barriers, at normal incidence, can cross them even if their energy is less than the barrier heights, an effect known as the Klein paradox <cit.>.
For an oscillating potential over time, the energy spectrum acquires sub-bands, generating several transmission modes, and each mode corresponds to an energy band <cit.>.
Furthermore, a magnetic field applied to graphene generates a quantized energy spectrum known as Landau levels <cit.>. Combining these with the oscillating potential gives rise to a current density in the x- and y-directions <cit.>. When the graphene is irradiated by a time-varying laser field,
subbands emerge in the energy spectrum, and then the barrier exchanges photons with the fermions, generating infinite transmission modes <cit.>. As a consequence, the laser field suppresses the Klein effect, which makes it possible to control the passage of fermions.
We investigate how Dirac fermions can cross gapped graphene subjected to a magnetic barrier and irradiated by a laser field. Within the framework of Floquet theory <cit.> and by using the solution of Weber's differential equation <cit.>, we determine the eigenspinors corresponding to each region composing the system. These are matched at the boundaries and cast in matrix form by applying the transfer matrix approach, to finally get the transmission coefficients for all energy bands. Then, with the help of the current density, we derive the transmission probabilities for all modes.
The conductance is also calculated by integrating the total transmission over all incident angles.
Since it is not easy to treat all modes numerically, we limit our study to the first three bands, which are the central band (l=0) and the two first side bands (l =±1). We show that increasing the barrier width, or the incidence energy, decreases the transmissions, which implies that the number of electrons that cross the barrier decreases, consequently, the conductance decreases. On the other hand, when the intensity of the laser field increases, we observe that the transmissions decrease, but they increase as long as its frequency increases. When the barrier width increases, it is found that the resonance peaks appear, and their number increases. Another set of results shows that the transmissions are almost zero when the incidence energy is less than the energy gap, and the Klein paradox is still present.
This paper is organized as follows. In Sec. <ref>, we present the Hamiltonian describing our system and we will solve the eigenvalue equations to determine the wave functions in the three regions. We use the boundary conditions and the matrix formalism to express the transmission probabilities of each band, and we calculate the integral of this total transmission which makes it possible to determine the conductance at zero temperature in Sec. <ref>. We discuss our numerical results in Sec. <ref>. Finally, we conclude our work.
§ THEORETICAL MODEL
We study the behavior of Dirac fermions in a graphene sheet divided into three regions. Regions 1 and 3 contain only pristine graphene, whereas the gapped region 2 of width d is subjected to a perpendicular magnetic field and irradiated by a laser field, as shown in Fig. <ref>.
The present system can be described the following Hamiltonian
H= v_F σ⃗·[p⃗-e/c(A⃗_L(t)+A⃗_B(x))]+Δσ_z
where σ_x,y,z are the Pauli matrices, v_F≈ c/300 is the Fermi velocity, p⃗=-iħ(∂/∂ x,∂/∂ y) the momentum operator, e the electronic
charge. The vector potential
A⃗_L(t) of the laser field in the dipole approximation <cit.> is generated by an electric field of amplitude F and frequency ω defined as E(t)=Fsin(ω t), which is given by
A⃗_L(x,y,t)=(0,A_0cos(ω t),0)
with the laser field amplitude A_0=F/ω. For the magnetic field, the vector potential A⃗_B( x) is chosen in the Landau gauge B(0,x,0) and the continuity allows us to write
A⃗_B(x)= {[ 0, x<0; Bx, 0<x<d; Bd, x>d. ].
To determine the eigenspinors Ψ(x,y,t)=(Ψ_1, Ψ_2)^T in the three regions, we solve the eigenvalue equation, with T standing for transpose. In region 2 (0<x<d), we get
ΔΨ_1(x,y,t) + v_F[p_x-i(p_y-eF/ωcos(ω t)-eBx)]Ψ_2(x,y,t)=iħ∂/∂ tΨ_1(x,y,t)
v_F[p_x+i(p_y-eF/ωcos(ω t)-eBx)]Ψ_1(x,y,t)-ΔΨ_2(x,y,t)=iħ∂/∂ tΨ_2(x,y,t)
To proceed further, note that in the framework of the Floquet approximation <cit.>, the oscillation of the laser field over time produces several energy modes in the eigenspinors. As a result, we have
Ψ(x,y,t)=ψ(x,y,t)e^-iEt/ħ
where E is the Floquet quasi-energy, ψ(x,y,t) is a time-periodic function satisfying ψ(x,y,t+t_0)=ψ(x,y,t), and t_0 is the period of the laser field. On the other hand, since the Hamiltonian is invariant along the y-direction, we write Ψ(x,y,t)=e^ik_yye^-iEt/ħφ(t)(ϕ_1(x),ϕ_2(x))^T, and therefore (<ref>,<ref>) become
v_F[-i∂/∂ x-i(k_y-F/ωcos(ω t)-Bx)]ϕ_2(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t-Δ)ϕ_1(x)φ(t)e^ik_yye^-iEt
v_F[-i∂/∂ x+i(k_y-F/ωcos(ω t)-Bx)]ϕ_1(x)φ(t)e^ik_yye^-iEt = (i∂/∂ t+Δ)ϕ_2(x)φ(t)e^ik_yye^-iEt
in the unit system where ħ=e=c=1. It is straightforward to find
-iF/ωcos(ω t)φ(t)=∂/∂ tφ(t)
and therefore the temporal component is
φ(t)=e^-iαsin(ω t), with α=F/ω^2.
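This exponential of a sine admits a Fourier expansion in harmonics of ω whose coefficients are Bessel functions (the Jacobi–Anger identity used just below); a quick numerical check (a sketch, with arbitrary α and ω):

import numpy as np
from scipy.special import jv

# Sketch: verify exp(-i*alpha*sin(w*t)) = sum_m J_m(alpha) exp(-i*m*w*t)
# by truncating the sum at |m| <= M.
alpha, w = 1.3, 2.0          # arbitrary illustrative values
t = np.linspace(0.0, 3.0, 7)
M = 40                       # truncation order

lhs = np.exp(-1j * alpha * np.sin(w * t))
rhs = sum(jv(m, alpha) * np.exp(-1j * m * w * t) for m in range(-M, M + 1))
print(np.max(np.abs(lhs - rhs)))   # at machine-precision level for this truncation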
Now, we use the Jacobi–Anger identity e^-iαsin(ω t)=∑_-∞^+∞J_m(α)e^-imω t to write (<ref>,<ref>) as
∂ϕ_2(x)/∂ x-[x/ℓ_B^2-k_y+mϖ]ϕ_2(x)-i (ε+mϖ-δ)ϕ_1(x)=0
∂ϕ_1(x)/∂ x+[x/ℓ_B^2-k_y+mϖ]ϕ_1(x)-i (ε+mϖ+δ)ϕ_2(x)=0
where ℓ_B=1/√(B), ϖ=ω/v_F, F̃=F/v_F, ε=E/v_F and δ=Δ/v_F. From
(<ref>,<ref>), we obtain two new decoupled equations
∂^2ϕ_1(x)/∂ ^2 x+[1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2-δ^2]ϕ_1(x) = 0
∂^2ϕ_2(x)/∂ ^2 x+[-1/ℓ_B^2-(x/ℓ_B^2-k_y+mϖ)^2+(ε+mϖ)^2-δ^2]ϕ_2(x) = 0.
These can be expressed in terms of the Weber differential equations <cit.> by making the change of variable X_m=√(2)(x/ℓ_B-k_yℓ_B+mϖℓ_B) and setting v_m=[(εℓ_B+mϖℓ_B)^2-(δℓ_B)^2]/2, to get
d^2ϕ_1,2(X_m)/dX_m^2+[±1/2-X^2_m/4 +v_m]ϕ_1,2(X_m)=0
having the following solutions
ϕ_1(X_m) = A_mD_v_m(X_m)+B_mD_v_m(-X_m)
ϕ_2(X_m) = -i√(2 )/εℓ_B+mϖℓ_B+δℓ_B[ A_mD_v_m+1(X_m)-B_mD_v_m+1(-X_m)]
where A_m, B_m are constant coefficients corresponding to mth side-band, and D_v_m is the parabolic cylinder function. Consequently, the eigenspinors in region 2 take the form
Ψ_2(x,y,t)=e^ik_yy∑_l=-∞^+∞[A_l[ Ξ^+_l(x); η^+_l(x) ]
+B_l[ Ξ^-_l(x); η^-_l(x) ]]∑_m=-∞^+∞J_m(α)e^-i(ε+(l+m)ω)t
and we have defined
Ξ^±(x) = D_v_m(± X_m)
η^±(x) = ∓i√(2)/εℓ_B+m ϖℓ_B+δℓ_B D_v_m+1(± X_m).
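The parabolic cylinder functions entering Ξ^± and η^± are available numerically, e.g. through SciPy; the following sketch evaluates the region-2 components for one mode m (all parameter values are illustrative placeholders, not fitted to a device):

import numpy as np
from scipy.special import pbdv

# Sketch: evaluate Xi^±(x) and eta^±(x) for a single mode m,
# using D_v(x) = pbdv(v, x)[0].
lB, eps, delta, varpi, ky = 1.0, 3.0, 1.0, 1.0, 0.5
m, x = 0, 0.4

Xm = np.sqrt(2.0) * (x / lB - ky * lB + m * varpi * lB)
vm = ((eps * lB + m * varpi * lB) ** 2 - (delta * lB) ** 2) / 2.0

Dv_p, _ = pbdv(vm, Xm)            # D_{v_m}(+X_m)
Dv_m_, _ = pbdv(vm, -Xm)          # D_{v_m}(-X_m)
Dv1_p, _ = pbdv(vm + 1.0, Xm)     # D_{v_m+1}(+X_m)
Dv1_m, _ = pbdv(vm + 1.0, -Xm)    # D_{v_m+1}(-X_m)

pref = -1j * np.sqrt(2.0) / (eps * lB + m * varpi * lB + delta * lB)
Xi_plus, Xi_minus = Dv_p, Dv_m_
eta_plus, eta_minus = pref * Dv1_p, -pref * Dv1_m
print(Xi_plus, eta_plus)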
In the region 1 (x<0) we have only pristine graphene, and then we can easily obtain the associated eigenspinors and eigenvalues <cit.>
Ψ_1(x,y,t)=e^ik_yy∑_m=-∞^+∞[δ_l,0[ 1; Λ_l ]e^ik_lx+∑_m,l=-∞^+∞r_l[ 1; -Λ^*_l ]e^-ik_lx]δ_m,le^-iv_F( ε+mϖ)t
ε+lϖ=s_l√(k^2_l+k^2_y)
where r_l is the amplitude of the reflected wave corresponding to band l, δ_m,l=J_m-l(α=0), s_l=sgn(v_Fε+lv_Fϖ),
ϕ_l=tan^-1k_y/k_l,
k_l=(ε+lϖ)cosϕ_l,
k_y=(ε+lϖ)sinϕ_l and
Λ_l=s_lk_l+ik_y/√(k^2_l+k^2_y)=s_le^iϕ_l.
We can establish
the relation between the incident angles
ϕ_l=arcsin(ε/ε+lϖsin(ϕ_0)).
In region 3 (x>d), the emergent angle ϕ'_l is different than the incident one ϕ_0 because of the continuity of the vector potential. The solution is <cit.>
Ψ_3(x,y,t)=e^ik_yy∑_m,l=-∞^+∞[t_l[ 1; Λ'_l ]e^ik'_lx+b_l[ 1; -Λ'^*_l ]e^-ik'_lx]δ_m,le^-iv_F(ε+mϖ)t
ε+lϖ =s_l√(k_l^'2+(k_y- d/ℓ_B^2)^2)
where t_l is the amplitude of the transmitted wave corresponding to the band l, b_l vanishes since there is no left-moving wave in region 3,
ϕ'_l=tan^-1ky- d/ℓ_B^2/k'_l,
k'_l=(ε+lϖ)cosϕ'_l,
k_y=(ε+lϖ)sinϕ'_l+d/ℓ_B^2
and
Λ'_l=s_lk'_l+i(k_y-d/ℓ_B^2)/√(k_l^'2+(k_y-d/ℓ_B^2)^2)=s_le^iϕ'_l.
From the conservation of the momentum k_y, we get the relation
ϕ'_l=arcsin(ε/ε+l ϖsinϕ_0- d/ℓ_B^2/ε+lϖ).
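Before matching the spinors, it is instructive to check which side bands are propagating; a small sketch of the kinematic relations above (illustrative parameters only):

import numpy as np

# Sketch: angles and wave vectors of the bands l = -1, 0, 1 on both sides
# of the barrier, from the relations above.
eps, varpi, d, lB = 12.0, 1.0, 1.5, 1.0
phi0 = np.deg2rad(20.0)

for l in (-1, 0, 1):
    el = eps + l * varpi
    sin_l = eps * np.sin(phi0) / el                   # sin(phi_l)
    sin_lp = (eps * np.sin(phi0) - d / lB**2) / el    # sin(phi'_l)
    if abs(sin_l) > 1 or abs(sin_lp) > 1:
        print(f"l = {l:+d}: evanescent channel")
        continue
    phi_l, phi_lp = np.arcsin(sin_l), np.arcsin(sin_lp)
    k_l, k_lp = el * np.cos(phi_l), el * np.cos(phi_lp)
    print(f"l = {l:+d}: phi_l = {np.rad2deg(phi_l):6.2f} deg, "
          f"phi'_l = {np.rad2deg(phi_lp):6.2f} deg, k_l = {k_l:.3f}, k'_l = {k_lp:.3f}")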
As we will see, the above results can be used to study the transport properties of gapped graphene scattered by a magnetic barrier and irradiated by a laser field. We obtain the transmissions associated with several energy bands and the corresponding conductance.
§ TRANSMISSION PROBABILITIES
We use the continuity of the eigenspinors at x=0 and x =d to
determine the transmission probabilities for the present system. This corresponds to the processes
Ψ_1(0,y,t)=Ψ_2(0,y,t) and Ψ_2(d,y,t)=Ψ_3(d,y,t),
which yields
δ_m,0+r_m=∑_l=-∞^+∞(A_lΞ^+_l(0)+B_lΞ^-_l(0))J_m-l(α)
δ_m,0Λ_m-r_mΛ_m^*=∑_l=-∞^+∞(A_lη^+_l(0)+B_lη^-_l(0))J_m-l(α)
t_me^ik'_md+b_me^-ik'_md=∑_l=-∞^+∞(A_lΞ^+_l(d)+B_lΞ^-_l(d))J_m-l(α)
t_mΛ^'_me^ik'_md-b_mΛ_m^'*e^-ik'_md=∑_l=-∞^+∞(A_lη^+_l(d)+B_lη^-_l(d))J_m-l(α).
We have four equations, but each one has an infinite number of modes, and to solve the problem, we use the transfer matrix approach. As a result, we get
[ Υ_1; Υ'_1 ]
=[ ℕ_1,1 ℕ_1,2; ℕ_2,1 ℕ_2,2 ][ Υ_2; Υ'_2 ]=ℕ[ Υ_2; Υ'_2 ]
with
ℕ=[ 𝕀 𝕀; Γ^+ Γ^-; ]^-1[ 𝕏^+_0 𝕏^-_0; ℝ^+_0 ℝ^-_0 ][ 𝕏^+_d 𝕏^-_d; ℝ^+_d ℝ^-_d ]^-1[ 𝕀 𝕀; Γ'^+ Γ'^-; ][ 𝕂^+ 𝕆; 𝕆 𝕂^-; ]
and
Γ^±=±δ_m,lΛ_l^±1, Γ'^±=±δ_m,lΛ_l^'±1, 𝕏^±_z=Ξ_l^±(z)J_m-l(α), ℝ^±_z=η_l^±(z)J_m-l(α), 𝕂^±=e^± ik'_ldδ_m,l
where 𝕆 is the zero matrix, 𝕀 is the unit matrix and z={0,d}.
In this case, we take into account Dirac fermions traveling from left to right with energy E, and from (<ref>), we obtain
Υ_2=ℕ^-1_1,1Υ_1
where Υ_1=(δ_0,l) contains the Kronecker coefficients and Υ_2=(t_l) the transmission amplitudes.
Because m and l range from -∞ to +∞, the above transfer matrix is of infinite order and is challenging to handle. For this reason, we replace the infinite series by a finite set of terms ranging from -N to N, provided that N≥F/ω^2 <cit.>, resulting in
t_-N+k=ℕ'[k+1,N+1]
where ℕ'=ℕ^-1_11, k=0, 1, 2,⋯ N.
To simplify, we limit our studies only to the central band and the first two side bands l=0,± 1 of energy E± hω having the following transmission coefficients
t_-1=ℕ'[1,2],
t_0=ℕ'[2,2],
t_1=ℕ'[3,2].
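The truncation bookkeeping can be summarized as follows (a sketch; here ℕ is replaced by a random placeholder matrix standing in for the numerically assembled transfer matrix):

import numpy as np

# Sketch: once the transfer matrix N of size 2(2Nmax+1) has been assembled,
# the transmission amplitudes are entries of the inverse of its upper-left block.
Nmax = 1                       # three bands: l = -1, 0, +1
dim = 2 * Nmax + 1
rng = np.random.default_rng(0)
N = rng.standard_normal((2 * dim, 2 * dim)) + 1j * rng.standard_normal((2 * dim, 2 * dim))

Nprime = np.linalg.inv(N[:dim, :dim])                     # N' = (N_{1,1})^{-1}
t = {(-Nmax + k): Nprime[k, Nmax] for k in range(dim)}    # t_{-Nmax+k} = N'[k+1, Nmax+1]
print(t[-1], t[0], t[1])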
On the other hand, the current density is determined from the continuity equation, and its expression is J=e v_F Ψ^*σ_xΨ. Therefore, the incident, reflected and transmitted current densities are given by
J_inc,0=ev_F(Λ_0+Λ^*_0)
J_tra,l=ev_Ft^*_lt_l(Λ'_l+Λ'^*_l)
J_ref,l=ev_Fr^*_lr_l(Λ_l+Λ^*_l)
The relation between the current density and the transmission probability is expressed as T_l=J_tra,l/J_inc,0. Then, after some algebra, we get
T_l=cosϕ'_l/cosϕ_0|t_l|^2
and the total transmission probability is given by summing up over all modes
T=∑_lT_l.
By definition, the conductance at zero temperature is the average of the fermion flux over the half Fermi surface <cit.>; equivalently, it is the integral of the total transmission T over k_y <cit.>, given by
G=G_0/2π∫_-k_y^max^k_y^maxT dk_y
where G_0 is the conductance unit.
Using the relation between the transverse wave vector k_y and the incident angle ϕ_0, we express G as
G=G_0/2π∫_-π/2^π/2T cosϕ_0dϕ_0.
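Numerically, G follows from a simple quadrature over the incidence angle; a sketch with a placeholder transmission profile (the actual T(ϕ_0) comes from the transfer-matrix computation above):

import numpy as np
from scipy.integrate import quad

# Sketch: G/G0 = (1/2pi) * integral of T(phi0) cos(phi0) over (-pi/2, pi/2).
# T_total below is a placeholder profile, not the computed transmission.
def T_total(phi0):
    return 0.8 * np.cos(phi0) ** 2

val, _ = quad(lambda p: T_total(p) * np.cos(p), -np.pi / 2, np.pi / 2)
print(val / (2 * np.pi))       # G in units of G0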
To investigate and underline the basic features of the present system, we numerically analyze the transport properties based on the transmission channels and the associated conductance in the following section.
§ RESULTS AND DISCUSSION
We numerically study the transmission probabilities of Dirac fermions in gapped graphene through a magnetic barrier in a laser field. Recall that the oscillation of the barrier over time generates several energy bands, which give rise to transmission channels. Due to the difficulty of analyzing all modes, we will limit ourselves to the first three bands, where the central band T_0 corresponds to zero photon exchange and the first two side bands T_±1 to absorption or emission of photons.
Fig. <ref> shows the transmission probability as a function of the energy εℓ_B for different incident angles. There is transmission if the condition ε >(d/ℓ_B^2-l ϖ)/(1+sinϕ_0)
is satisfied; in other words, this quantity plays the role of an effective mass <cit.>. For normal incidence, as depicted in Fig. <ref>, the transmission is zero for ε<δ. Due to this condition, resonance peaks appear with decreasing amplitudes along the εℓ_B-axis, that is to say the Fabry-Pérot resonances disappear, in agreement with previous results <cit.>. The transmission process with zero photon exchange, T_0, dominates, and therefore the majority of the electrons cross the barrier without photon exchange.
Fig. <ref> shows the behavior of T_0 for different incident angles. As a result, in Fig. <ref> it increases
sharply away from the normal incidence. On the other hand, the transmission with photon exchange, shown in Figs. <ref> and <ref>, decreases at large energy.
We can conclude that the behavior of T_0 changes if we move away from the normal incidence and that
the photon exchange process is suppressed.
Fig. <ref> displays the transmission probability as a function of εℓ_B under a suitable choice of physical parameters. Transmissions appear when the condition ε >δ is satisfied. As clearly seen in Fig. <ref>, we observe the dominance of T_0 compared to the first two side bands, and it is almost equal to the total transmission as
found
in <cit.>. Now, for different values of F̃ℓ_B^2, we plot T_0 in Fig. <ref>. We see that T_0 decreases as F̃ℓ_B^2 increases, because increasing the laser field amplitude suppresses T_0, as we have already seen <cit.>.
Fig. <ref> displays the effect of field frequency on transmission: increasing the frequency increases T_0.
Fig. <ref> is drawn for different values of barrier width d/ℓ_B. If this increases, resonance peaks appear and their number increases, and the
oscillations get closer. A similar result is obtained in our previous work <cit.>.
Fig. <ref> presents the transmission probabilities as a function of the energy gap δℓ_B.
We show in Fig. <ref> the total transmission probability (magenta line) and those with or without photon exchange.
We distinguish two interesting cases: first, for δℓ_B<6, the Klein effect is very clear and transmission with photon exchange is almost zero, that means that the majority of electrons cross the barrier without photon exchange. Second, for δℓ_B > 6, the transmissions decrease in an oscillatory way until they become zero when δℓ_B is close to εℓ_B=15.
Fig. <ref> displays the total transmission for different values of F̃ℓ_B, and we see that the increase of F̃ℓ_B suppresses the transmission, as has been found in <cit.>. The Klein effect is clear for very small values of F̃ℓ_B and δℓ_B. For F̃ℓ_B=0.3, the Klein effect is observed only for δℓ_B<6, then the transmission decreases in an oscillatory way until the oscillations vanish. If we increase F̃ℓ_B the transmission keeps the same shape with decreasing amplitude, which is in agreement with the results of <cit.>.
Fig. <ref> is similar to the previous one, but here we vary ϖℓ_B. As a result, for ϖℓ_B=1 the Klein effect always exists up to ϖℓ_B=5, then the transmission decreases in an oscillatory way towards zero near εℓ_B. On the other hand, there will be total reflection if the incident energy is lower than the energy gap.
If the frequency decreases, the transmission retains the same shape, but the amplitude decreases. Fig. <ref> shows the effect of the barrier width on the total transmission. We observe that resonance peaks appear when the width increases. For very small widths, the Klein effect is found up to δℓ_B ≈ 6, and then the transmission decreases towards zero. Increasing the width increases the number of oscillations and their amplitudes, as already seen in <cit.>. We summarize that increasing the amplitude of the field suppresses transmission inside the barrier. On the other hand, increasing the frequency increases the transmission, and increasing the width increases the number of oscillations and their amplitude.
Fig. <ref> shows the transmission probabilities as a function of the barrier width d/ℓ_B. In Fig. <ref> we observe that all the transmissions have sinusoidal behavior. The total transmission oscillates in the vicinity of one (Klein paradox). T_0 is predominant and its oscillation amplitude decreases when the width increases. The transmissions with photon exchange also oscillate, but with phase shift, which increases along the d/ℓ_B-axis. For certain values of d/ℓ_B, the transmissions with or without photon exchange are equal.
Fig. <ref> displays transmission with photon emission for different values of the transverse wave vector k_yℓ_B. There is always a sinusoidal behavior with increasing amplitude along the d/ℓ_B-axis. When k_yℓ_B increases, the width of the oscillations decreases.
In Fig. <ref>, we show the effect of the laser field frequency on transmission. We notice that the amplitude and period of oscillations decrease as the frequency increases. Thus, the increase in frequency suppresses the transmissions with photon exchanges.
We vary the intensity of the laser field F̃ℓ_B^2 in Fig. <ref> and observe that the transmission is oscillating with the same period. We notice that the increase in F̃ℓ_B^2 causes an increase in transmission with photon exchange and decreases that of the central band.
In Fig. <ref>, we plot the conductance as a function of the energy εℓ_B. Choosing different values of width d/ℓ_B, Fig. <ref> reveals that the conductance varies almost exponentially for lower values of d/ℓ_B, and oscillates when d/ℓ_B increases.
Fig. <ref> shows the effect of intensity F̃ℓ_B^2 of the laser field on conductance. We observe that conductance increases as F̃ℓ_B^2 increases, but it vanishes when ε→δ.
Fig. <ref> is plotted for different values of frequency ϖℓ_B. We notice that the conductance tends to zero when εℓ_B is close to δℓ_B and the oscillations increase as ϖℓ_B increases.
In Fig. <ref>, we vary δℓ_B to observe that the conductance is always almost zero when ε tends towards δ.
Finally, to increase the conductance, it is necessary to increase the number of electrons crossing the barrier, thereby increasing the transmission. As we have seen, the transmission increases when the incident energy increases or the barrier width decreases, as well as when the intensity of the laser field decreases or its frequency increases.
In Figure <ref>, the conductance is represented as a function of the energy gap δℓ_B. By choosing three values of incident energy in Fig. <ref>, we show that the conductance is maximum at the beginning, then decreases in an oscillatory way towards zero near the value δ =ε. The amplitude increases when incident energy increases as well, exhibiting a behavior similar to transmission as we have seen before.
Fig. <ref> shows the effect of width d/ℓ_B on the conductance. There are always resonance peaks that appear around δℓ_B=3, the number of oscillations increases with the increase of d/ℓ_B. In Figs. <ref> and <ref>, we visualize the effect of the laser field parameters on the conductance. They show that the amplitude of the conductance increases with the increase in frequency, and decreases when the amplitude increases.
§ CONCLUSION
We studied the effect of a gapped magnetic barrier irradiated by a laser field, generated by an electric field of amplitude F and frequency ω, on Dirac fermions in graphene. We started with the solution of the eigenvalue equations to determine the spinors in the three regions of the gapped sheet. We used the Floquet theory and the solution of Weber's differential equation to determine the eigenspinors corresponding to each region as combinations of parabolic cylinder functions. Then we employed the boundary conditions, which give four equations, each containing infinitely many modes. To solve them, we used the transfer matrix approach, which yields a matrix of infinite order that is difficult to handle. For simplicity, we focused only on the first three bands: the central band corresponds to l=0 and the two first side bands correspond to l=±1. Lastly, we integrated the total transmission probability to obtain the conductance at zero temperature.
When a barrier oscillates in time, it generates several energy bands, namely the photon exchange between the barrier and the Dirac fermions. Here we found that the transmission process with zero photon exchange is much more important than the process with photon exchange. Klein's paradox is still present, but we can suppress it. As we know, the original Klein effect is only observed at normal incidence (ϕ_0=0), but in this work this effect is also observed at non-normal incidence. When the barrier width is increased, the transmission decreases until it disappears at a critical width, and the same happens for the conductance. On the other hand, the transmission increases when the incident energy increases. However, to have transmission, it is necessary to satisfy the condition that binds the incident energy to the other barrier parameters: ε >(d/ℓ_B^2-l ϖ)/(1+sinϕ_0). Since the conductance exists only if the transmission is non-zero, this last condition must always be satisfied.
9
Novoselov2004
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva, and A. A. Firsov, Science 306, 666 (2004).
Novoselov2005
K. S. Novoselov, A. K. Geim, S. V. Morozov, D. Jiang, M. I. Katsnelson, I. V. Grigorieva, S. V. Dubonos, and A. A. Firsov, Nature 438, 197 (2005).
mobil2
S. Morozov, K. Novoselov, M. Katsnelson, F. Schedin, D. Elias, J. Jaszczak, and A. Geim, Phys. Rev. Lett. 100, 016602
(2008).
mobil
K. I. Bolotin, K. J. Sikes, Z. Jiang, M. Klima, G. Fudenberg, J. Hone, P. Kim, and H. L. Stormer, Solid State Commun.
146, 351 (2008).
flix
C. Lee, X. Wei, J. W. Kysar, and J. Hone, Science 321, 385 (2008).
Beenakker2008
C. W. Beenakker, Rev. Mod. Phys. 80, 1337 (2008).
Bhattacharjee2006
S. Bhattacharjee and K. Sengupta, Phys. Rev. Lett. 97, 217001 (2006).
Bunch2005
J. S. Bunch, Y. Yaish, M. Brink, K. Bolotin, and P. L. McEuen,
Nano Lett. 5, 2887 (2005).
Berger2004
C. Berger, Z. M. Song, T. B. Li, X. B. Li, A. Y. Ogbazghi, R. Feng,
Z. T. Dai, A. N. Marchenkov, E. H. Conrad, P. N. First, and W. A. de Heer, J.
Phys. Chem. B 108, 19912 (2004).
Tight
S. Reich, J. Maultzsch, C. Thomsen, and P. Ordejon, Phys. Rev. B 66, 035412 (2002).
Castro2009
A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
propr
N. M. R. Peres, J. Phys.: Condens. Matter 21, 323201 (2009).
def1
F. Guinea, M. I. Katsnelson, and A. K. Geim, Nat. Phys. 6, 30 (2010).
def4
G.-X. Ni, Y. Zheng, S. Bae, H. R. Kim, A. Pachoud, Y. S. Kim, C.-L. Tan, D. Im, J.-H. Ahn, B. H. Hong, and B. Ozyilmaz, ACS Nano 6, 1158 (2012).
scatring
S. Latil and L. Henrard, Phys. Rev. Lett. 97, 036803 (2006).
Morozov2005
S. V. Morozov, K. S. Novoselov, F. Schedin, D. Jiang, A. A. Firsov, and A. K. Geim, Phys. Rev. B 72, 201401 (2005).
klien2
M. I. Katsnelson, K. S. Novoselov, and A. K. Geim, Nat. Phys. 2, 620 (2006).
jellal2014
A. Jellal, M. Mekkaoui, E. B. Choubabi, and H. Bahlouli, Eur. Phys. J. B 87, 123 (2014).
conmagnetic
A. De Martino, L. Dell’Anna, and R. Egger, Phys. Rev. Lett. 98, 066802 (2007).
Landau
F. Xu and L. Zhang, Chin. Phys. B 28, 117403 (2019).
Magnetic2011
M. O. Goerbig, Rev. Mod. Phys. 83, 1193 (2011).
confinementmagnetic
N. Myoung and G. Ihm, Physica E 42, 70 (2009).
Elaitouni2022
R. El Aitouni and A. Jellal, Phys. Lett. A 447, 128288 (2022).
biswas2013
R. Biswas and C. Sinha, Appl. Phys. 114, 183706 (2013).
biswas2012
C. Sinha and R. Biswas, Appl. Phys. Lett. 100, 183107 (2012).
laser2
M. Ahsan Zeb, K. Sabeeh, and M. Tahir, Phys. Rev. B 78, 165420 (2008).
rachid2022
R. El Aitouni, M. Mekkaoui, A. Jellal, Ann. Phys. (Berlin) 535, 2200630 (2023).
floquetappr
Z. Gu, H. A. Fertig, D. P. Arovas, and A. Auerbach, Phys. Rev. Lett. 107, 216601 (2011).
grad
I. S. Gradshteyn, I. M. Ryzhik, Table of Integrals, Series, and Products (Academic Press, Inc. New York, 1980).
approx
R. Loudon, The Quantum Theory of Light (3rd ed, Oxford University Press, New York, 2000).
math
F. W. J. Olver, J. Res. Nat. Bur. Standards Sect. B 63, 131 (1959).
conduct1
X. Chen and J. W. Tao, Appl. Phys. Lett. 94, 262102 (2009).
conduct2
M. R. Masir, P. Vasilopoulos, and F. M. Peeters, Phys.
Rev. B 79, 035409 (2009).
Biswas2021
R. Biswas and C. Sinha, Sci. Rep. 11, 2881 (2021).
biswas2016
R. Biswas, S. Maitty, and C. Sinha, Physica E. 84, 235 (2016).
Mekkoui2021
M. Mekkaoui, A. Jellal, and H. Bahlouli, Solid State Communi.
358, 114981 (2022).
Sergy2011
S. E. Savel’ev and A. S. Alexandrov, Phys. Rev. B 84, 035428 (2011).
MEKKAOUI2018
M. Mekkaoui, R. El Kinani, and A. Jellal, Mater. Res. Expr. 6, 085013 (2019).
Makkoui2015
H. Chnafa, M. Mekkaoui, A. Jellal, and A. Bahaoui, Physica E 148, 115645 (2023).
|
http://arxiv.org/abs/2307.03979v1 | 20230708140755 | Attacking (EC)DSA scheme with ephemeral keys sharing specific bits | [
"M. Adamoudis",
"K. A. Draziotis",
"D. Poulakis"
] | cs.CR | [
"cs.CR",
"94A60"
] |
Attacking (EC)DSA scheme with ephemeral keys sharing specific bits
2010 Mathematics Subject Classification: 94A60.
===========================================================================================================================
In this paper, we present a deterministic attack
on the (EC)DSA signature scheme, provided
that several signatures are known such that the corresponding
ephemeral keys share a certain number of bits without knowing
their values. By eliminating the shared blocks of bits between
the ephemeral keys, we get a lattice of dimension equal to the number of signatures which contains a vector encoding the private key. We compute
an upper bound for the distance of this vector from a target vector, and next,
using Kannan's enumeration algorithm, we determine it and hence the secret key.
The attack can be made highly efficient by appropriately selecting
the number of shared bits and the number of signatures.
§ INTRODUCTION - STATEMENT OF RESULTS
In August 1991, the U.S. government's National Institute of
Standards and Technology (NIST) proposed an algorithm for digital
signatures. The algorithm is known as DSA, for Digital Signature
Algorithm <cit.>. It is an efficient
variant of the ElGamal digital signature scheme <cit.>
intended for use in electronic mail, electronic funds transfer,
electronic data interchange, software distribution, data storage,
and other applications which require data integrity assurance and
data authentication. In 1998, an elliptic curve analogue called
Elliptic Curve Digital Signature Algorithm (ECDSA) was proposed
and standardized <cit.>.
§.§ The (EC)DSA Signature Scheme
First, we recall the DSA schemes. The
signer selects a prime p of size between 1024 and 3072 bits with
increments of 1024, as recommended in FIPS 186-3 <cit.>.
Also, he selects a prime q of size 160, 224 or 256 bits, with q|p-1
and a generator g of the
unique order q subgroup G of the multiplicative group 𝔽_p^*
of the prime finite field 𝔽_p. Furthermore, he
selects randomly a ∈{1,…,q-1} and computes R = g^a mod p.
The public key of the signer is (p,q,g,R) and his private
key a. He also publishes a hash function
h : {0,1}^* →{0,…,q-1}.
To sign a message m∈{0,1}^*, he selects randomly
k ∈{1,…,q-1} which is the ephemeral key, and
computes
r = (g^k mod p) mod q and
s = k^-1(h(m)+ar) mod q.
The signature of m is (r,s). The signature is accepted as
valid if and only if the following holds:
r = ((g^s^-1h(m) mod q R^s^-1r mod q) mod p) mod q.
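The signing and verification equations above can be made concrete with a toy implementation over deliberately tiny (and therefore completely insecure) parameters; this is only a sketch of the equations, not a usable scheme:

from hashlib import sha256
from random import randrange

# Toy DSA: q = 101, p = 607 (101 divides p-1), g = 2^((p-1)//q) mod p = 64.
p, q, g = 607, 101, 64

def H(msg: bytes) -> int:
    return int.from_bytes(sha256(msg).digest(), "big") % q

a = randrange(1, q)             # private key
R = pow(g, a, p)                # public key

def sign(msg: bytes):
    while True:
        k = randrange(1, q)     # ephemeral key
        r = pow(g, k, p) % q
        s = (pow(k, -1, q) * (H(msg) + a * r)) % q
        if r and s:
            return r, s

def verify(msg: bytes, r: int, s: int) -> bool:
    w = pow(s, -1, q)
    u1, u2 = (H(msg) * w) % q, (r * w) % q
    return r == (pow(g, u1, p) * pow(R, u2, p) % p) % q

r, s = sign(b"hello")
print(verify(b"hello", r, s))   # True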
Next, let us recall the ECDSA scheme. The signer selects an elliptic curve
E over 𝔽_p, a point P∈ E(𝔽_p) with order a prime
q of size at least 160 bits.
Following FIPS 186-3, the size of p must belong to the set
{160,224,256,512}. Further, he chooses randomly
a ∈{1,…,q-1} and computes Q = aP.
Finally, he publishes a hash
function h : {0,1}^* →{0,…,q-1}.
The public
key of the signer is (E,p,q,P,Q) and his private key a.
To sign a message m, he selects randomly
k ∈{1,…,q-1} which is the ephemeral key and computes
kP = (x,y) (where x and y are regarded as integers between 0 and
p-1).
He computes
r = x mod q and
s = k^-1(h(m)+ar) mod q.
The signature of m is (r,s). The verifier computes
u_1 = s^-1h(m) mod q, u_2 = s^-1r mod q, u_1P+u_2Q = (x_0,y_0).
He accepts the signature if and only if r = x_0 mod q.
§.§ Previous Results
Researchers have explored various attacks on DSA schemes by analyzing the signature equation s= k^-1(h(m)+ar) mod q and using lattice reduction techniques such as LLL and CVP algorithms. One study focused on the use of a linear congruential pseudorandom number generator (LCG) for generating random numbers in DSA <cit.>, showing that combining the DSA signature equations with LCG generation equations can lead to a system of equations that provide the secret key. To recover the secret key, several heuristic attacks have been proposed <cit.> in another study, which assume the revelation of a small fraction of the corresponding nonce k. However, these attacks are based on heuristic assumptions, making it difficult to make precise statements on their theoretical behavior.
The first rigorous lattice attack on (EC)DSA was presented in <cit.>. The authors successfully decreased the security of (EC)DSA to a Hidden Number Problem (HNP), which can then be further reduced to an approximation Closest Vector Problem (CVP) for a specific lattice. The signer's secret key a can be computed using this reduction in polynomial time. The attack was also adapted to the case of ECDSA, as described in <cit.>.
The paper <cit.> describes an attack on DSA schemes that uses the LLL reduction method and requires one message. By computing two short vectors of a three-dimensional lattice, the attack derives two intersecting lines in (a, k), provided that a and k are sufficiently small, and the second shortest vector is sufficiently short. If two messages are available, the same attack can be applied to derive a linear congruence relating to the corresponding ephemeral keys.
The papers <cit.> and <cit.> describe attacks on DSA schemes using the LLL algorithm and one or two messages. In <cit.>, the combination of LLL with algorithms for finding integral points of two classes of conics gives a, provided that
at least one of the sets {a,k^-1 mod q}, {k,a^-1 mod q}, {a^-1 mod q,k^-1 mod q} is sufficiently small.
In <cit.>, the Lagrange Reduction algorithm is applied
on a 2-dimensional lattice defined by a signed message, and provides
two straight lines intersecting at (a, k). Similar attacks can be applied to the pairs (k^-1 mod q, k^-1a mod q) and (a^-1 mod q, a^-1k mod q). If two signed messages are available, the above two attacks can be applied to the equation relating the two ephemeral keys.
The article <cit.> presents an attack using Coppersmith's method to compute the secret key a. The attack works when a and k satisfy a specific inequality, and in this case, the secret key a can be efficiently computed.
The article <cit.> describes an attack that involves constructing a system of linear congruences using signed messages. This system has at most one unique solution below a certain bound, which can be computed efficiently. Thus, if the length of a vector containing the secret and ephemeral keys of a signed message is quite small, the secret key can be computed using the above system. The article <cit.> presents an improved version of this attack.
In <cit.>, the proposed attacks
take advantage using of the bits in the ephemeral key and the Fast Fourier Transform.
In <cit.>, it is shown that, using lattice reduction under some
heuristic assumptions, that partial
information about the nonces of multiple signatures can lead to recovery of the
full private key. The original approach to doing so is based on discrete
Fourier analysis techniques <cit.>.
A very important issue is the
attacks on cryptosystems based on the malicious modification of memory registers. These attacks may affect the randomness of the secret parameters,
and so force certain bits of the ephemeral
key to be equal, without their values being known. In <cit.>,
it is discussed how such attacks could occur in a real-life scenario.
Following the line of research from <cit.>, the authors of <cit.> focus on an attack scenario where ephemeral keys share specific bits, such as the least significant bits (LSB) and/or most significant bits (MSB), either within multiple blocks.
By eliminating the shared blocks
of bits between the ephemeral
keys, a lattice of dimension equal to the number of signatures is provided, which
contains a quite short vector with components that reveal the secret key.
Then, the LLL algorithm is used for the computation of this vector.
Note that these attacks are based on heuristic assumptions.
Later, in <cit.>, the authors further improved upon the attack proposed in <cit.> by providing a probabilistic attack with a success probability approaching 1 when the pair (δ,n) is appropriately selected, where n represents the number of signatures, and δ represents the number of shared bits in the ephemeral keys. This attack relies on a mild assumption regarding the hash function used in (EC)DSA.
§.§ Our Contribution
Our study builds on the research presented in <cit.>, and we present a deterministic attack that, although not always polynomial in complexity, proves to be highly efficient in practical scenarios. Instead of using methods like LLL, approximate, or exact CVP, which were employed in previous attacks, we use enumeration on a suitable lattice to find lattice vectors that are close to a specific target vector. From these solutions, we can readily extract the secret key to the system.
It is important to highlight that the attacks presented in <cit.> rely on heuristics assumptions that aim to force the presence of a vector containing the private key as a solution to the Shortest Vector Problem (SVP) in a relatively large lattice. In <cit.>, the authors provide a probabilistic approach to <cit.>, where an assumption for the hash function is made and the attack is modelled as a Closest Vector Problem (CVP). Due to the computational complexity of finding such a vector using a deterministic algorithm, an approximation algorithm can be used instead.
Our approach takes a different path. We calculate a bound for the distance between the vector of the lattice containing the private key and a target vector. Then, we leverage Kannan's enumeration algorithm to determine this vector and, consequently, extract the secret key. Our experiments demonstrate that the attack can be made highly efficient by appropriately selecting values for δ and n. Finally, we improve the results provided in <cit.>.
§.§ Our results
In the subsequent Theorem, we apply the framework suggested by <cit.>, which presupposes that we have access to a collection of signed messages with ephemeral keys that are shorter than q. These messages have some of their most and least significant bits in common, with a total of δ bits shared.
Suppose we have a (EC)DSA scheme
with a binary length ℓ prime number q and secret key a. Let m_j (j=0,…,n) be
messages signed with this scheme, (r_j,s_j) their signatures, and
k_j = ∑_i=1^ℓ k_j,i 2^ℓ-i (where k_j,i∈{0,1}) are
the corresponding ephemeral keys, respectively.
Set A_j = -r_js_j^-1 mod q.
Suppose that 0< k_j < q (j=0,…,n), and there are integers δ >0 and
0 ≤δ_L≤δ such that the
following conditions hold:
* k_0,i+1 = ⋯ = k_n,i+1 (i=1,…,δ-δ_L,ℓ-δ_L,
…,ℓ-1).
* For i = 0,…,n, set
C_i,j = (A_j-1 -A_i) 2^-δ_L mod q (j=1,…,i), and
C_i,j = (A_j -A_i) 2^-δ_L mod q
(j=i+1,…,n).
The shortest vector of the lattice ℒ_i spanned by the vectors
(2^δ+1q,0,…, 0),…,
(0,…, 0, 2^δ+1q , 0),
(2^δ+1C_i,1, …, 2^δ+1C_i,n, 1)
has length
> (1/2) (2^δ+1q)^n/(n+1).
Then, the secret key a can be computed in
𝒪(2^ℓ-δ n+2n n ( (nℓ)^c 2^𝒪(n)
+ℓ^4 2^n (n+1)^n+1/2))
bit operations, for some c > 0.
By the Gaussian heuristic <cit.>, the length of a shortest vector of the lattice
ℒ is expected to be > q^n/(n+1).
Thus, the hypothesis (2) of Theorem <ref> will very often be
satisfied.
In the above complexity estimate, if ℓ≤δ n, then
the time complexity is polynomial in ℓ.
Roadmap. The paper is structured as follows:
Section 2 presents an auxiliary lemma that will prove crucial in the proof of Theorem <ref>.
Section 3 is dedicated to the proof of Theorem <ref>, providing a detailed explanation and justification.
In Section 4, an attack on (EC)DSA, derived from Theorem <ref>, is presented. Additionally, several experiments are conducted to illustrate the effectiveness of the attack.
Finally, Section 5 concludes the paper, summarizing the main findings and discussing potential avenues for future research.
§ LATTICES
Let ℬ = { b_1, …, b_n}⊂ℝ^n be a basis of ℝ^n.
An n-dimensional lattice spanned by ℬ is the set
ℒ = {z_1 b_1+⋯ +z_n b_n/ z_1,…,z_n ∈ℤ}.
Recall that the scalar product of two vectors 𝐮 = (u_1,…,u_n)
and 𝐯 = (v_1,…,v_n) in ℝ^n is the quantity ⟨𝐮,𝐯⟩ = u_1v_1+⋯ + u_nv_n, and
the Euclidean norm of a vector v = (v_1,…,v_n) ∈ℝ^n is
the quantity
𝐯 = ⟨𝐯,𝐯⟩^1/2 =
(v_1^2+⋯ +v_n^2)^1/2.
The Gram-Schmidt orthogonalisation (GSO) of the basis ℬ
is the orthogonal family
{𝐛_1^⋆,…,𝐛_n^⋆}
defined as follows:
𝐛_i^⋆ =
𝐛_i-∑_j=1^i-1μ_i,j𝐛_j^⋆, with μ_i,j = ⟨𝐛_i,𝐛_j^⋆⟩/‖𝐛_j^⋆‖^2 (j= 1,…,i-1).
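As a concrete aid to the definitions above, the following short Python sketch (ours, not part of the source) computes the Gram-Schmidt orthogonalisation of a basis exactly as in the recurrence for 𝐛_i^⋆ and μ_i,j; the function name and the toy basis are arbitrary choices.

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Return (b_star, mu) following the recurrence for b_i^* and mu_{i,j} above."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    b_star, mu = [], []
    for i, b in enumerate(basis):
        # mu_{i,j} = <b_i, b_j^*> / ||b_j^*||^2 for j < i
        mu_i = [dot(b, b_star[j]) / dot(b_star[j], b_star[j]) for j in range(i)]
        new = [Fraction(x) for x in b]
        for j in range(i):
            new = [new[k] - mu_i[j] * b_star[j][k] for k in range(len(new))]
        b_star.append(new)
        mu.append(mu_i)
    return b_star, mu

# Toy example: the basis {(3, 1), (2, 2)} of a rank-2 lattice in R^2.
print(gram_schmidt([(3, 1), (2, 2)]))
```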
Let L be a lattice. If K is a convex body
in ℝ^n+1 symmetric about the origin, we denote by
λ_i(K,L) (i=1,…,n+1)
the ith successive minimum of K with respect to L,
which is defined as follows:
λ_i(K, L) = inf{λ > 0/ (λ K) ∩ L contains i
linearly independent points}.
Further, we denote by s(L) the length of a shortest vector in L.
Let B_𝐯(R) be the closed ball of center 𝐯 and
radius R in ℝ^n+1 and L a lattice. Then, we have:
|B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1.
Set
𝒟_𝐯(R) =
{𝐱-𝐲/ 𝐱,𝐲∈ B_𝐯(R)}.
Then 𝒟_𝐯(R) is a convex body, symmetric about the
origin, and <cit.> implies:
|B_𝐯(R)∩ L | <
∏_i=1^n+1(1/λ_i(𝒟_𝐯(R),L)+1).
Let 𝐱,𝐲∈ B_𝐯(R). Then, we have:
𝐱-𝐲≤𝐱-𝐯+
𝐯-𝐲≤ 2R.
It follows that 𝒟_𝐯(R)⊆ B_0(2R),
and so we deduce
λ_1(B_0(2R),L) ≤λ_i(𝒟_𝐯(R),L) (i=1,…,n+1).
Further, we have
λ_1(B_0(2R),L) ≥ s(L)/2R.
Combining the inequalities (<ref>), (<ref>)
and (<ref>), we obtain:
|B_𝐯(R)∩ L | < ( 2R/s(L)+1)^n+1.
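As a quick numerical illustration of the Lemma (ours, not part of the proof), one can compare the exact number of points of a small lattice inside a ball with the bound (2R/s(L)+1)^n+1; here L = ℤ^2, so s(L) = 1 and n+1 = 2, and the center and radius are arbitrary toy values.

```python
import itertools
import math

def lattice_points_in_ball(center, R):
    """Integer points of Z^2 inside the closed ball B_center(R)."""
    cx, cy = center
    xs = range(math.floor(cx - R), math.ceil(cx + R) + 1)
    ys = range(math.floor(cy - R), math.ceil(cy + R) + 1)
    return [(x, y) for x, y in itertools.product(xs, ys)
            if (x - cx) ** 2 + (y - cy) ** 2 <= R ** 2]

R, center, s_L = 3.5, (0.3, 0.7), 1.0           # s(Z^2) = 1
count = len(lattice_points_in_ball(center, R))
bound = (2 * R / s_L + 1) ** 2                   # (2R/s(L) + 1)^(n+1) with n+1 = 2
print(count, "<", bound)
```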
§ PROOF OF THEOREM 1.1
Let a be the secret key and k_j, j = 0,…,n, the ephemeral keys. We put A_j = -r_js_j^-1 mod q and B_j = -h(m_j) s_j^-1 mod q for j = 0,…,n.
The signing equation for (EC)DSA provides that,
k_j+A_j a +B_j ≡ 0 (mod q) (j=0,…,n).
Suppose first that k_0 = min{k_0,…,k_n}.
We set δ_M=δ-δ_L. From the hypothesis of the Theorem we get
z_j=k_j-k_0=ε 2^ℓ-δ_M-1+⋯+ε' 2^δ_L,
for some ε, ε'∈{0,1}.
Since z_j>0, we get 0<z_j<2^ℓ-δ_M, and there exists a positive integer z_j^' such that
z_j = 2^δ_Lz^'_j.
Furthermore, we set
C_j = (A_j-A_0)2^-δ_L mod q and
D_j = (B_j-B_0)2^-δ_L mod q.
From (<ref>) we have the congruences:
z_j^'+C_j a +D_j ≡ 0 (mod q) (j=1,…,n).
Since z_j^' is positive, there is a positive integer c_j such that
-C_ja-D_j+c_jq= z_j^'.
Thus, we obtain:
0 < c_jq-C_j a-D_j < 2^ℓ-δ.
It follows that
-2^ℓ-δ-1 < c_jq-C_j a-D_j-2^ℓ-δ-1 < 2^ℓ-δ-1,
whence we get
0 < |c_jq-C_j a-D_j-2^ℓ-δ-1| < 2^ℓ-δ-1.
Therefore, we have:
0 < |c_jq2^δ+1 -C_j2^δ+1 a-D_j2^δ+1-2^ℓ|
< 2^ℓ.
We consider the lattice ℒ spanned by the rows of the matrix
𝒥 = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1q 0 … 0 0; 0 0 2^δ+1q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1q 0; 2^δ+1C_1 2^δ+1C_2 2^δ+1C_3 … 2^δ+1C_n 1 ]).
The vectors of the lattice ℒ are of the form
(2^δ+1(qx_1+x_n+1C_1),2^δ+1(qx_2+x_n+1C_2),…,2^δ+1(qx_n+x_n+1C_n),x_n+1),
for some integers x_1,…,x_n+1. By setting
(x_1,…,x_n+1)=(c_1,…,c_n,-a), we get the lattice vector
𝐮 =
(2^δ+1(c_1q-C_1a),…,2^δ+1(c_nq-C_na),-a).
Further we consider the vector in the span of ℒ,
𝐯 = (D_12^δ+1+2^ℓ,…,2^δ+1D_n+2^ℓ,0).
Now, we have
u- v=(2^δ+1(qc_1-C_1a-D_1)-2^ℓ,…,2^δ+1(qc_n-C_na-D_n)-2^ℓ,-a),
and inequalities (<ref>) yield:
𝐮-𝐯 < 2^ℓ√(n+1).
Put R = 2^ℓ√(n+1). Then 𝐮∈ B_𝐯(R).
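Purely as an illustration of the construction just described (the helper name and all numbers below are ours, not the paper's), the basis matrix of ℒ and the target vector 𝐯 can be assembled from already-computed values C_j, D_j as follows.

```python
def build_basis_and_target(C, D, q, delta, ell):
    """Rows of the basis matrix of L and the target vector v of this section."""
    n = len(C)
    scale = 2 ** (delta + 1)
    basis = []
    for i in range(n):                          # diagonal block: 2^(delta+1) * q
        row = [0] * (n + 1)
        row[i] = scale * q
        basis.append(row)
    basis.append([scale * c for c in C] + [1])  # last row carries the C_j and a 1
    target = [scale * d + 2 ** ell for d in D] + [0]
    return basis, target

# Toy values only, chosen to show the shapes involved (q = 101, n = 2).
J, v = build_basis_and_target(C=[5, 7], D=[3, 11], q=101, delta=2, ell=7)
for row in J:
    print(row)
print("target:", v)
```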
Next, we compute an LLL-reduced
basis for ℒ, say ℬ = {𝐛_1,…,𝐛_n+1}. This can be done in time 𝒪(n^6 (log q)^3). By hypothesis (2) of the Theorem,
we have:
s(ℒ) > (1/2) (2^δ+1 q)^n/(n+1).
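In practice the reduction step can be delegated to an existing library. The sketch below (ours, with toy numbers) uses the fpylll Python package, whose availability is an assumption; in SageMath, which is used for the experiments reported later, the call Matrix(ZZ, rows).LLL() would play the same role.

```python
from fpylll import IntegerMatrix, LLL

rows = [[808,   0, 0],
        [  0, 808, 0],
        [ 40,  56, 1]]                 # toy basis with the shape of the matrix above
A = IntegerMatrix.from_matrix(rows)
LLL.reduction(A)                       # the rows of A are now an LLL-reduced basis
print(A)
```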
Let {𝐛_1^*,…,𝐛_n+1^*} be the Gram-Schmidt orthogonalisation of ℬ.
By
<cit.>, we get:
4 ‖ b_i^*‖^2 ≥ 2 ‖ b_i-1^*‖^2 ≥‖ b_i-1‖^2 ≥ s(L)^2.
Thus, we obtain:
(1/4) (2^δ+1q)^n/(n+1)≤‖𝐛_i^*‖ (i=1,…,n+1).
Next, using Kannan's enumeration algorithm <cit.>, we compute
all the elements of B_𝐯(R)∩ℒ.
Combining <cit.> with the inequality (<ref>),
we obtain that the bit complexity of the procedure is
(nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/(n+1))^n+1,
where c is a constant > 0.
Then we check whether the last coefficient of some
𝐮∈ B_𝐯(R)∩ℒ equals minus the secret
key, that is, -a mod q. Every such check needs 𝒪((log q)^4) bit operations
<cit.>. If none of the elements
𝐮∈ B_𝐯(R)∩ℒ gives the secret key, then
we repeat the procedure assuming that k_1 = min{k_0,…,k_n}, and we continue until
we find the secret key.
By Lemma <ref>, we have:
|B_𝐯(R)∩ℒ | < (
2^ℓ+2√(n+1)/ (2^δ+1q)^n/(n+1) +1)^n+1.
Thus, the overall bit complexity of the computation of a is
𝒪(n(nlog q)^c 2^𝒪(n)(2^ℓ+2/(2^δ+1q)^n/(n+1))^n+1 +n
(
2^ℓ+2√(n+1)/ (2^δ+1q)^n/(n+1) +1)^n+1 (log q)^4),
whence the result.
§ THE ATTACK
The proof of Theorem 1.1 yields the following attack:
ATTACK-DSA
Input: Messages m_j (j=0,…,n) and
(r_j,s_j) their (EC)DSA signatures and integers δ >0 and
0 ≤δ_L≤δ and the public key (p,q,g,R)
(resp. (E,p,q,P,Q)).
Output: The private key a.
* For j=0,…, n compute A_j = -r_js_j^-1 mod q,
B_j = -h(m_j) s_j^-1 mod q.
* For i=0,…,n,
* For j=1,…,i compute
C_i,j = (A_j-1 -A_i) 2^-δ_L mod q, D_i,j = (B_j-1 -B_i) 2^-δ_L mod q,
and for j= i+1,…,n compute
C_i,j = (A_j -A_i) 2^-δ_L mod q,
D_i,j = (B_j -B_i) 2^-δ_L mod q.
* Consider the lattice ℒ_i spanned by the rows of the matrix
J_i = ( [ 2^δ+1q 0 0 … 0 0; 0 2^δ+1 q 0 … 0 0; 0 0 2^δ+1 q … 0 0; ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 … 2^δ+1 q 0; 2^δ+1C_i,1 2^δ+1C_i,2 2^δ+1C_i,3 … 2^δ+1C_i,n 1 ])
and compute a LLL-basis ℬ_i for ℒ_i.
* Consider the vector
𝐯_i = (2^δ+1D_i,1+2^ℓ,…,2^δ+1D_i,n+2^ℓ,0), and
using Kannan's enumeration algorithm
with basis ℬ_i, compute all
𝐮∈ℒ_i satisfying
𝐮-𝐯_i < 2^ℓ√(n+1).
* Check whether the last coordinate of 𝐮, say u_n+1, satisfies g^-u_n+1≡ R (mod q) (resp. P(-u_n+1) = Q).
If so, then return the secret key a = -u_n+1 mod q.
For the Pseudocode of Kannan's Enumeration Algorithm, one
can see <cit.>.
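As a rough, simplified stand-in for step 3 (our own sketch, not the authors' implementation), one can ask fpylll for a single closest lattice vector instead of enumerating every point of ℒ_i within distance 2^ℓ√(n+1) of 𝐯_i, and then test its last coordinate as in step 3; Kannan's enumeration, as used in the attack, returns all such points rather than just one. All numbers are toy values and the availability of fpylll is an assumption.

```python
from fpylll import CVP, IntegerMatrix, LLL

q = 101
rows = [[808,   0, 0],
        [  0, 808, 0],
        [ 40,  56, 1]]                   # toy basis J_i (n = 2, delta = 2)
target = (24 + 128, 88 + 128, 0)         # (2^(delta+1) D_{i,j} + 2^ell, ..., 0)

A = IntegerMatrix.from_matrix(rows)
LLL.reduction(A)                         # reduce first, as in the attack
u = CVP.closest_vector(A, target)        # one lattice vector close to the target
candidate = (-u[-1]) % q                 # would be checked against the public key
print(u, candidate)
```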
Supposing that condition (2) is satisfied, taking n quite small and nδ≥ℓ, Theorem <ref> implies that the attack is polynomial
in ℓ. Furthermore, if s(L) is close to the Gaussian heuristic, then the upper bound for the number of points of B_𝐯(R)∩ℒ will be as small as possible, and so
it is expected that the attack will be quite efficient.
Experiments.
We conducted a thorough analysis of our experiments, and we compared our results with those presented by Gomez et al. <cit.>. Our findings indicate a significant improvement in almost all cases. Our experiments were conducted on a Linux machine with an i5-12400 CPU, using Sagemath 9.8 <cit.>. We made the assumption that we already knew the minimum ephemeral key. However, in the general case, where the minimum key is unknown, we would need to perform n executions, where n+1 represents the number of signatures. This worst-case scenario would require multiplying the execution time of each experiment by n. Overall, our results demonstrate a notable improvement compared to the previous findings (see the Table below). Finally, we have successfully found the secret key even when the shared bits in the ephemeral keys are only 5. Remarkably, in this case, we only needed a minimum of 58 signatures. It is worth noting that in <cit.>, no successful attack was provided for the specific scenario where δ=5.
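For completeness, the following sketch (ours; parameter names and sizes are arbitrary toy choices) shows how ephemeral keys sharing δ_M most significant and δ_L least significant bits can be sampled when setting up such experiments.

```python
import secrets

def shared_bit_nonces(count, ell, delta_M, delta_L):
    """Sample `count` ephemeral keys of at most ell bits that share the top
    delta_M and bottom delta_L bit positions."""
    top = secrets.randbits(delta_M)            # shared MSB block
    bottom = secrets.randbits(delta_L)         # shared LSB block
    keys = []
    for _ in range(count):
        middle = secrets.randbits(ell - delta_M - delta_L)
        keys.append((top << (ell - delta_M)) | (middle << delta_L) | bottom)
    return keys

print([format(k, "016b") for k in shared_bit_nonces(4, ell=16, delta_M=3, delta_L=2)])
```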
§ CONCLUSION
Attacks based on the malicious modification of memory registers
are a topic of high importance, since they
may affect the randomness of the secret parameters by forcing a limited number of bits to a certain value, which can be unknown to the attacker.
In this context, we developed a deterministic attack on DSA schemes,
provided that several signatures are available such that the corresponding
ephemeral keys share a number of bits whose values are unknown.
Our attack is deterministic, meaning that it always produces a result for a given input. Deterministic attacks on (EC)DSA are relatively rare, since most known attacks rely on heuristic assumptions. Although our attack is not always practical to execute, it demonstrates practical feasibility in specific scenarios, surpassing previous results (see Table <ref>). However, it is important to note that the practicality and effectiveness of our attack may vary depending on the specific choice of (δ,n).
Acknowledgement
The author, Marios Adamoudis is co-financed by Greece
and the European Union (European Social Fund-ESF) through the Operational
Programme ”Human Resources Development, Education and Lifelong Learning”
in the context of the Act ”Enhancing Human Resources Research Potential by
undertaking a Doctoral Research” Sub-action 2: IKY Scholarship Programme for
PhD candidates in the Greek Universities.
99
marios M. Adamoudis, K. A. Draziotis and D. Poulakis, Enhancing a DSA attack, CAI 2019, p. 13-25. LNCS 11545, Springer 2019.
Aranha
D. F. Aranha, F. R. Novaes, Akira Takahashi, M. Tibouchi,
and Y. Yarom. LadderLeak: Breaking ECDSA with less than one bit of
nonce leakage. In Jay Ligatti, Xinming Ou, Jonathan Katz, and Giovanni
Vigna, editors, ACM CCS 2020, pages 225-242. ACM Press, November 2020.
Bellare M. Bellare, S. Goldwasser and D. Micciancio,
“Pseudo-random" number generation within cryptographic
algorithms: the DSS case. In Proc. of Crypto '97, LNCS 1294
IACR, Palo Alto, CA. Springer-Verlag, Berlin 1997.
Blake I. F. Blake and T. Garefalakis,
On the security of the digital signature algorithm.
Des. Codes Cryptogr., 26, no. 1-3 (2002), 87-96.
Bleichenbacher
D. Bleichenbacher. On the generation of one-time keys in DL signature
schemes. In Presentation at IEEE P1363 working group meeting, page 81,
2000.
Draziotis K. A. Draziotis and D. Poulakis, Lattice attacks on DSA schemes based on Lagrange's algorithm.
5th international Conference on Algebraic Informatics, CAI 2013. Berlin: Springer. LNCS 8080, 119-131 (2013).
Draziotis2 K. A. Draziotis, (EC)DSA lattice attacks based on Coppersmith's method, Information Processing Letters 116(8), Elsevier (2016), Pages 541-545.
ElGamal T. ElGamal, A public key cryptosystem and a signature scheme
based on discrete logarithm, IEEE
Transactions on Information Theory, 31 (1985), 469-472.
fips FIPS PUB 186-3, Federal Information Processing Standards
Publication, Digital Signature Standard (DSS).
Faugere J. -L. Faugère, C. Goyet, and G. Renault, Attacking
(EC)DSA Given Only an Implicit Hint, Selected Area of Cryptography, LNCS 7707, p. 252–274, Springer-Verlag, Berlin - Heidelberg 2013.
Gomez Ana I. Gomez, D. Gomez-Perez, and G. Renault, A probabilistic analysis on a lattice attack against DSA. Des. Codes Cryptogr. 87, 2469-2488 (2019).
Hanrot G. Hanrot and D. Stehlé, Improved analysis of kannan’s shortest lattice vector algorithm. In
Proceedings of Crypto, LNCS 4622, 170-186. Springer, 2007.
Hanrot2 G. Hanrot, X. Pujol and D. Stehlé,
Algorithms for the shortest and closest lattice vector problems.
Chee, Yeow Meng (ed.) et al., Coding and cryptology. Third international workshop, IWCC 2011, Qingdao, China, May 30 – June 3, 2011. Proceedings. Berlin: Springer. Lecture Notes in Computer Science 6639, 159-190 (2011).
Hoffstein J.
Hoffstein, J. Pipher, H. H. Silverman, An introduction to mathematical cryptography. 2nd ed.
Undergraduate Texts in Mathematics. New York, NY: Springer 2014.
Howgrave N. A. Howgrave-Graham and N. P. Smart, Lattice
Attacks on Digital Signature Schemes, Des. Codes Cryptogr.
23 (2001) 283-290.
Johnson D. Johnson, A. J. Menezes and S. A. Vastone, The
elliptic curve digital signature algorithm (ECDSA), Intern.
J. of Information Security, 1 (2001) 36-63.
Koblitz N. Koblitz, A. J. Menezes and S. A. Vastone,
The state of elliptic curve cryptography, Des. Codes
Cryptogr. 19 (2000), 173-193.
Koblitz2 N. Koblitz and A. J. Menezes, A survey of Public-Key
Cryptosystems,
SIAM REVIEW, 46, No. 4 (2004), 599-634.
Leadbitter P.J. Leadbitter, D. Page, N.P. Smart. Attacking DSA Under a Repeated Bits Assumption. In: Joye, M., Quisquater, JJ. (eds) Cryptographic
Hardware and Embedded Systems - CHES 2004. CHES 2004. Lecture Notes in Computer
Science, vol 3156, (2004) 428-440. Springer, Berlin, Heidelberg.
Lenstra A. K. Lenstra, H. W. Lenstra Jr., and L. Lovász, Factoring
polynomials
with rational coefficients, Math. Ann., 261 (1982), 513-534.
Malikiosis R.-D. Malikiosis,
Lattice-point enumerators of ellipsoids, Combinatorica 33, No. 6 (2013) 733-744.
Menezes A. J. Menezes, P. C. van Oorschot and S. A.
Vanstone, Handbook of Applied Cryptography, CRC Press, Boca
Raton, Florida, 1997.
Micciancio D. Micciancio and P. Voulgaris. A deterministic single
exponential time algorithm
for most lattice problems based on Voronoi cell computations. In Proc. of
STOC, ACM, (2010) pages 351-358.
Mulder1
E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. Using Bleichenbacher s solution
to the Hidden Number Problem to attack nonce leaks in 384-bit ECDSA. In Cryptographic Hardware
and Embedded Systems-CHES 2013, 435-452. Springer, 2013.
Mulder2
E. De Mulder, M. Hutter, M. E. Marson, and P. Pearson. Using Bleichenbacher's solution
to the hidden number problem to attack nonce leaks in 384-bit ecdsa: extended version. Journal of
Cryptographic Engineering, 4(1):33-45, 2014.
National National Institute of Standards and Technology
(NIST). FIPS Publication 186: Digital Signature
Standard. May 1994.
Nguyen P. Nguyen and I. E. Shparlinski, The Insecurity
of the Digital Signature Algorithm with Partially Known Nonces,
J. Cryptology, 15 (2002), 151-176.
Nguyen2 P. Nguyen and I. E. Shparlinski,
The Insecurity of the Elliptic Curve Digital Signature Algorithm
with Partially Known Nonces, Des. Codes Cryptogr. 30,
(2003), 201-217.
Poulakis D. Poulakis, Some Lattice Attacks on DSA and ECDSA, Applicable Algebra in Engineering, Communication and Computing,
22, (2011), 347-358.
Poulakis1 D. Poulakis, New lattice attacks on DSA schemes,
J. Math. Cryptol. 10 (2) (2016), 135–144.
sage Sage Mathematics Software, The Sage Development Team. <http://www.sagemath.org>.
Sun
C. Sun, T. Espitau, M. Tibouchi, and M. Abe, Guessing Bits: Improved
Lattice Attacks on (EC)DSA with Nonce Leakage,
IACR Transactions on Cryptographic Hardware and Embedded Systems,
ISSN 2569-2925, Vol. 2022, No. 1, pp. 391-413.
Zheng Z. Zheng, Modern Cryptography, Volume 1,
Springer 2021.
|
http://arxiv.org/abs/2307.06241v1 | 20230712153120 | New Three and Four-Dimensional Toric and Burst-Error-Correcting Quantum Codes | [
"Cibele Cristina Trinca",
"Reginaldo Palazzo Jr.",
"Ricardo Augusto Watanabe",
"Clarice Dias de Albuquerque",
"José Carmelo Interlando",
"Antônio Aparecido de Andrade"
] | cs.IT | [
"cs.IT",
"math.IT",
"quant-ph"
] |
New Three and Four-Dimensional Toric and Burst-Error-Correcting Quantum Codes
Cibele Cristina TrincaThe author is with the Department of Biotechnology and Bioprocess Engineering, Federal University of Tocantins, Gurupi-TO, Brazil (e-mail: [email protected])., Reginaldo Palazzo Jr.The author is with the School of Electrical and Computer Engineering, State University of Campinas, Brazil (e-mail: [email protected])., Ricardo Augusto WatanabeThe author is with the School of Mathematics, Statistics and Scientific Computing, State University of Campinas, Brazil (e-mail: [email protected]).,
Clarice Dias de AlbuquerqueThe author is with the Science and Technology Center, Federal University of Cariri, Juazeiro do Norte, Brazil (e-mail: [email protected])., J. Carmelo InterlandoThe author is with the Department of Mathematics and Statistics, San Diego State University, San Diego, CA, USA (e-mail: [email protected]). and
Antonio Aparecido de AndradeThe author is with the Department of Mathematics, São Paulo State University, Brazil (e-mail: [email protected]).
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Ongoing research and experiments have enabled quantum memory to realize the storage of qubits. On the other hand, interleaving techniques are used to deal with bursts of errors. Effective interleaving techniques for combating bursts of errors by using classical error-correcting codes have been proposed in several articles found in the literature; however, to the best of our knowledge, little is known regarding interleaving techniques for combating clusters of errors in topological quantum error-correcting codes. Motivated by that, in this work, we present new three and four-dimensional toric quantum codes which are featured by lattice codes and apply a quantum interleaving method to such new three and four-dimensional toric quantum codes. By applying such a method to these new codes we provide new three and four-dimensional quantum burst-error-correcting codes. As a consequence, new three and four-dimensional toric and burst-error-correcting quantum codes are obtained which have better information rates than those three and four-dimensional toric quantum codes from the literature. In addition to improving such information rates, the proposed three and four-dimensional quantum burst-error-correcting codes can be used for burst-error correction when errors are located, when quantum data is stored and when quantum channels have memory.
Mathematics Subject Classification (2020): 81P45, 94A40, 94B15, 94B20.
Index Terms: Toric quantum code, hypercubic lattice, lattice code, quantum burst-error-correction, quantum interleaving.
§ INTRODUCTION
Interactions of a quantum system with the surrounding environment can destroy superposition states, causing loss of information. This process is called decoherence. This problem can be overcome by isolating the system from its surroundings. However, for larger systems, this task is quite complicated. One way to overcome such a difficulty is to use quantum error-correcting codes. These codes work by encoding the quantum states in order to make them resistant to the action of noise, and then decoding them at the moment in which the states are to be recovered.
Constructions of quantum error-correcting codes are strongly based on their classical counterparts; however, there are some fundamental differences between classical and quantum information, e.g., (i) copying a qubit is physically impossible <cit.> and (ii) measurements destroy the quantum state in most cases, which in turn prevent its recovery <cit.>.
Within the class of stabilizer quantum error-correcting codes, Kitaev introduced an alternative language using topology <cit.>. His proposal was to use certain properties of particles confined to a plane to perform topological quantum computation. That terminology comes from the fact that those properties are related to the topology of the physical system. Continuous deformations caused by the environment cannot alter those properties and, as a consequence, we would naturally have quantum computation resistant to errors.
Kitaev started that investigation via toric codes <cit.>. To construct them, qubits are associated to the edges of a square lattice on the torus; hence, the total number of edges is equal to the length of the code. The stabilizer operators are related to the vertices and the faces of the square lattice, and the coded qubits are determined according to the genus of the surface. Lastly, the distance of the code is determined by the homology group of the surface. Toric codes can be generalized to a class known as topological quantum codes by considering other two-dimensional orientable surfaces different from the torus.
In the quantum regime, quantum errors can be independent or correlated in space and time. Hence there are counterparts of quantum random error-correcting codes <cit.> and quantum burst error-correcting codes <cit.>. Analogously to the classical case, quantum channels usually have memory <cit.> or introduce errors which are located <cit.>, that is, quantum burst errors.
The construction and investigation of quantum burst error-correcting codes have received far less attention compared to the development of standard quantum error-correcting codes or entanglement-assisted quantum error-correcting codes <cit.>.
In quantum computing, quantum memory is the quantum-mechanical version of ordinary computer memory. Whereas ordinary memory stores information as binary states (represented by “1"s and “0"s), quantum memory stores a quantum state for later retrieval. These states hold useful computational information known as qubits. Unlike the classical memory of everyday computers, the states stored in quantum memory can be in a quantum superposition, giving much more practical flexibility in quantum algorithms than classical information storage.
Quantum memory is essential for the development of many devices in quantum information processing, including a synchronization tool that can match the various processes in a quantum computer, a quantum gate that maintains the identity of any state, and a mechanism for converting predetermined photons into on-demand photons. Quantum memory can be used in many aspects, such as quantum computing and quantum communication. Continuous research and experiments have enabled quantum memory to realize the storage of qubits <cit.>.
Our contribution in this work is to present new three and four-dimensional toric quantum codes from lattice codes and apply a quantum interleaving method to such three and four-dimensional toric quantum codes. By applying such a method to these new codes we provide new three and four-dimensional quantum burst-error-correcting codes. As a consequence, new three and four-dimensional toric and burst-error-correcting quantum codes are obtained which have better information rates than those three and four-dimensional toric quantum codes from the literature.
This paper is organized as follows. Sections II and III review previous results on lattice theory and toric quantum codes, respectively. Sections IV and V provide new three and four-dimensional toric quantum codes from lattice codes, respectively. In Section VI a quantum interleaving method is applied to such new three and four-dimensional toric quantum codes to obtain new three and four-dimensional quantum burst-error-correcting codes.
§ LATTICE
The background material presented in this section can be found in <cit.>.
Let { a_1, …, a_n} be a basis for the n-dimensional real Euclidean space, ℝ^n, where n is a positive integer. An n-dimensional lattice Λ is the set of all points of the form u_1 a_1 + ⋯ + u_n a_n with u_1,…, u_n being integers. Thus, Λ is a discrete additive subgroup of ℝ^n. This property leads to the study of subgroups (sublattices) and coset decompositions (partitions). An algebraic way to obtain sublattices from lattices is via a scaling matrix A with integer entries. Given a lattice Λ, a sublattice Λ ^'=A Λ can be obtained by transforming each vector λ∈Λ to λ ^'∈Λ ^' according to λ ^' = A λ.
Every building block that fills the entire space with one lattice point in each region is called a fundamental region of the lattice Λ. There are several ways to choose a fundamental region for a lattice Λ, however the volume of the fundamental region is uniquely determined by Λ <cit.>. Let V(Λ) denote the volume of a fundamental region of the n-dimensional lattice Λ. For a sublattice Λ ^' = A Λ, we have that V(A Λ)/V(Λ) = | A| and the set of the cosets of Λ' in Λ defines a lattice code <cit.>.
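The relation V(A Λ)/V(Λ) = | A| can be checked numerically; the small Python sketch below (ours, with an arbitrary 2× 2 toy matrix) counts the points of ℤ^2 inside the half-open fundamental parallelotope spanned by the rows of A and compares the count with | A|.

```python
from fractions import Fraction
from itertools import product

A = [(2, 1), (1, 4)]                                 # toy generator matrix
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # det = 7

def in_fundamental_region(p):
    # Solve (x, y) * A = p over the rationals and test 0 <= x, y < 1.
    x = Fraction(p[0] * A[1][1] - p[1] * A[1][0], det)
    y = Fraction(-p[0] * A[0][1] + p[1] * A[0][0], det)
    return 0 <= x < 1 and 0 <= y < 1

points = [p for p in product(range(-10, 11), repeat=2) if in_fundamental_region(p)]
print(abs(det), len(points))                          # both print 7
```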
§ TORIC QUANTUM CODES
The material presented in this section can be found in <cit.>. A quantum error-correcting code (QEC) is the image of a linear mapping from the 2^k-dimensional Hilbert space H^k to the 2^n-dimensional Hilbert space H^n, where k<n. The codewords are the vectors in the 2^n-dimensional space. The minimum distance d of a quantum error-correcting code C is the minimum distance between any two distinct codewords, that is, the minimum Hamming weight of a nonzero codeword. A quantum error-correcting code C of length n, dimension k and minimum distance d is denoted by [[n,k,d]]. A code with minimum distance d is able to correct up to t errors, where t=⌊(d-1)/2⌋ <cit.>.
A stabilizer code C is the simultaneous eigenspace with eigenvalue 1 comprising all the elements of an Abelian subgroup S of the Pauli group P_n, called the stabilizer group. The elements of the Pauli group on n qubits are given by
P_n={± I, ± iI, ± X, ± iX, ± Y, ± iY, ± Z, ± iZ}^⊗ n, where
I=( [ 1 0; 0 1; ]), X=σ_x=( [ 0 1; 1 0; ]),
Y=σ_y=( [ 0 -i; i 0; ]) and Z=σ_z=( [ 1 0; 0 -1; ]).
Thus, C={|ψ⟩∈ H^n | M|ψ⟩ = |ψ⟩, ∀ M∈ S } <cit.>.
Kitaev's toric codes form a subclass of stabilizer codes and they are defined in a q× q square lattice of the torus (Figure 1). Qubits are in one-to-one correspondence with the edges of the square lattice. The parameters of this class of codes are [[2q^2,2,q]], where the code length n equals the number of edges |E|=2q^2 of the square lattice. The number of encoded qubits is dependent on the genus of the orientable surface. In particular, codes constructed from orientable surfaces gT (connected sum of g tori T) encode k=2g qubits. Thus, codes constructed from the torus, an orientable surface of genus 1, have k=2 encoded qubits. The distance is the minimum between the number of edges contained in the smallest homologically nontrivial cycle of the lattice and the number of edges contained in the smallest homologically nontrivial cycle of the dual lattice. Recall that the square lattice is self-dual and a homologically nontrivial cycle is a path of edges in the lattice which cannot be contracted to a face. Therefore the smallest of these two paths corresponds to the orthogonal axes either of the lattice or of the dual lattice. Consequently, d=q <cit.>.
The stabilizer operators are associated with each vertex and each face of the square lattice (lattice) (Figure <ref>). Given a vertex v∈ V, the vertex operator A_v is defined by the tensor product of σ_x – corresponding to each one of the four edges which have v as a common vertex – and the operator identity acting on the remaining qubits. Analogously, given a face f∈ F, the face operator B_f is defined by the tensor product σ_z – corresponding to each one of the four edges forming the boundary of the face f – and the operator identity acting on the remaining qubits. In particular, from <cit.>,
A_v=⊗_j∈ Eσ_x^δ (j∈ E_v) and B_f=⊗_j∈ Eσ_z^δ (j∈ E_f),
where δ is the Kronecker delta.
The toric code consists of the space fixed by the operators A_v and B_f and it is given as
C={|ψ⟩∈ H^n | A_v|ψ⟩ = |ψ⟩ and B_f|ψ⟩ = |ψ⟩ , ∀ v,f }.
The dimension of C is 4, that is, C encodes k=2 qubits.
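The commutation of vertex and face operators of the form A_v and B_f above can be illustrated with a small numerical example (ours; it uses six qubits and arbitrary edge sets rather than a full toric-code layout): the two operators overlap on exactly two edges, and the two local anticommutations of σ_x and σ_z cancel.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def pauli_string(factors):
    """Tensor product of single-qubit operators."""
    out = np.array([[1.0]])
    for f in factors:
        out = np.kron(out, f)
    return out

A_v = pauli_string([X, X, X, X, I2, I2])   # sigma_x on edges {0, 1, 2, 3}
B_f = pauli_string([I2, I2, Z, Z, Z, Z])   # sigma_z on edges {2, 3, 4, 5}

print(np.allclose(A_v @ B_f, B_f @ A_v))   # True: the operators commute
```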
§ NEW THREE-DIMENSIONAL TORIC QUANTUM CODE FROM A LATTICE CODE
Three-dimensional toric quantum codes are studied in <cit.>. Under the algebraic point of view, such toric quantum codes can be characterized as the group consisting of the cosets of q ℤ^3 in ℤ^3 <cit.>, which in turn is isomorphic to ℤ_q×ℤ_q×ℤ_q = ℤ_q^3, where q is a positive integer. The identifications of the opposite faces of the region delimited by ℤ_q×ℤ_q×ℤ_q result in its identification with the three-dimensional torus denoted by T^3 which has genus g=1; for the sake of simplicity, we call this region lattice or q× q × q = q^3 cubic lattice. The volume associated with the lattice ℤ_q×ℤ_q×ℤ_q is q^3.
Since each face belongs simultaneously to two cubes of the q^3 cubic lattice (lattice), there are
3q^3 faces, that is, n=3q^3 qubits, where n is the length of the code that is given by the number of faces of the q^3 cubic lattice. In the construction of these three-dimensional toric quantum codes <cit.>, such qubits are in a biunivocal correspondence with the faces of the q^3 cubes of the q^3 cubic lattice. Since ℤ^3 is a self-dual lattice, then the code distance which is defined as being the minimum number of faces in the q^3 cubic lattice between two codewords is equal to q <cit.>. Consequently, this class of codes has parameters [[3q^3,3,q]].
For the rest of this section, let q=7. The goal of this section is to construct a three-dimensional toric quantum code by using the fundamental region of a sublattice of the lattice ℤ^3 that has volume 7, since 7 divides the volume q^3=7^3 of the lattice ℤ_7×ℤ_7×ℤ_7.
The authors in <cit.> construct a classical single-error-correcting code in dimension three in the 7^3 cubic lattice which has 49 codewords. Each codeword of this classical code has the Lee sphere of radius 1 in 3 dimensions (heptacube) as being its fundamental region and the cube in the center of the heptacube corresponds geometrically to the codeword. Also in <cit.> it is conjectured that this is the only case for which a close-packing exists in dimension 3.
Since the 7^3 cubic lattice has a finite number of qubits, we use operations modulo 7 to guarantee that the qubits of a given coset (heptacube) remain inside the 7^3 cubic lattice.
The 7^3 cubic lattice consists of seven 7× 7 arrays which are 7× 7 cross-sections, where each square of these arrays means a cube. It is shown in <cit.> that the seven codewords of each 7× 7 array can be generated by the vector (0 1 4), that is, we can put them apart by one unit to the right in the horizontal direction and four units down in the vertical one. Now to rise in the third dimension to change from one array to the one above to continue the construction of the corresponding codewords we use the vector (1 0 2) as the corresponding generator. Consequently, by using these vectors, we can construct the corresponding 49 codewords from the 7^3 cubic lattice and the 7× 7 cross-sections can be labeled by the numbers 0, 1, 2, 3, 4, 5 and 6.
By knowing these generator vectors, we obtain the following result.
The classical single-error-correcting code in the 7^3 cubic lattice can be characterized as a lattice code.
The authors in <cit.> provide for q=7 the 2× 2 matrix (
[ 2 1; 1 4; ]) whose line vectors generate the codewords of each 7× 7 array (cross-section) from the 7^3 cubic lattice. Therefore, since the vector (1 0 2) is another generator of the classical single-error-correcting code from the 7^3 cubic lattice, we complete this matrix with the generator (1 0 2) to obtain the 3× 3 matrix A=(
[ 0 2 1; 0 1 4; 1 0 2; ]) to the dimension 3 that, consequently, generates the sublattice A ℤ^3 from ℤ^3. Therefore, V(A ℤ^3)/V(ℤ^3) = | A| = 7, since | A|=7.
As V(ℤ^3)=1, then V(A ℤ^3)=7, that is, the volume of the fundamental region of the sublattice A ℤ^3 of the lattice ℤ^3 is 7. From there, it is possible to observe that V(7 ℤ^3)/V(ℤ^3)=7^3>7=V(A ℤ^3)/V(ℤ^3) and, thus, V(7 ℤ^3)=7^3>7=V(A ℤ^3).
Hereupon it is possible to show that A ℤ^3⊃ 7 ℤ^3; in fact, let (7α,7β,7γ), where α,β,γ∈ℤ, be an arbitrary element of 7 ℤ^3. Then it is needed to show that there exists v=(x,y,z)∈ℤ^3 such that v· A=(7α,7β,7γ). Thenceforth, by straightforward computations, we obtain the solution x=2α+4β-γ, y=-4α-β+2γ and z=7α. Consequently, there exists v=(x,y,z)∈ℤ^3 such that v· A=(7α,7β,7γ) and, therefore, A ℤ^3⊃ 7 ℤ^3.
Now since V(7 ℤ^3)=7^3>7=V(A ℤ^3) and A ℤ^3⊃ 7 ℤ^3, then we have the following nested lattice chain
ℤ^3⊃ A ℤ^3⊃ 7 ℤ^3.
From <cit.> we obtain |ℤ^3 / 7 ℤ^3| =V(7 ℤ^3)/V(ℤ^3)=7^3 and |ℤ^3 / A ℤ^3|=V(A ℤ^3)/V(ℤ^3)=7, consequently, | A ℤ^3 / 7 ℤ^3|=V(7 ℤ^3)/V(A ℤ^3)=7^3/7=7^2=49. Therefore the lattice quotient A ℤ^3 / 7 ℤ^3 is the group consisting of the 49 cosets of 7 ℤ^3 in A ℤ^3 and the set of these 49 cosets defines a lattice code.
Now as the line vectors of the matrix A which generates the sublattice A ℤ^3 are the generators of the classical code from the 7^3 cubic lattice and the lattice quotient A ℤ^3 / 7 ℤ^3 provides operations modulo 7 over the respective vectors, then there exists a natural group isomorphism between the classical code and the lattice code A ℤ^3 / 7 ℤ^3, consequently, such a classical code is characterized by the lattice code A ℤ^3 / 7 ℤ^3.
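This identification can be checked by brute force; the short sketch below (ours) reduces every integer combination of the rows of A modulo 7 and confirms that exactly 49 distinct cosets, that is, 49 codewords, are obtained.

```python
from itertools import product

A = [(0, 2, 1), (0, 1, 4), (1, 0, 2)]      # generator matrix of the sublattice
q = 7

codewords = set()
for c in product(range(q), repeat=3):
    codewords.add(tuple(sum(c[i] * A[i][k] for i in range(3)) % q
                        for k in range(3)))

print(len(codewords))                       # expected: 49
```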
From this lattice code a new three-dimensional toric quantum code can be featured; in fact, the corresponding three-dimensional toric quantum code which is associated with the fundamental region as being the heptacube is established in the same way as it is in the three-dimensional toric quantum code for q=7 studied in <cit.>. Thenceforward, as in <cit.>, the respective code length is decreased and it is given by the number of faces of the heptacube, that is, n=3q=21, since the heptacube has volume q=7 and each face belongs simultaneously to two cubes of the q^3=7^3=343 cubic lattice (lattice). The code dimension is k=3, since this three-dimensional toric quantum code is constructed on the T^3 torus which has genus g=1. From the fact that the ℤ^3-lattice is self-dual, the code distance is defined as the minimum number of faces in the q^3=7^3=343 cubic lattice between two codewords. Such a distance is called Mannheim distance and it is given by d_M = min {| x | + | y | + | z | | (x,y,z) ∈𝒞}, where 𝒞 indicates the set of the 49 codewords and w_M (x,y,z) = | x | + | y | + | z | is known as the Mannheim weight of (x,y,z).
The minimum Mannheim distance of the corresponding three-dimensional toric quantum code is given by 3.
From <cit.>, the line vectors of the 2× 2 matrix (
[ 2 1; 1 4; ]) generate the codewords of each 7× 7 array (cross-section) from the 7^3 cubic lattice. On the other hand, in <cit.>, Lemma 1, Lemma 2 and Example 1 provide the minimum Mannheim distance of the codewords of each 7× 7 array (cross-section) which is given by 3.
As we have seen previously we use the vector (1 0 2) which is the corresponding generator in the vertical direction to rise in the third dimension to change from one array to the one above to continue the construction of the corresponding codewords.
Since the minimum Mannheim distance of the codewords of each 7× 7 array (cross-section) is given by 3 and the Mannheim weight of (1 0 2) is 3, then the minimum Mannheim distance of the corresponding three-dimensional toric quantum code is given by 3.
From there, the parameters of the new three-dimensional toric quantum code are [[3q=21,3,3]].
Consequently, under the algebraic point of view, such a new three-dimensional toric quantum code can be characterized as the group consisting of the cosets of 7 ℤ^3 in A ℤ^3.
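The minimum Mannheim distance stated above can also be verified numerically; the sketch below (ours) regenerates the 49 codewords, reduces each coordinate to its symmetric residue and reports the minimum Mannheim weight of a nonzero codeword, which is expected to be 3.

```python
from itertools import product

A = [(0, 2, 1), (0, 1, 4), (1, 0, 2)]
q = 7
codewords = {tuple(sum(c[i] * A[i][k] for i in range(3)) % q for k in range(3))
             for c in product(range(q), repeat=3)}

def mannheim_weight(word):
    # Coordinates are in 0..q-1; min(x, q-x) is the absolute symmetric residue.
    return sum(min(x, q - x) for x in word)

print(min(mannheim_weight(w) for w in codewords if any(w)))   # expected: 3
```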
The duality of the lattice in which a toric quantum code is constructed influences the error correction pattern <cit.>, that is, if the corresponding lattice is self-dual, then the quantum channel without memory is symmetric; but if the corresponding lattice is not self-dual, then the quantum channel without memory is not symmetric. Therefore, as the ℤ^3-lattice is self-dual, then the respective quantum channel without memory is symmetric.
§ NEW FOUR-DIMENSIONAL TORIC QUANTUM CODE FROM A LATTICE CODE
Four-dimensional toric quantum codes are studied in <cit.>. Under the algebraic point of view, such toric quantum codes can be characterized as the group consisting of the cosets of q ℤ^4 in ℤ^4 <cit.>, which in turn is isomorphic to ℤ_q×ℤ_q×ℤ_q×ℤ_q = ℤ_q^4 (4-dimensional hypercube), where q is a positive integer. The identifications of the opposite hyperfaces of the region delimited by ℤ_q^4 result in its identification with the four-dimensional torus denoted by T^4 which has genus g=1; for the sake of simplicity, we call this region lattice or q× q × q × q = q^4 hypercubic lattice. The hypervolume associated with the lattice ℤ_q^4 is q^4.
By following the reasoning of the last section, there are 6q^4 hyperfaces in the q^4 hypercubic lattice, that is, n=6q^4 qubits, where n is the length of the code that is given by the number of hyperfaces of the q^4 hypercubic lattice. In the construction of these four-dimensional toric quantum codes <cit.>, such qubits are in a biunivocal correspondence with the hyperfaces of the q^4 hypercubes of the q^4 hypercubic lattice. Since ℤ^4 is a self-dual lattice, then the code distance which is defined as being the minimum number of hyperfaces in the q^4 hypercubic lattice between two codewords is equal to q^2 <cit.>. Consequently, this class of codes has parameters [[6q^4,6,q^2]].
In this section, let q=9. The goal of this section is to construct a four-dimensional toric quantum code by using the fundamental region of a sublattice of the lattice ℤ^4 that has hypervolume 9, since 9 divides the hypervolume q^4=9^4 of the lattice ℤ_9^4.
In <cit.> the authors construct a classical single-error-correcting code in dimension four in the 9^4 hypercubic lattice which has 729=9^3=q^3 codewords. Each codeword of this classical code has the Lee sphere of radius 1 in 4 dimensions as being its fundamental region and the hypercube in the center of such a Lee sphere corresponds geometrically to the codeword. Therefore each codeword can be pictured as a central hypercube with another hypercube affixed to each of its 8 hyperfaces. In <cit.> it is also conjectured that this is the only case for which a close-packing exists in dimension 4.
Since the 9^4 hypercubic lattice has a finite number of qubits, we use operations modulo 9 to guarantee that the qubits of a given coset (Lee sphere of radius 1 in 4 dimensions (fundamental region)) remain inside the 9^4 hypercubic lattice.
The 9^4 hypercubic lattice consists of nine 9× 9× 9=9^3 hypertorus which are 9× 9× 9=9^3 cross-sections. Now each 9^3 hypertorus consists of nine 9× 9 arrays, where each square of these arrays means a cube. It is shown in <cit.> that the 9 codewords of each 9× 9 array can be generated by the vector (0 0 1 6), that is, we can put them apart by one unit to the right in the horizontal direction and six units down in the vertical one. Now to rise in the third dimension to change from one array to the one above to continue the construction of the 81 codewords of the corresponding hypertorus we use the vector (0 1 1 1) as the corresponding generator vector. Hence to generate all the 81 codewords of a cross-section (hypertorus) we must use the generator vectors (0 0 1 6) and (0 1 1 1). Finally to rise in the fourth dimension to change from one cross-section to the other above to continue the construction of the corresponding 729 codewords we use the vector (1 0 0 2) as the corresponding generator vector. Consequently, by using these vectors, we can construct the corresponding 729 codewords from the 9^4 hypercubic lattice and the 9^3 cross-sections can be labeled by the numbers 0, 1, 2, 3, 4, 5, 6, 7 and 8.
By knowing these generator vectors, we obtain the following result.
The classical single-error-correcting code in the 9^4 hypercubic lattice can be characterized as a lattice code.
By following the reasoning of Theorem <ref>, the authors in <cit.> provide for q=9 the 2× 2 matrix (
[ 1 6; -1 3; ]) whose line vectors generate the 9 codewords of each 9× 9 array from the 9^3 cross-sections (hypertorus). Therefore, since the vectors (0 1 1 1) and (1 0 0 2) are other two generator vectors of the classical single-error-correcting code from the 9^4 hypercubic lattice, we complete this matrix with these generators to obtain the 4× 4 matrix A=(
[ 0 0 1 6; 0 0 -1 3; 0 1 1 1; 1 0 0 2; ]) to the dimension 4 that, consequently, generates the sublattice A ℤ^4 from ℤ^4. Therefore, since | A | =9, V(A ℤ^4)/V(ℤ^4) = | A | = 9.
As V(ℤ^4)=1, then V(A ℤ^4)=9, that is, the volume of the fundamental region of the sublattice A ℤ^4 of the lattice ℤ^4 is 9. From there, it is possible to observe that V(9 ℤ^4)/V(ℤ^4)=9^4>9=V(A ℤ^4)/V(ℤ^4) and, thus, V(9 ℤ^4)=9^4>9=V(A ℤ^4).
Hereupon it is possible to show that A ℤ^4⊃ 9 ℤ^4; in fact, let (9α,9β,9γ,9δ), where α,β,γ,δ∈ℤ, be an arbitrary element of 9 ℤ^4. Then it is needed to show that there exists v=(x,y,z,w)∈ℤ^4 such that v· A=(9α,9β,9γ,9δ). Thenceforth, by straightforward computations, we obtain the solution x=-2α-4β+3γ+δ, y=-2α+5β-6γ+δ, z=9β and w=9α. Consequently, there exists v=(x,y,z,w)∈ℤ^4 such that v· A=(9α,9β,9γ,9δ) and, therefore, A ℤ^4⊃ 9 ℤ^4.
Now since V(9 ℤ^4)=9^4>9=V(A ℤ^4) and A ℤ^4⊃ 9 ℤ^4, then we have the following nested lattice chain
ℤ^4⊃ A ℤ^4⊃ 9 ℤ^4.
From <cit.> we obtain |ℤ^4 / 9 ℤ^4| =V(9 ℤ^4)/V(ℤ^4)=9^4 and |ℤ^4 / A ℤ^4|=V(A ℤ^4)/V(ℤ^4)=9, consequently, | A ℤ^4 / 9 ℤ^4|=V(9 ℤ^4)/V(A ℤ^4)=9^4/9=9^3=729. Therefore the lattice quotient A ℤ^4 / 9 ℤ^4 is the group consisting of the 729 cosets of 9 ℤ^4 in A ℤ^4 and the set of these 729 cosets defines a lattice code.
Now as the line vectors of the matrix A which generates the sublattice A ℤ^4 are the generators of the classical code from the 9^4 hypercubic lattice and the lattice quotient A ℤ^4 / 9 ℤ^4 provides operations modulo 9 over the respective vectors, then there exists a natural group isomorphism between the classical code and the lattice code A ℤ^4 / 9 ℤ^4, consequently, such a classical code is characterized by the lattice code A ℤ^4 / 9 ℤ^4.
From this lattice code a new four-dimensional toric quantum code can be featured; in fact, the corresponding four-dimensional toric quantum code which is associated with the fundamental region as being the Lee sphere of radius 1 in 4 dimensions is established in the same way as it is in the four-dimensional toric quantum code for q=9 studied in <cit.>. Thenceforward, as in <cit.>, the respective code length is decreased and it is given by the number of hyperfaces of the corresponding Lee sphere of radius 1, that is, n=6q=54, since such a Lee sphere has hypervolume q=9 and each hyperface belongs simultaneously to two hypercubes of the q^4=9^4=6561 hypercubic lattice (lattice). The code dimension is k=6, since this four-dimensional toric quantum code is constructed on the T^4 torus which has genus g=1. From the fact that the ℤ^4-lattice is self-dual, the code distance is defined as the minimum number of hyperfaces in the q^4=9^4=6561 hypercubic lattice between two codewords. Such a distance is called Mannheim distance and it is given by d_M = min {| x | + | y | + | z | + | w | | (x,y,z,w) ∈𝒞}, where 𝒞 indicates the set of the 729 codewords and w_M (x,y,z,w) = | x | + | y | + | z | + | w | is known as the Mannheim weight of (x,y,z,w).
The minimum Mannheim distance of the corresponding four-dimensional toric quantum code is given by 3.
By following the reasoning of Theorem <ref>, from <cit.>, the line vectors of the 2× 2 matrix (
[ 1 6; -1 3; ]) generate the 9 codewords of each 9× 9 array from the 9^3 cross-sections (hypertorus). On the other hand, in <cit.>, Lemma 1, Lemma 2 and Example 2 provide the minimum Mannheim distance of the codewords of each 9× 9 array which is given by 3.
As we have seen previously we use the vectors (0 1 1 1) and (1 0 0 2) which are the corresponding generator in the vertical directions to rise in the third and four dimensions, respectively, to change from one array to the one above and from one cross-section to the other above, respectively, to continue the construction of the corresponding codewords.
Since the minimum Mannheim distance of the codewords of each 9× 9 array is given by 3 and the Mannheim weight of (0 1 1 1) and (1 0 0 2) is 3, then the minimum Mannheim distance of the corresponding four-dimensional toric quantum code is given by 3.
From there, the parameters of the new four-dimensional toric quantum code are [[6q=54,6,3]].
Consequently, under the algebraic point of view, such a new four-dimensional toric quantum code can be characterized as the group consisting of the cosets of 9 ℤ^4 in A ℤ^4.
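An analogous brute-force check (ours) works in dimension four: reducing all integer combinations of the rows of A modulo 9 is expected to yield exactly 729 codewords with minimum Mannheim weight 3, in agreement with the parameters [[6q=54,6,3]].

```python
from itertools import product

A = [(0, 0, 1, 6), (0, 0, -1, 3), (0, 1, 1, 1), (1, 0, 0, 2)]
q = 9

codewords = {tuple(sum(c[i] * A[i][k] for i in range(4)) % q for k in range(4))
             for c in product(range(q), repeat=4)}
weights = [sum(min(x, q - x) for x in w) for w in codewords if any(w)]

print(len(codewords), min(weights))        # expected: 729 3
```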
As it was noted in Section <ref>, the duality of the lattice in which a toric quantum code is constructed influences the error correction pattern <cit.>, therefore, as the ℤ^4-lattice is self-dual, then the respective quantum channel without memory is symmetric.
§ NEW THREE AND FOUR-DIMENSIONAL QUANTUM BURST-ERROR-CORRECTING CODES
The authors in <cit.> provide a new class of toric quantum codes by constructing a classical cyclic code on the square lattice ℤ_q×ℤ_q for all odd integers q≥ 5 and, consequently, new toric quantum codes are constructed on such square lattices via a combinatorial method, that is, regardless of whether q can be represented as a sum of two squares. Moreover, they propose a quantum interleaving technique by using these constructed two-dimensional toric quantum codes which shows that the code rate and the coding gain of the interleaved two-dimensional toric quantum codes are better than the code rate and the coding gain of Kitaev toric quantum codes for q = 2n + 1, where n≥ 2, and of an infinite class of Bombin and Martin-Delgado toric quantum codes.
The two-dimensional quantum interleaving technique proposed in <cit.> is extended in this section to dimensions three and four by applying it to the three and four-dimensional toric quantum codes previously obtained in Sections <ref> and <ref>, respectively. As a consequence, new three and four-dimensional quantum burst-error-correcting codes are obtained which have better information rates; at the end of this section the appropriate comparisons are discussed and presented in tables. The following theorem provides the parameters of these new three and four-dimensional quantum burst-error-correcting codes.
Consider n_D=3,4 and q=7,9, respectively, where n_D is the dimension of the q^n_D hypercubic lattice. The combination of the three and four-dimensional toric quantum codes provided in Sections <ref> and <ref>, respectively, and the interleaving technique results respectively in three and four-dimensional interleaved toric quantum codes with parameters [[3q^3,3q^2,t_i=q]] (q=7) and [[6q^4,6q^3,t_i=q]] (q=9), respectively, where t_i is the interleaved toric quantum code error correcting capability.
The parameters of the three and four-dimensional toric quantum codes presented in Sections <ref> and <ref>, respectively, are given respectively by [[3q=21,k=3,d_M=3]] (q=7) and [[6q=54,k=6,d_M=3]] (q=9). A toric quantum code with minimum distance d_M is able to correct up to t errors, where t=⌊(d_M-1)/2⌋ <cit.>; therefore, such codes are able to correct up to t=1 error.
We assume that the clusters of errors have the shape of the Lee sphere of radius 1 in n_D dimensions and that the qubits are in a biunivocal correspondence with the faces (hyperfaces) of the q^n_D hypercubic lattice. Figure <ref> shows the model of a storage system under consideration.
Consider the q^n_D hypercubic lattice, where n_D=3,4 and q=7,9, respectively. Such a hypercubic lattice has a total of α q^n_D qubits (hyperfaces), where α=3 for q=7 and n_D=3, and α=6 for q=9 and n_D=4. From now on we explain how to spread such α q^n_D adjacent qubits in the q^n_D hypercubic lattice. Observe that the three and four-dimensional toric quantum codes presented in Sections <ref> and <ref>, respectively, in the respective q^n_D hypercubic lattice consist of q^(n_D-1) codewords whose fundamental region is the Lee sphere of radius 1 in n_D dimensions which has q hypercubes and α q qubits (hyperfaces).
Thereby the first α q^(n_D-1) adjacent qubits which are related to the cross-section j=0 are spread over the hypercubes that correspond to the q^(n_D-1) codewords of the q^n_D hypercubic lattice as follows: first, ordering such adjacency consists of using the order of the codewords of the cross-section j=0 where such an order is featured by the generator vectors that are used in the generation of the codewords of each cross-section j (j=0,1,…,q-1); observe that these generator vectors are the same for generating the corresponding codewords of each cross-section j. Thenceforward, the first q^(n_D-1) qubits (hyperfaces) are placed on the hypercubes that correspond to the q^(n_D-1) codewords of the q^n_D hypercubic lattice; notice that one needs to follow the sequence of the codewords of the respective code constructed over the q^n_D hypercubic lattice by using all the generator lattice vectors and starting with the null lattice vector (null codeword) which belongs to the cross-section j=0. The other α-1 blocks of q^(n_D-1) qubits are placed in the same way on the same hypercubes (related to the q^(n_D-1) codewords of the code).
Each cross-section j (j=0,1,…,q-1) has q^(n_D-2) codewords and each codeword has α q qubits, then each cross-section j has α q^(n_D-1) qubits. Now the other q-1 blocks with α q^(n_D-1) adjacent qubits which correspond to the cross-sections j=1,2,…,q-1, respectively, are spread across the q^n_D hypercubic lattice analogously, that is, suppose that we wish to spread the next block (j=1) with α q^(n_D-1) adjacent qubits (hyperfaces) across the q^n_D hypercubic lattice. As the corresponding fundamental region is the Lee sphere of radius 1 in n_D dimensions which has q hypercubes by including the one related to the codeword, then we start the corresponding arrangement by placing the first qubit of the q^(n_D-1) first qubits in one of the q-1 hypercubes related to the Lee sphere of radius 1 of the null codeword (the hypercube related to the null codeword was used to spread the qubits of the cross-section j=0) and, then, we use all the generator lattice vectors to follow the sequence of the respective codewords to spread the other q^(n_D-1)-1 qubits over the corresponding hypercubes. The other α-1 blocks of q^(n_D-1) qubits are placed in the same way on the same hypercubes.
Consequently, the α q^(n_D-1) qubits of each cross-section j=2,…,q-1 are spread analogously and correspond to a hypercube of the q-2 remaining hypercubes of the Lee sphere of radius 1. Therefore such an interleaving technique shows that the qubits of a certain hypercube of the q^(n_D-1) codewords of the q^n_D hypercubic lattice are the α q^(n_D-1) qubits of the codewords of a cross-section j (j=0,1,…,q-1).
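The structure exploited by the interleaver can be made concrete for q=7 and n_D=3: grouping the 49 codewords by their first coordinate, which in the convention of the three-dimensional construction labels the cross-section they lie in (it is the coordinate advanced by the generator (1 0 2)), gives exactly 7 codewords per cross-section, as the sketch below (ours) confirms.

```python
from collections import defaultdict
from itertools import product

A = [(0, 2, 1), (0, 1, 4), (1, 0, 2)]
q = 7
codewords = {tuple(sum(c[i] * A[i][k] for i in range(3)) % q for k in range(3))
             for c in product(range(q), repeat=3)}

per_cross_section = defaultdict(list)
for w in codewords:
    per_cross_section[w[0]].append(w)       # first coordinate = cross-section label

print({j: len(ws) for j, ws in sorted(per_cross_section.items())})
# expected: {0: 7, 1: 7, 2: 7, 3: 7, 4: 7, 5: 7, 6: 7}
```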
Since the three and four-dimensional toric quantum codes presented in Sections <ref> and <ref>, respectively, are able to correct up to t=1 error, then by using the interleaving method previously described it is possible to correct up to q errors in burst from the total of α q^(n_D) qubits. In fact, assume that a cluster of q errors in the quantum channel has the shape of the corresponding fundamental region which is the Lee sphere of radius 1. Therefore, when the deinterleaving process is applied, each one of these q errors occurs, respectively, in a codeword of a different cross-section j (j=0,1,…,q-1) and then the respective three and four-dimensional toric quantum codes are applied to correct these q errors in burst.
Therefore, in this section, it is shown that the combination of the three and four-dimensional toric quantum codes provided in Sections <ref> and <ref>, respectively, whose parameters are [[3q=21,k=3,d_M=3]] (q=7) and [[6q=54,k=6,d_M=3]] (q=9), respectively, and the presented interleaving method results, respectively, in three and four-dimensional interleaved toric quantum codes whose parameters are given respectively by [[3q^3,3q^2,t_i=q]] (q=7) and [[6q^4,6q^3,t_i=q]] (q=9), where t_i is the interleaved toric quantum code error correcting capability. As a consequence, new three and four-dimensional quantum burst-error correcting codes are obtained by applying this interleaving method to the respective three and four-dimensional toric quantum codes.
The code rate <cit.> and the coding gain <cit.> are given by R=k/n and G=(k/n)(t+1), respectively, where t is the toric quantum code error correcting capability.
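Using these formulas, the entries of the tables can be reproduced with a few lines of Python (ours); expressing the coding gain in dB as 10 log_10 G is our assumption about how the tables report G.

```python
import math

def rate_and_gain(n, k, t):
    R = k / n
    G = R * (t + 1)
    return R, G, 10 * math.log10(G)          # gain also expressed in dB (assumed)

codes = {
    "[[21, 3, 3]] toric code, t = 1":        (21, 3, 1),
    "interleaved [[3*7^3, 3*7^2]], t_i = 7": (3 * 7 ** 3, 3 * 7 ** 2, 7),
    "[[54, 6, 3]] toric code, t = 1":        (54, 6, 1),
    "interleaved [[6*9^4, 6*9^3]], t_i = 9": (6 * 9 ** 4, 6 * 9 ** 3, 9),
}
for name, (n, k, t) in codes.items():
    R, G, G_dB = rate_and_gain(n, k, t)
    print(f"{name}: R = {R:.4f}, G = {G:.4f} ({G_dB:.2f} dB)")
```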
Table <ref> shows that the code rate and the coding gain (in dB) of the three and four-dimensional toric quantum codes provided in Sections <ref> and <ref>, respectively, are better than the code rate and the coding gain of the three and four-dimensional toric quantum codes from the literature whose parameters are [[3q^3,3,t=3]] (q=7) <cit.> and [[6q^4,6,t=40]] (q=9) <cit.>, respectively.
Table <ref> shows the equivalent interleaved three and four-dimensional toric quantum codes (three and four-dimensional quantum burst-error-correcting codes) and their corresponding interleaving code rates R_i and coding gains G_i (in dB) when the cluster of errors has the shape of the corresponding Lee sphere of radius 1 over the q^n_D hypercubic lattice. Notice that the coding gain of the three and four-dimensional quantum burst-error correcting codes is better than the coding gain of the three and four-dimensional toric quantum codes provided in Sections <ref> and <ref>, respectively, and the code rate of the three and four-dimensional quantum burst-error correcting codes is equal to the code rate of the three and four-dimensional toric quantum codes provided in Sections <ref> and <ref>, respectively.
Moreover, the code rate and the coding gain (in dB) of the three and four-dimensional quantum burst-error correcting codes are better than the code rate and the coding gain of the three and four-dimensional toric quantum codes from the literature whose parameters are [[3q^3,3,t=3]] (q=7) <cit.> and [[6q^4,6,t=40]] (q=9) <cit.>, respectively.
§ CONCLUSION
References <cit.> provide effective interleaving techniques for combating bursts of errors by using classical codes. However, to the best of our knowledge, little is known regarding interleaving techniques for combating clusters of errors in toric quantum codes. While in classical interleaving <cit.> each face represents a bit and, therefore, the bits sequentially allocated to each face are shifted according to the generator of the group, and so on, in the quantum interleaving used in this work this viewpoint is applied consistently to the qubits which are allocated in each fundamental region. In addition, in the classical interleaving techniques <cit.> several classical error-correcting codes are needed to correct the corresponding errors in burst; in this work, we use a quantum interleaving method which provides a single quantum burst-error-correcting code in dimensions three and four that has better information rates than those three and four-dimensional toric quantum codes from the literature.
§ ACKNOWLEDGMENTS
The authors would like to thank the financial Brazilian agency CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico - Brazil) for the funding support and under grant no. 101862/2022-9 (chamada CNPq 25/2021, PDS 2021).
99
Kitaev1
A. Yu. Kitaev, “Quantum error correction with imperfect gates,” Quantum Communication, Computing and Measurement, edited by Hirota et al., Plenum Press, New York, pp. 181–188, 1997.
Kitaev
A. Yu. Kitaev, “Fault-tolerant quantum computation by anyons,” Annals of Physics, 303(2), 2003.
Wilde
M. M. Wilde, M. H. Hsieh and Z. Babar “Entanglement-assisted quantum turbo codes,” IEEE Transactions on Information Theory, vol. 60, no. 2, pp. 1203–1222, 2014.
Kawabata
S. Kawabata “Quantum interleaver: quantum error correction for burst error,” J. Phys. Soc. Jpn., vol. 69, no. 11, pp. 3540–3543, 2000.
Werner
D. Kretschmann and R. F. Werner “Quantum channels with memory,” Phys. Rev. A, vol. 72, pp. 062323, 2005.
Caruso
F. Caruso, V. Giovannetti, C. Lupo and S. Mancini “Quantum channels and memory effects,” Rev. Mod. Phys., vol. 86, no. 4, pp. 1203, 2014.
Lvovsky
A. I. Lvovsky, B. C. Sanders and W. Tittel “Optical quantum memory,” Nature Photonics, vol. 3, no. 12, pp. 706–714, 2009.
Conway J. H. Conway, N. J. A. Sloane, Sphere Packings, Lattices, and Groups, 3rd edition, Springer-Verlag, New York, 1999.
ijam1
C. C. Trinca Watanabe, J.-C. Belfiore, E. D. de Carvalho, J. Vieira Filho, R. Palazzo Jr. and R. A. Watanabe, “Construction of complex nested ideal lattices for complex-valued channel quantization,” Internat. J. Appl. Math., vol. 31, no. 4, pp. 549–585, 2018; DOI: 10.12732/ijam.v31i4.4
ijam2
C. C. Trinca Watanabe, J.-C. Belfiore, E. D. de Carvalho, J. Vieira Filho and R. A. Watanabe, “Construction of nested real ideal lattices for interference channel coding,” Internat. J. Appl. Math., vol. 32, no. 2, pp. 295–323, 2019; DOI: 10.12732/ijam.v32i2.11
ijam3
C. C. Trinca Watanabe, J.-C. Belfiore, E. D. de Carvalho and J. Vieira Filho, “E_8-Lattice via the cyclotomic field ℚ(ξ_24),” Internat. J. Appl. Math. vol. 31, no. 1, pp. 63–71, 2018; DOI: 10.12732/ijam.v31i1.6
ClariceArtigo
C. D. de Albuquerque, R. Palazzo Jr. and E. B. Silva “Construction of new toric quantum codes,” Contemp. Math., vol. 518, pp. 1–10, 2010.
ClariceQIP
C. D. de Albuquerque, G. G. La Guardia, R. Palazzo Jr., C. R. O. Q. Queiroz and V. L. Vieira “Euclidean and hyperbolic asymmetric topological quantum codes,” Quantum Information Processing, vol. 21, 153, 2022.
CibeleQIP
C. C. Trinca, J. Carmelo Interlando, R. Palazzo Jr., A. A. de Andrade and R. A. Watanabe “On the construction of new toric quantum codes and quantum burst-error-correcting codes,” Quantum Information Processing, vol. 22, no. 5, 2023.
forney
G. D. Forney, “Coset Codes - Part I: Introduction and Geometrical Classification,” IEEE Transactions on Information Theory, vol. 34, no. 5, pp. 1123–1151, 1988.
Shor
P. W. Shor, “Fault-tolerant quantum computation,” Proceedings of the 37th Annual Symposium on Foundations of Computer Science, vol. 56, 1996.
Gott
D. Gottesman, “Class of quantum error-correcting codes saturating the quantum Hamming bound,” Phys. Rev. A, vol. 54, 1996.
DennisKitaev
E. Dennis, A. Yu. Kitaev, A. Landahl and J. Preskill, “Topological quantum memory,” J. Mathematical Physics, vol. 43, 4452, 2002.
Temperature
C. Castelnovo and C. Chamon, “Topological order in a three-dimensional toric code at finite temperature,” Phys. Rev. B, vol. 78, pp. 155120, 2008.
Edson
E. D. de Carvalho, W. S. Soares Jr. and E. B. da Silva, “Topological Quantum Codes from Lattice Partitions on the n-Dimensional Flat Tori,” Entropy, vol. 23, pp. 959, 2021.
perfcodes
S. W. Golomb and L. R. Welch. “Perfect codes in the lee metric and the packing of polyominoes,” SIAM Journal on Applied Mathematics, vol. 18, pp. 302–317, 1970.
ijam
C. C. Trinca and R. Palazzo Jr. “On the construction of perfect codes for n-dimensional interleaving,” International Journal of Applied Mathematics, vol. 34, no. 3, pp. 485–506, 2021, 10.12732/ijam.v34i3.5
BombinDelgado
H. Bombin and M. A. Martin-Delgado, “Homological error correction: classical and quantum codes,” J. Math. Phys., vol. 48, 2007.
LivroBombin
H. Bombin “Topological codes. Quantum Error Correction,” Cambridge University Press, vol. 1, pp. 455–481, 2013.
4D
N. P. Breuckmann, K. Duivenvoorden, D. Michels and B. M. Terhal. “Local decoders for the 2D and 4D toric code,” Quantum Inf. Comput., vol. 17, pp. 181–208, 2017.
rates
G. C. Clark Jr. and J. B. Cain “Error-Correction Coding for Digital Communications,” New York: Plenum Press, 1981.
celso
C. de Almeida and R. Palazzo Jr. “Efficient two-dimensional interleaving technique by the use of the set partitioning concept,” Electronics Letters, vol. 32, no. 6, pp. 538–540, 1996.
|
http://arxiv.org/abs/2307.04296v1 | 20230710012648 | K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment | [
"Jinbao Wang",
"Guoyang Xie",
"Yawen Huang",
"Jiayi Lyu",
"Feng Zheng",
"Yefeng Zheng",
"Yaochu Jin"
] | eess.IV | [
"eess.IV",
"cs.CV"
] |
K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment
Jinbao Wang^1, Member, IEEE,
Guoyang Xie^1,
Yawen Huang^1,
Jiayi Lyu,
Feng Zheng, Member, IEEE,
Yefeng Zheng, Fellow, IEEE, and
Yaochu Jin, Fellow, IEEE
Jinbao Wang, Jiaqi Liu and Feng Zheng are with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China (e-mail: [email protected]; [email protected]; [email protected])
Guoyang Xie is with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China and is also with the Department of Computer Science, University of Surrey, Guildford GU2 7YX, United Kingdom (e-mail: [email protected])
Yawen Huang and Yefeng Zheng are with Tencent Jarvis Lab, Shenzhen 518040, China (e-mail: [email protected]; [email protected]).
Jiayi Lyu is with the School of Engineering Science, University of Chinese Academy of Sciences, Beijing, China (e-mail: [email protected])
Yaochu Jin is with the Faculty of Technology, Bielefeld University, 33619 Bielefeld, Germany and also with the Department of Computer Science and Engineering, University of Surrey, Guildford GU2 7YX, United Kingdom (e-mail: [email protected])
^1Contributed Equally.
August 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The problem of how to assess cross-modality medical image synthesis has been largely unexplored. The most used measures like PSNR and SSIM focus on analyzing the structural features but neglect the crucial lesion location and fundamental k-space speciality of medical images. To overcome this problem, we propose a new metric K-CROSS to spur progress on this challenging problem. Specifically, K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location, together with a tumor encoder for representing features, such as texture details and brightness intensities. To further reflect the frequency-specific information from the magnetic resonance imaging principles, both k-space features and vision features are obtained and employed in our comprehensive encoders with a frequency reconstruction penalty. The structure-shared encoders are designed and constrained with a similarity loss to capture the intrinsic common structural information for both modalities. As a consequence, the features learned from lesion regions, k-space, and anatomical structures are all captured, which serve as our quality evaluators. We evaluate the performance by constructing a large-scale cross-modality neuroimaging perceptual similarity (NIRPS) dataset with 6,000 radiologist judgments. Extensive experiments demonstrate that the proposed method outperforms other metrics, especially in comparison with the radiologists on NIRPS.
Medical image, quality assessment, synthesized neuroimages, k-space
§ INTRODUCTION
PSNR <cit.>, SSIM <cit.>, and MAE <cit.> are the most commonly used evaluation metrics in cross-modality magnetic resonance imaging (MRI) synthesis works. However, these metrics are inappropriate to a certain degree, considering that they are based on natural images and naturally ignore the inherent properties of MRI data. In general, the quality of neuroimage can be assessed by the content (i.e., lesion region), frequency space, and structure details. Although MAE, PSNR and SSIM are effective in assessing image quality, they are ineffective as a neuroimage metric, because they only focus on the structural details in the pixel space. Therefore, it is important to find a new way to measure how good the cross-modality neuroimage synthesis is.
Empirically, the content details of neuroimages, particularly the texture and brightness, are disregarded by both PSNR and SSIM. Instead, radiologists pay more attention to the lesion regions, given their usefulness for analyzing pathology and human cognitive function. The purpose of K-CROSS is to fully reflect the lesion region by introducing a cross-modality neuroimage segmentation network which has already been trained to precisely forecast the tumor location. The prediction mask (i.e., tumor region) is fed into the proposed tumor encoder to extract features. The proposed tumor loss function improves the extracted features so as to capture more essential texture details and brightness information. In Fig. <ref> (A), we can observe that the content of the synthesized neuroimage does not align with the target modality neuroimage. Though PSNR and SSIM scores are the highest for the synthesized ones, they only evaluate the structure details without taking the content into account. By contrast, K-CROSS is reliable in exploring neuroimaging perceptual similarity (NIRPS) for the synthesized results.
Besides, PSNR and SSIM are unable to account for differences in the k-space between the synthesized images and the target modality data, whereas K-CROSS can. The fundamental difference between MRI and natural images, as seen from the standpoint of imaging principles, is the basis of MR image reconstruction. The Fourier transformation, often known as the "k-space" in MRI, is a mathematical concept that calculates various frequencies mixed into the received signal of all spins. It forms the basis for all image reconstruction in MRI. Therefore, we believe that the proposed metric can estimate the distance between MRIs in both k-space and pixel space. In k-space, the sophisticated K-CROSS encoder can capture the invariant modal-specific feature, where the frequency loss can be used to further enhance the complicated encoder. When a k-space shift occurs, as seen at the bottom of Fig. <ref> (B), K-CROSS is more stable in accordance with the radiologist’s score, which is able to measure the gap in k-space between the synthesized neuroimage and the corresponding ground truth.
To constrain the structural features that are extracted by the shared structure encoder from both the source modality and the target modality, we set up a cross-modality similarity loss function, as the entire structure information between the source and the target modality neuroimaging data is very similar. PSNR and SSIM, on the other hand, only assess the input image, which limits their capacity to recognize the structural details that the source modality and the target modality share.
Our contributions can be summarized as follows:
* We propose a new metric, called K-CROSS, to evaluate the quality of the synthetic data based on all the structural information, k-space feature shift, and lesion area. This multidimensional quantification indication enables K-CROSS to achieve more precise results than other metrics that only consider natural images.
* To properly verify the effectiveness of our K-CROSS, we construct a large-scale and multi-modal neuroimaging perceptual similarity (NIRPS) dataset, which includes 6,000 assessments from radiologists.
* K-CROSS achieves highly competitive results based on the judgments from radiologists on NIRPS, which can be treated as a general evaluation metric for various purposes of medical image synthesis.
The rest of this paper is organized as follows: Section <ref> presents a literature review on image quality assessment and GAN-based assessment methods. Section <ref> explains the proposed algorithm K-CROSS in detail. In addition, a large-scale multi-modal neuroimaging perceptual similarity (NIRPS) dataset is constructed in Section <ref>. Section <ref> presents comprehensive experimental evaluations while Section V draws the conclusion and limitation of the current work.
§ RELATED WORK
§.§ Image Quality Assessment
Image quality assessment (IQA) can be divided into two categories: fully referenced IQA and non-referenced IQA <cit.>. Fully referenced IQA refers to estimating the quality of natural images with respect to references; SSIM <cit.>, MS-SSIM <cit.> and FSIM <cit.> focus more on image structure specifics. Specifically, FSIM builds a novel feature similarity index based on phase congruence and image gradient magnitude, while PSNR focuses on edge estimation for the synthesized images. Most of them <cit.> use low-level features for evaluation. With the popularity of deep learning, LPIPS <cit.> is the first work that uses high-level features for fully referenced IQA. Estimating the synthesized image quality without a reference (ground truth) is known as non-referenced IQA <cit.>, for which RankIQA <cit.> is the mainstream approach. Considering the limited size of IQA datasets, Liu et al. propose a Siamese network to rank images against their distorted versions; the Siamese network's knowledge (ranking result) can then be transferred to a conventional neural network, whose function is to assess the quality of a single image. Since K-CROSS requires a reference image for evaluation, it belongs to fully referenced IQA. However, few public datasets in the medical imaging community can be used to train learning-based fully referenced IQA methods. We therefore construct the NIRPS dataset, the first extensive neuroimaging perceptual similarity dataset with radiologists' labels, which allows K-CROSS to be trained in a supervised manner as a fully referenced IQA method.
§.§ GAN Assessment
Several sample-based methods <cit.> have been proposed to assess GAN performance, such as Kernel MMD <cit.>, Inception Score <cit.>, Mode Score <cit.> and FID <cit.>. The classical approach is to compare the log-likelihood of generative models, but this approach cannot accurately indicate the quality of the synthesized images: a model can achieve high likelihood but low image quality, and vice versa. The Inception Score <cit.> computes the KL divergence between the conditional class distribution and the marginal class distribution over the generated data. However, IS does not capture intra-class diversity and is insensitive to the prior distribution over labels. Among these metrics, the most popular one is FID. Heusel et al. <cit.> use InceptionV3 <cit.> to extract features from the real and synthetic neuroimaging data, and then compute the differences between those features. However, the majority of these metrics operate in the pixel space and ignore the lesion region and k-space, which are fundamental elements of MR image properties. In this regard, K-CROSS considers the underlying MR imaging principle as well as the difference between the neuroimages of the source and target modality.
§ PROPOSED METHOD
§.§ Preliminary
§.§.§ K-Space Representations
The spatial frequencies of an MR picture are represented in k-space by a matrix of numbers. Despite MR images and k-space having the identical dimension, in practice, each point (k_x, k_y) in k-space represents the spatial frequency and phase information about each pixel in the MR image rather than corresponding to a specific pixel value. By contrast, every pixel in the MR image maps to a point in k-space. As a result, we transform MRI into k-space using the 2D discrete Fourier transform:
F (u, v) = ∑_x=0^M-1∑_y=0^N-1 f(x, y) e^-i2π (ux/M + vy/N),
where the MR image size is M × N, (x,y) is the MRI's pixel coordinate, (u,v) is its spatial coordinate in k-space, F(u,v) is its complex frequency value, and e and i stand for the Euler's number and the imaginary unit, respectively. We concentrate on the real and imaginary components of F(u,v). According to (<ref>), we rewrite F(u, v) as follows:
F(u, v) = R(u, v) + I(u, v)i = a + bi,
where R(u, v) = a and I(u, v) = b are the real and imaginary parts of F(u, v), respectively. Furthermore, we introduce two key k-space concepts. First, the amplitude is defined as:
| F(u, v) | = √(R(u,v)^2 + I(u,v)^2) = √(a^2 + b^2).
The amplitude measures how strongly each 2D spatial-frequency wave contributes to the MR image. We typically visualize k-space using the amplitude.
∠ F(u,v) = arctan ( I(u,v)/R(u,v) ) = arctan ( b/a).
The second concept is the phase, defined in (<ref>): the peak-shift distance between two 2D sinusoidal waves of the same frequency.
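For concreteness, the k-space quantities above can be computed in a few lines of NumPy. The snippet below is a minimal sketch in which the 256×256 input is a random placeholder rather than an actual MR slice.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((256, 256))          # stand-in for an M x N MR image f(x, y)

k_space = np.fft.fft2(image)            # F(u, v) = R(u, v) + i I(u, v)
amplitude = np.abs(k_space)             # |F(u, v)| = sqrt(R^2 + I^2)
phase = np.angle(k_space)               # phase of F(u, v) = arctan(I / R)

# k-space is usually displayed through the log-amplitude with the zero
# frequency shifted to the center of the array.
amplitude_view = np.log1p(np.fft.fftshift(amplitude))
print(amplitude_view.shape, float(phase.min()), float(phase.max()))
```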
§.§.§ Complex Convolution
The complex-valued convolution <cit.> is different from the real-valued convolution. Given a complex-valued convolution filter W = A + iB, with real-valued matrices A and B, and a complex-valued input h = x + iy, with real-valued matrices x and y, the operation is expressed as follows:
W∗h = (A∗x - B∗y) + i(B∗x + A∗y).
The visualization can be found in Fig. <ref>.
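The identity above can be realized with two ordinary real-valued convolutions. The PyTorch sketch below is illustrative only: the module name, channel sizes and kernel size are our own choices, not details taken from the K-CROSS implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution W*h = (A*x - B*y) + i(B*x + A*y), realized
    with two real-valued convolutions representing A and B."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_a = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)  # A
        self.conv_b = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)  # B

    def forward(self, x, y):              # x, y: real and imaginary parts of h = x + iy
        real = self.conv_a(x) - self.conv_b(y)
        imag = self.conv_b(x) + self.conv_a(y)
        return real, imag

x = torch.randn(1, 2, 64, 64)             # real part of a toy feature map
y = torch.randn(1, 2, 64, 64)             # imaginary part
real, imag = ComplexConv2d(2, 4)(x, y)
print(real.shape, imag.shape)             # both torch.Size([1, 4, 64, 64])
```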
§.§.§ Complex Leaky RELU
It applies separate Leaky RELUs <cit.> to the real part R(z) and the imaginary part Im(z) of a complex-valued input z, and is defined as:
ℂLeakyRELU = LRELU(R(z)) + i ∗ LRELU(Im(z)),
§.§.§ Complex RELU
It applies separate RELUs <cit.> to the real part R(z) and the imaginary part Im(z) of a complex-valued input z, and is defined as:
ℂRELU = RELU(R(z)) + i ∗ RELU(Im(z)).
The visualization is given in Fig. <ref>.
§.§.§ Complex Tanh
Complex Tanh applies a separate tanh activation <cit.> to the real part R(z) and the imaginary part Im(z) of a complex-valued input z, and is defined as:
ℂTanh = Tanh(R(z)) + i ∗ Tanh(Im(z)).
§.§.§ Complex BatchNorm
As described in Cogswell et al. <cit.>, complex-valued batch normalization can be applied separately to the real and imaginary parts, which reduces the risk of over-fitting. The detailed operation is defined as:
ℂBN = BN(R(z)) + i ∗ BN(Im(z)).
§.§.§ Complex Upsample
The complex-valued upsample operation is applied separately to the real part and the imaginary part, and is defined as:
ℂUpsample = Upsample(R(z)) + i ∗ Upsample(Im(z)).
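All of the complex-valued building blocks above follow the same split-apply pattern: run a real-valued operator independently on the real and imaginary channels. Below is a hedged PyTorch sketch of that pattern; the specific operators, channel counts and upsampling factor are illustrative assumptions, not the exact modules used in K-CROSS.

```python
import torch
import torch.nn as nn

class SplitApply(nn.Module):
    """Apply a real-valued operator independently to the real and imaginary
    parts of a complex feature map (the pattern behind C-LeakyReLU, C-ReLU,
    C-Tanh, C-BN and C-Upsample)."""
    def __init__(self, op_real, op_imag):
        super().__init__()
        self.op_real, self.op_imag = op_real, op_imag

    def forward(self, real, imag):
        return self.op_real(real), self.op_imag(imag)

c_relu = SplitApply(nn.ReLU(), nn.ReLU())
c_bn = SplitApply(nn.BatchNorm2d(4), nn.BatchNorm2d(4))      # separate running stats
c_up = SplitApply(nn.Upsample(scale_factor=2), nn.Upsample(scale_factor=2))

real = torch.randn(2, 4, 32, 32)
imag = torch.randn(2, 4, 32, 32)
for block in (c_relu, c_bn, c_up):
    real, imag = block(real, imag)
print(real.shape, imag.shape)             # torch.Size([2, 4, 64, 64]) after upsampling
```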
§.§ Architecture
§.§.§ Complex Branch
A more refined U-Net architecture implemented in k-space makes up the proposed complex encoder. Specifically, each downsampling block in the encoding stage includes complex convolution ℂBN and ℂLeakyRELU. The complex convolution is replaced with the complex transposed convolution for up-sampling during the decoding phase. There is a complex transposed convolution, ℂBN, and ℂRELU in each upsampling block. We apply ℂUpsample, complex convolution and ℂTanh to reconstruct the images in the final layer of the decoding stage. Fig. <ref> shows the complex branch architecture in detail.
§.§.§ Tumor and Structure Branch
As a cross-modality segmentation neural network, the well-trained nnU-Net <cit.> is used in K-CROSS, with the weights being adjusted in the second stage of training. The modality-specific tumor encoder and decoder are private because the tumor information (the texture details and brightness) from the source modality and the target modality differ. With the exception of the operators using the normal convolution, batch norm, and Leaky RELU, the architecture details of the tumor encoder-decoder and the structure encoder-decoder are similar to those of Fig. <ref>.
§.§.§ Score Network and Quality Prediction Regressor
We construct a two-layer MLP for quality prediction, considering that the regressor simply maps the output vectors of the triple-path decoder to labeled quality scores. The network is made up of two fully connected layers with 512-256 and 256-1 channels. The complex score network is composed of two complex fully interconnected layers. Its structure is similar to that shown in Fig. <ref>. However, the operator of the complex score network substitutes MLP layers for ℂRELU. The natural score network has two fully connected layers as well. The channels of the complex score network and the natural score network are 512-256 and 256-1, respectively. The regressor is trained by using the L_1 loss function.
§.§ Loss Function
§.§.§ Frequency Loss
Directly measuring the distance between two complex vectors is very difficult. Alternatively, recent works are more concerned with the image amplitude. However, as shown in Fig. <ref>, we find that without the phase information it is impossible to reconstruct the entire neuroimage.
Our solution is based on the focal frequency loss <cit.>, as shown in Fig. <ref>.
The hidden k-space of the real MRI is F_r(u, v) = a_r + b_r i, and the corresponding k-space of the synthesized MRI is F_f(u, v) = a_f + b_f i. To calculate their distance, we map F_r and F_f into Euclidean space as v_r and v_f. Specifically, the lengths of v_r and v_f are the amplitudes of F_r and F_f, respectively, and the angles θ_r and θ_f correspond to the phases of F_r and F_f, respectively. As a result, the distance between F_r and F_f can be converted to the distance between v_r and v_f (termed d(v_r, v_f)), which is defined as follows:
d(F_r, F_f) = d(v_r, v_f) = || v_r - v_f ||^2.
The complex feature maps are extracted from each layer l of the encoder in the complex U-Net. Each pixel of the complex feature maps for each layer is denoted as m^l∈ℝ^H_l× W_l× C_l. Finally, we compute spatial and channel averages. As a result, the frequency loss for a complex U-Net is defined as follows:
ℒ_freq(m^l_r, m^l_f) = ∑_l 1/H_l W_l ∑_h, w || v_r - v_f ||^2.
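Since the distance above is just the squared Euclidean distance between complex numbers viewed as 2-D vectors, the frequency loss reduces to averaging squared real and imaginary differences over each complex feature map and summing over layers. The PyTorch sketch below is a minimal illustration; the layer shapes and the random placeholder features are assumptions.

```python
import torch

def toy_complex(*shape):
    return torch.complex(torch.randn(*shape), torch.randn(*shape))

def frequency_loss(feats_real, feats_fake):
    """Sum over layers of the spatially and channel-averaged squared Euclidean
    distance between real and synthesized complex k-space features."""
    loss = 0.0
    for m_r, m_f in zip(feats_real, feats_fake):
        diff = m_r - m_f                               # complex difference per pixel
        loss = loss + (diff.real ** 2 + diff.imag ** 2).mean()
    return loss

# toy complex feature maps from two layers of the complex encoder
feats_real = [toy_complex(8, 16, 16), toy_complex(16, 8, 8)]
feats_fake = [f + 0.1 * toy_complex(*f.shape) for f in feats_real]
print(frequency_loss(feats_real, feats_fake))
```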
§.§.§ Similarity Loss
For the similarity loss ℒ_simi, K-CROSS uses the maximum mean discrepancy (MMD) loss <cit.> to measure it. That is, K-CROSS computes the squared population MMD between the shared structure encodings of the source modality h^s_c and the target modality h^s_t using a biased statistic. We express this as:
ℒ_simi = 1/(N^s)^2∑_i,j=0^N^sκ(h^s_c_i, h^s_c_j)
- 2/N^sN^t∑_i,j=0^N^s, N^tκ(h^s_c_i, h^s_t_j)
+ 1/(N^t)^2∑_i,j=0^N^tκ(h^s_t_i, h^s_t_j),
where κ is a linear combination of multiple RBF kernels: κ(x_i, x_j) = ∑_nη_nexp{ - 1/2σ_n || x_i - x_j ||^2}, where σ_n is the standard deviation and η_n is the weight of the n-th RBF kernel. The similarity loss function encourages the shared structure encoder to learn the invariant structure feature irrespective of the modality.
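A compact way to compute this biased multi-kernel MMD is sketched below in PyTorch; the batch sizes, feature dimension, kernel bandwidths and weights are illustrative assumptions rather than the values used in the paper.

```python
import torch

def rbf_mixture(x, y, sigmas=(1.0, 2.0, 4.0), weights=(1.0, 1.0, 1.0)):
    """kappa(x_i, y_j) = sum_n eta_n exp(-||x_i - y_j||^2 / (2 sigma_n)),
    evaluated for every pair of rows of x and y."""
    d2 = torch.cdist(x, y) ** 2
    return sum(w * torch.exp(-d2 / (2.0 * s)) for s, w in zip(sigmas, weights))

def mmd_similarity_loss(h_src, h_tgt):
    """Biased squared MMD between the shared-structure encodings of the
    source (h^s_c) and target (h^s_t) modalities."""
    return (rbf_mixture(h_src, h_src).mean()
            - 2.0 * rbf_mixture(h_src, h_tgt).mean()
            + rbf_mixture(h_tgt, h_tgt).mean())

h_src = torch.randn(32, 128)          # N^s source-modality encodings
h_tgt = torch.randn(32, 128) + 0.5    # N^t target-modality encodings
print(mmd_similarity_loss(h_src, h_tgt))
```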
§.§.§ Tumor Loss
The tumor loss function consists of a Laplacian loss function ℒ_lap and the LPIPS loss function ℒ_lpips <cit.>. The Laplacian loss function is defined as :
ℒ_lap = 𝔼 || L(x) - L(x̂) ||^2_2.
The LPIPS loss function is defined as:
ℒ_lpips = ∑_kτ^k (ϕ^k(x) - ϕ^k(x̂)).
So the tumor loss is described below:
ℒ_tumor = λ_lapℒ_lap + λ_lpipsℒ_lpips.
In (<ref>), ϕ(·) represents the feature extractor and τ(·) computes the feature score from the k-th layer of the backbone architecture. As a result, the LPIPS value is the average score of all backbone layers. To compute the LPIPS loss, we used a well-trained VGG <cit.> network. The Laplacian loss is used to identify the tumor region's high-frequency component. Due to LPIPS loss, the real tumor region and the reconstructed tumor region are more similar, which is more consistent with the radiologist's judgment.
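As a rough illustration, the Laplacian term can be implemented with a fixed 3×3 Laplacian kernel, while the LPIPS term is supplied by any pretrained perceptual-similarity scorer (replaced below by a dummy callable so the sketch runs standalone). The kernel, tensor shapes and weights are our own assumptions, not details of the K-CROSS code.

```python
import torch
import torch.nn.functional as F

# fixed 3x3 Laplacian kernel used here as the high-frequency operator L
LAPLACIAN = torch.tensor([[0., 1., 0.],
                          [1., -4., 1.],
                          [0., 1., 0.]]).view(1, 1, 3, 3)

def laplacian_loss(x, x_hat):
    """L_lap = E || L(x) - L(x_hat) ||_2^2 over single-channel tumor crops."""
    return ((F.conv2d(x, LAPLACIAN, padding=1)
             - F.conv2d(x_hat, LAPLACIAN, padding=1)) ** 2).mean()

def tumor_loss(x, x_hat, lpips_fn, lam_lap=1.0, lam_lpips=1.0):
    """L_tumor = lam_lap * L_lap + lam_lpips * L_lpips, with the LPIPS term
    supplied by a pretrained perceptual-similarity scorer."""
    return lam_lap * laplacian_loss(x, x_hat) + lam_lpips * lpips_fn(x, x_hat)

x = torch.rand(4, 1, 64, 64)                  # real tumor regions
x_hat = x + 0.05 * torch.randn_like(x)        # reconstructed tumor regions
dummy_lpips = lambda a, b: torch.tensor(0.0)  # stand-in for a VGG-based LPIPS scorer
print(tumor_loss(x, x_hat, dummy_lpips))
```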
§.§.§ Structure Loss
We employ L_1 loss function to extract meaningful semantic structure features, where the structure loss function is defined as:
ℒ_stru = ||x - x̂||_1.
§.§.§ Inconsistency Loss
We adopt the MSE loss function to optimize the weights of complex score network n_c and natural score network n_nat, where the inconsistency loss is defined as:
ℒ_inc = ||η_total - η_ra||_1,
where the score of K-CROSS η_total is aligned with the scale of the radiologist's rating score η_ra via our proposed ranking algorithm. The details of the ranking algorithm can be found in Algorithm <ref>.
§.§.§ Total Loss
For the first stage, the loss function is described below:
ℒ_first = λ_1ℒ_tumor + λ_2ℒ_stru + λ_3ℒ_freq + λ_4ℒ_sim.
In the second stage, we optimize the parameters of the complex score network and the natural score network via
ℒ_second = ℒ_inc.
In this work, all weights of λ are set to 1.
§.§ Algorithms
The two stages of training K-CROSS are depicted in Fig <ref>. The details of two-stage training algorithms and the inference algorithm are described in Algorithm <ref>, Algorithm <ref> and Algorithm <ref>, respectively. For clarity, Table <ref> provides notation descriptions that occurred in our algorithms.
§ NIRPS DATASET AND RADIOLOGIST SCORE
To comprehensively evaluate the synthesis performance, we construct a large-scale multi-modal neuroimaging perceptual similarity (NIRPS) dataset with 6,000 radiologist judgments. NIRPS dataset is composed of three subsets generated by CycleGAN <cit.>, MUNIT <cit.> and UNIT <cit.>. Each set contains 800 images generated by IXI and 1,200 images generated by BraTS. The IXI dataset includes two modalities, PD and T2, while the BraTS dataset includes three modalities, T1, T2, and FLAIR. In both the IXI and BraTS datasets, we randomly select 10 slices for training and collect the training results after each epoch of the model trained over 40 epochs.
IXI <cit.> collects nearly 600 MR images from normal and healthy subjects at three hospitals. The MR image acquisition protocol for each subject includes T1, T2, PD-weighted images (PD), MRA images, and Diffusion-weighted images. In this paper, we only use T1 (581 cases), T2 (578 cases) and PD (578 cases) data to conduct our experiments, and select the paired data with the same ID from the three modalities. The image has a non-uniform length on the z-axis with a size of 256 × 256 on the x-axis and y-axis. The IXI dataset is not divided into a training set and a test set. Therefore, we randomly split the whole data into a training set (80%) and a test set (20%).
BraTS2021 <cit.> is designed for brain disease analysis and diagnosis. The dataset of multi-institutional and pre-operative MRI sequences is made publicly available, and it includes both training data (1251 cases) and validation data (219 cases). Each 3D volume is 155×240×240 in size and is imaged by four sequences: T1, T2, T1ce, and FLAIR.
Training Data Processing
To ensure data validity and diversity, we remove the skull from each slice, split the three-dimensional volume, and choose slices ranging from 50 to 80 on the z-axis. All images are cropped to 256 × 256 pixels. During the training stage, we choose a total of 10k images from the IXI and BraTS2021 datasets.
§.§.§ Radiologist Score
The NIRPS dataset contains radiologist scores (RS) obtained by manual annotation of each image. It is worth noting that the radiologist score RS has 10 levels, i.e., RS ∈ {0, 0.1, 0.2, …, 0.9}. A higher RS value indicates better synthesized neuroimage quality. The radiologists give scores according to how well the synthesized neuroimage supports diagnosis and therapy. Fig. <ref> gives the distribution of RS. We can see that the synthesis performance varies among the three models and the average RS lies in the middle of the scale.
§.§.§ How Radiologists Assess?
We prepare the real paired-modality neuroimage dataset M in advance. M consists of source modalities M_s and target modalities M_t. We generate the synthesized target-modality neuroimages M̂_t by feeding M_s into a generative model, i.e., CycleGAN, MUNIT or UNIT in NIRPS. Each radiologist then scores M̂_t by comparing it with the real target modality M_t. For instance, suppose we have paired ground-truth modality datasets, T1 and T2. As shown in Fig. <ref>, we synthesize the fake T2 by feeding T1 into the MUNIT model. The radiologists make direct comparisons between fake T2 and real T2 and give their score for the synthesis quality of T2.
§.§.§ How Radiologists Combine Their Evaluations?
We hire 10 radiologists to evaluate the quality of each synthesized neuroimage. For each image, we remove the highest and the lowest of the 10 scores, and the final score is the average of the remaining 8 scores.
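Read literally, this aggregation rule is a trimmed mean; a minimal Python sketch (with made-up example scores) is given below.

```python
def combine_radiologist_scores(scores):
    """Drop the single highest and single lowest of the 10 ratings and
    average the remaining 8, as described above."""
    assert len(scores) == 10
    trimmed = sorted(scores)[1:-1]
    return sum(trimmed) / len(trimmed)

print(combine_radiologist_scores([0.3, 0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8, 0.9]))
```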
§ EXPERIMENT AND ABLATION STUDY
§.§ K-CROSS vs Other Metrics
Table <ref> illustrates the inconsistency between metrics and human evaluations for several datasets and generative models, with the best performance shown in red. The calculation method for the inconsistency value is given in Algorithm <ref>. We evaluate K-CROSS on datasets created by CycleGAN, MUNIT, and UNIT. The first column indicates the various IQA methods. The second column indicates which dataset was used to train the K-CROSS model, i.e., IXI or BraTS. From Table <ref>, our proposed K-CROSS is more compatible with the assessments of radiologists. Note that the IXI dataset is a healthy-person dataset, in which no neuroimage contains a lesion, so K-CROSS only uses the structure branch and complex branch to assess the quality of the neuroimages. The details are described in Section <ref>.
§.§ K-Space Importance
Table <ref> records the ablation study of the individual branches (the complex branch, tumor branch and structure branch) on various datasets. For instance, when we conduct the ablation study of the complex branch, K-CROSS removes the tumor branch and structure branch in the inference phase; in other words, K-CROSS only obtains the η_complex score. The same setting applies to the other branches. It can be clearly observed that the complex branch obtains the highest score among the three branches. This strongly indicates the importance of k-space, which reflects the inherent properties of magnetic resonance imaging principles. The second best is the tumor branch, which verifies its effectiveness on the lesion (disease) datasets.
§.§ Metrics for Healthy Person
Table <ref> shows that K-CROSS surpasses the mainstream IQA methods on the IXI healthy-person dataset. When assessing synthesized neuroimages of healthy persons, K-CROSS removes the tumor branch in the inference phase, because there are no lesions in healthy-person datasets. This means that K-CROSS only combines η_complex and the score of the structure encoder as the final score. From Table <ref>, it can be clearly observed that the K-CROSS complex branch score η_complex (blue value) surpasses the other IQA methods, which confirms the importance of k-space for MRI of healthy persons. Thus, K-CROSS can still serve as a metric for the synthesis quality of healthy persons' neuroimages.
§.§ Segmentation Network Effect
Table <ref> shows that K-CROSS maintains stable performance even when using different state-of-the-art medical segmentation models. Note that the parameters of the pre-trained segmentation network are frozen during the training phase. The first column denotes the segmentation method. We calculate the variance of the K-CROSS value for CycleGAN, MUNIT and UNIT when using different segmentation backbone models. We find that the variance of the K-CROSS performance is tiny (0.2%, 0.3%, and 0.2%). Hence, the performance of K-CROSS is not affected by the choice of segmentation model.
§.§ General Metric? Overcoming Domain Gap
The purpose of this paper is to demonstrate that K-CROSS is capable of serving as a standard measure across MRI datasets. We conduct extensive experiments, with results given in Table <ref>, Table <ref>, and Table <ref>, to demonstrate that K-CROSS is affected neither by the dataset domain gap nor by the generative model. The training dataset is BraTS, and the test dataset is IXI. As described in Section <ref>, we remove the tumor branch score when K-CROSS evaluates the quality of neuroimages in healthy cases. We also observe that K-CROSS surpasses DISTS and LPIPS (the state of the art for natural images) by 7.8% and 16.5% on average, respectively, which shows that K-CROSS is built on the MRI imaging principle instead of operating only at the natural-image level. From this ablation study, we demonstrate that the performance of K-CROSS (ℒ_stru+ℒ_freq) is stable across several MRI datasets, with the potential to serve as a generic measure for evaluating the quality of synthesized MRI.
§ CONCLUSION
In this paper, we proposed a new metric K-CROSS for assessing the performance of the synthesized medical images, which is built on the magnetic resonance imaging principle. To improve the capability of reconstruction during training K-CROSS, a complex U-Net was developed. As for training a learning-based full IQA metric, we further constructed a large-scale multi-modal neuroimaging perceptual similarity (NIRPS) dataset. Experimental results indicate that K-CROSS is a useful indicator for evaluating the quality of the generated medical data.
Limitation and Negative Society Impact Our method heavily relies on deep learning-based techniques but without directly injecting the knowledge of radiologists into K-CROSS. In the future, K-CROSS need to combine causal inference methods to enhance interpretability.
§ ACKNOWLEDGMENT
This work is partially supported by the National Key R&D Program of China (Grant NO. 2022YFF1202903) and the National Natural Science Foundation of China (Grant NO. 62122035, 61972188, and 62206122). Y. Jin is supported by an Alexander von Humboldt Professorship for AI endowed by the German Federal Ministry of Education and Research.
ieee_fullname
|
http://arxiv.org/abs/2307.05713v2 | 20230711183417 | Thermodynamics of computations with absolute irreversibility, unidirectional transitions, and stochastic computation times | [
"Gonzalo Manzano",
"Gülce Kardeş",
"Édgar Roldán",
"David Wolpert"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"physics.comp-ph"
] |
Institute for Cross-Disciplinary Physics and Complex Systems (IFISC) UIB-CSIC, Mallorca, Spain
University of Colorado, Boulder, Colorado 80309, United States
Santa Fe Institute, Santa Fe, New Mexico 87501, United States
ICTP – The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy
Santa Fe Institute, Santa Fe, New Mexico 87501, United States
ICTP – The Abdus Salam International Centre for Theoretical Physics, Strada Costiera 11, 34151 Trieste, Italy
Developing a physical theory of computation is an open challenging task at the interface of non-equilibrium thermodynamics and computer science. An important part of this task requires the examination of thermodynamic quantities in formal models of computers which execute computational tasks, such as automata or word-RAM models executing algorithms. This implies dealing with a number of difficulties such as stochastic halting times, unidirectional (possibly deterministic) transitions, and restricted initial conditions. Here, we present a framework which tackles all such difficulties by extending martingale theory of nonequilibrium thermodynamics to non-stationary Markovian dynamics with broken local detailed balance and absolute irreversibility. In doing so, we derive universal fluctuation relations and second-law-like inequalities that provide both lower and upper bounds for the intrinsic dissipation (mismatch cost) associated with any computation performed over arbitrary stochastic times, so long as it is implemented with a periodic process.
We then provide universal equalities and inequalities for the acceptance probability of words of a given length by a computer in terms of thermodynamic quantities, and outline connections between computer science and stochastic resetting. We illustrate most of our results with exhaustive numerical simulations of fundamental models of computation from theoretical computer science, namely, deterministic finite automata processing bit strings. Our results, while motivated from the computational context, are applicable far beyond it.
Thermodynamics of computations with absolute irreversibility,
unidirectional transitions, and stochastic computation times
David Wolpert
August 12, 2023
============================================================================================================================
§ INTRODUCTION
§.§ Background and motivation
In the last three decades there has been major progress in formulating far from equilibrium systems and processes. Using stochastic thermodynamics, we can now rigorously formulate the thermodynamic behavior of systems ranging from biological molecular machines to electronic circuits, evolving arbitrarily away from equilibrium. Celebrated results of stochastic thermodynamics include fluctuation relations that generalize the second law of thermodynamics <cit.>, speed limit theorems <cit.>, thermodynamic uncertainty relations <cit.>, large deviation approaches <cit.>, martingale fluctuation relations for extrema and stopping times <cit.>, and universal bounds on various kinetic and frenetic properties <cit.>.
The past decade also witnessed progress in the thermodynamics of computation. While many initial studies of the energetic costs of computation concerned unit operations such as bit erasure <cit.>, which are too primitive to be pertinent to the formal models of computation in theoretical computer science (TCS), very recent work has started to investigate the energetic costs of implementing computational machines central to TCS, which perform tasks such as string matching (justifiably more complex than bit erasure) <cit.>. Figure <ref>(a) shows the general model of a computational machine (henceforth called a computer) which implements the basic algorithm presented in Fig. <ref>(b).
An algorithm is a finite procedure for implementing a given task, which can be executed in various physical ways, e.g., while modifying the current on electrical wires or the structure of a DNA origami.
Formally, an algorithm consists of the instructions to be performed (which is implemented by the dynamics of the computer), the local variables and the memory arrays (stored by the computer), as well as mechanisms to decide when to repeat steps and when to halt. A computer executes an algorithm on a given set of inputs, starting from a certain initial state, potentially following unidirectional transitions in its state space, and halting at an arbitrary stochastic time that depends on the computation. Hence a general thermodynamic model of computers which implement arbitrary algorithms should be able to account for the energetic costs of implementing computational processes (i) at arbitrary stopping times, with (ii) unidirectional (possibly deterministic) transitions, and (iii) “absolute irreversibility" due to the computer being initialized to a designated start state.
However, most of the central results in stochastic thermodynamics do not directly apply to processes having the aforementioned three key ingredients of computational processes (stopping times, unidirectional transitions, and absolute irreversibility). In fact, a central assumption in much of stochastic thermodynamics is the condition of local detailed balance, which requires the system to have only bidirectional transitions, i.e., all transitions between any two states i → j with their reverse j → i have a finite, non-zero probability to occur in a finite time. On the other hand, the assumption of local detailed balance can be formally avoided by taking an inclusive Hamiltonian approach <cit.>, which has been applied recently to computational machines <cit.>. However, in general there may be hidden nonequilibrium (driven) degrees of freedom which need to be included in the thermodynamic framework beyond a surrounding thermal bath. Thus, so far little is known about the stochastic thermodynamics of systems with broken local detailed balance induced by unidirectional transitions <cit.> or athermal and nonequilibrium environments <cit.>. Moreover, the recently established martingale theory of thermodynamics (see <cit.> for a review) ––which formulates fluctuation theorems and second-law-like inequalities at generic stopping times–– has not yet addressed systems with either unidirectional transitions or absolute irreversibility.
In this paper, we develop a nonequilibrium thermodynamics theory for computations with stochastic computational times that may have unidirectional transitions and absolute irreversibility.
We focus on the intrinsic thermodynamic costs of generic computations, with minimal or no details about their physical implementation. We derive fluctuation relations for key thermodynamic quantities applicable to all computational processes which can be modeled as discrete-time Markov chains (DTMC), with both unidirectional and bidirectional transitions, and restricted initial probability distributions. These relations hold at both fixed and stopping times, and so simultaneously extend the martingale theory for stochastic thermodynamics to the case of DTMCs with absolute irreversibility and unidirectional transitions. The thermodynamic meaning of our results is established by introducing a generic physical implementation of the computer as a periodically-driven process operated over a set of hidden degrees of freedom. This allows us to link dissipation from underlying (physical) to visible (computational) levels, and obtain quantifiers for the energetic costs of computations up to their halting time or between halting times of consecutive computations, and their statistics. Our results, while emphasized here for computational processes, can also describe a wide range of systems, including, e.g., biochemical processes with irreversible release of molecules.
We illustrate our results using deterministic finite automata (DFA). Loosely speaking, a DFA is a system with a finite state space, initialized to a special start state q_0 at t = 0, which is a logical computer in its own right and can solve basic computational tasks such as string matching. More importantly, it constitutes the “finite logic” part [see Fig. <ref>(b), (c)] of the engineered computers in use today, and formally corresponds to the finite logic component of Turing machines (TM).
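To fix ideas, a DFA can be written down in a few lines. The toy automaton below (which accepts bit strings containing the substring "11") is purely illustrative and is not one of the DFAs simulated in this paper.

```python
# A DFA is specified by its states, a start state q0, a transition function
# over input symbols, and a set of accept states.
DELTA = {
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q0", ("q1", "1"): "q2",
    ("q2", "0"): "q2", ("q2", "1"): "q2",
}
ACCEPT = {"q2"}

def run_dfa(word, start="q0"):
    state = start
    for symbol in word:                 # one iteration per input symbol
        state = DELTA[(state, symbol)]
    return state, state in ACCEPT       # final state and accept/reject decision

print(run_dfa("01011"))                 # ('q2', True)
print(run_dfa("01010"))                 # ('q0', False)
```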
§.§ Summary of our contributions
Consider a discrete-time computational task which is implemented by a computer processing an input sequence of symbols w with length | w | = τ (e.g., τ binary numbers in a bit string) which may be produced by any type of source (physical or merely informational). We assume τ to be fixed, and given by a limit time that the computation can last. During such computation, the state of the computer evolves in a stochastic manner, tracing a stochastic trajectory on a set of computational variables 𝐱_[0,τ]=x_0,…,x_τ. The computation is accomplished at a stochastic computation time 𝒯, which is a random variable itself.
Formally, computational times 𝒯 are specific examples of stopping times. A stopping time is the first time that a stochastic trajectory meets a specific predefined criterion [See Ch. 4.1.4. in Ref. <cit.> for rigorous mathematical definitions and mathematical properties of stopping times, and examples of stopping times in physics (e.g. first-passage times).]. In this work, we deal with stopping times which associate to each specific trajectory 𝐱_[0,τ]=x_0,…,x_τ a stochastic time ≤τ that is always smaller or equal than the limit time.
We investigate the thermodynamic costs of computations with stochastic duration. We thus focus on the following thermodynamic quantity, associated with a stochastic trajectory 𝐱_[0,] that takes place in the interval [0,],
Σ (𝒯)=∑_t=0^𝒯-1[ lnρ_t(x_t)/r(x_t) - lnρ_t+1(x_t+1)/r'(x_t+1)].
In Eq. (<ref>), ρ_t(x) is the probability for the computer to be in state x at time t during the computation, whereas r(x) is an arbitrary reference probability distribution.
On the other hand, ρ_t+1(x) and r'(x) correspond to the distributions retrieved by applying one iteration of the computer to ρ_t(x) and r(x), respectively. As we elaborate later, when the reference distribution minimizes the dissipation in the computer, the quantity (<ref>) can be understood as the intrinsic mismatch cost of the computation up to time 𝒯, which provides a lower bound on the entropy production incurred by any digital synchronous computer that implements it. For standard definitions of mismatch cost see e.g. Refs. <cit.>.
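As a concrete illustration, the sum above can be evaluated along a single trajectory once the transition matrix, the initial distribution and the reference distribution are specified. The two-state example below uses made-up numbers and keeps the reference fixed across iterations, consistent with a periodically driven computer.

```python
import numpy as np

def intrinsic_mismatch_cost(traj, W, rho0, r):
    """Evaluate the intrinsic mismatch cost Sigma along one trajectory
    traj = (x_0, ..., x_T). rho_t evolves as rho_{t+1} = W rho_t, while the
    reference r (and r' = W r) is the same in every iteration."""
    rho = rho0.astype(float).copy()
    r_prime = W @ r
    sigma = 0.0
    for t in range(len(traj) - 1):
        rho_next = W @ rho                       # one iteration of the computer
        sigma += (np.log(rho[traj[t]] / r[traj[t]])
                  - np.log(rho_next[traj[t + 1]] / r_prime[traj[t + 1]]))
        rho = rho_next
    return sigma

# toy two-state computer with a column-stochastic transition matrix W
W = np.array([[0.9, 0.3],
              [0.1, 0.7]])
rho0 = np.array([1.0, 0.0])   # computer initialized to its designated start state
r = np.array([0.5, 0.5])      # arbitrary reference distribution
print(intrinsic_mismatch_cost([0, 0, 1, 1], W, rho0, r))
```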
Some of our most important results are fluctuation relations and inequalities for the statistics of Σ(). These are three-fold universal: they are valid for all reference distributions; for all stopping rules that halt before τ+1; and for all Markovian computational processes even those with restricted initial conditions and unidirectional transitions.
As an important example, we derive the following integral fluctuation relation at stopping times
⟨ e^- Σ(𝒯) - δ_τ(𝒯)⟩ = 1 - Γ_τ,
Here and throughout the paper, we use ⟨ A(𝒯)⟩≡𝔼(A(𝒯)) for the expectation of any functional A of x_[0,𝒯] over many realizations of the computation time 𝒯. In Eq. (<ref>) above,
δ_τ(𝒯)= .lnρ_t(x_t)/ρ̅_τ - t(x_t)|_t=
is the stochastic distinguishability at stopping times introduced in Ref. <cit.> which is related to the system entropy at the stopping time.
The distribution ρ̅_t(x) is the probability for an auxiliary computational process to be in state x at time t during the computation, see Sec. <ref> for the formal definition of the auxiliary process. Such auxiliary computation is in general different from the computation under study, which has probability distribution ρ_t.
On the other hand, Γ_τ is a quantity introduced in this work related to the probability of absolutely irreversible trajectories at stopping times, see Eq. (<ref>).
The fluctuation relation (<ref>) is a central result of our work which implies, among other things, a second-law inequality at stopping times. In particular, the intrinsic mismatch cost averaged over many realizations of the computation time, ⟨Σ(𝒯)⟩, obeys the inequality
⟨Σ(𝒯)⟩≥ - ⟨δ_τ(𝒯) ⟩ - ln [1 - Γ_τ] .
As we show in this work, the right-hand side of Eq. (<ref>) gives a universal lower bound not only for the intrinsic mismatch cost of the computation but also for the underlying average entropy production incurred by the computer.
In particular, for the case of a stationary reference (r=π, with π'=π), ⟨Σ(𝒯)⟩ equals the average (discrete-time) nonadiabatic entropy production <cit.>.
We also remark that Eq. (<ref>) follows from a stronger result, namely Eq. (<ref>), which also reveals that e^- Σ(t) - δ_τ(t) is a supermartingale process, i.e. it conditionally decreases with time, ⟨ e^- Σ(t) - δ_τ(t) | x_[0,s]⟩≤ e^- Σ(s) - δ_τ(s), where t≥ s≥ 0, and ⟨ A(t)| x_[0,s]⟩=𝔼(A(t) | x_[0,s]) is an average of functional A over all computational trajectories with the condition of x_[0,s] = x_0,…, x_s to be fixed to a given sequence.
Putting forward the martingale theory for thermodynamics <cit.>, our results can be extended to multiple, ordered stopping times. In particular, for the case of two stopping times 𝒯_1 and 𝒯_2 with P(𝒯_2≥𝒯_1) = 1 we obtain another central result:
⟨Σ(_2) + δ_τ(_2) ⟩≥⟨Σ(_1) + δ_τ(_1)⟩,
which entails a powerful second-law inequality applicable to both starting and ending stochastic times of computations. Among the many implications of Eq. (<ref>) for computer science, we obtain a new refinement of the second law at stopping times (i.e. a lower bound for ⟨Σ(𝒯)⟩), together with an upper bound for ⟨Σ(𝒯)⟩ which, taken together, lead us to a sandwich inequality for ⟨Σ(𝒯)⟩
D (ρ_0 || ρ̅_τ)- ⟨δ_τ(𝒯)⟩≤⟨Σ(𝒯)⟩≤⟨Σ (τ) ⟩ -⟨δ_τ(𝒯) ⟩.
Here, D (ρ_0 || ρ̅_τ) = ∑_x ρ_0(x)ln[ρ_0(x)/ ρ̅_τ(x)]≥ 0 is the Kullback-Leibler (KL) divergence quantifying the “distance" between the initial distribution of the computer's state ρ_0 and the distribution ρ̅_τ of a reference computation process at the limit time τ. These results put forward recent research on upper bounds and inverse thermodynamic uncertainty relations in stochastic thermodynamics, see e.g. Refs. <cit.>.
In addition, we fruitfully exploit the supermartingale property of e^- Σ(t) - δ_τ(t), together with the fluctuation relation (<ref>) to derive universal equalities and inequalities for the probability that a sequence of τ data is accepted by a computer. Moreover, considering multiple ordered stopping times 𝒯_1≤𝒯_2≤𝒯_3≤…≤τ, we show how to extend our results to assess sequences of concatenated computations, sketching links between our formalism, computer science, and stochastic resetting <cit.>.
It is worth remarking that all our contributions, while originally motivated from problems arising in the computational context, are applicable to generic systems following Markovian evolution, where the time-homogeneous DTMC dynamics leading to the trajectories 𝐱_[0,𝒯] derives from an underlying physical continuous-time Markov chain (CTMC) that is periodic.
§.§ Roadmap
The rest of the paper is organized as follows: In Sec. <ref>, we provide the elementary concepts of our framework, including a formal definition of DFAs; the description of computational processes (such as the
running of a DFA) as Markov chains, and the physical implementation of those Markov chains. We also review the relevant thermodynamic quantities allowing us to bound energetic costs of computations. A central feature of our approach is to use mismatch cost to derive lower bounds on various thermodynamic quantities linking the physical dissipation of the computer with the corresponding dynamics over symbolic computational states.
In Sec. <ref>, we introduce an auxiliary computational process that allow us to address the thermodynamics of computations ending at a fixed time at the fluctuating level, and discuss how to incorporate explicitly the role of absolute irreversibility and unidirectional transitions, which is crucial because conventionally formulated computational machines have precisely those features.
In Sec. <ref> we present our main results for general computations starting and ending at stochastic halting times, which include fluctuation theorems and second-law inequalities that provide lower bounds to the average dissipation in a computation. The thermodynamic implications of such universal relations are discussed, together with the possibility of reducing thermodynamic costs by employing stochastic times. In Sec. <ref> we provide illustrations with numerical simulations of DFAs processing binary input strings that we use to test the theoretical results of Sec. <ref>.
In Sec. <ref> we further develop our theory to relate the values of several important thermodynamic quantities at stopping times with acceptance and rejection probabilities in the DFA. Sec. <ref> is then devoted to sketch how our theory can be applied to investigate the thermodynamics of multiple concatenations of runs of a DFA, where after each run ends the system is reset to an initial start state
and the next run begins. We conclude with <ref>, where we present our main conclusions and further discuss future research directions motivated by our findings. Mathematical details of the derivations, proofs, and extra discussions are left to corresponding appendices.
§ MARKOVIAN COMPUTATIONS
In the following, we make the assumption that the implementation of a task on a given computer is realized through a physical process which induces Markovian (discrete) dynamics over a set of relevant computational states. The actual physical process being modeled will be a generic physical, chemical or biological system, whose dynamics can be described at a microscopic level over a set of hidden degrees of freedom <cit.>, here assumed to be not directly accessible. In particular, it is customary to model a computation as a continuous-time Markov chain (CTMC) <cit.>.
In “synchronous” physical computers — such as all real-world digital computers — this CTMC is driven externally following a periodic protocol induced, e.g., by the AC electric current powering a computer. Such underlying periodic driving might be ignored in modelling the computational process, by describing its evolution by coarse graining it in time, and this results in an effective model given by a time-homogeneous DTMC. Throughout this paper, we will work at such a coarse-grained level, and consider computational processes as generic DTMCs with time-independent transition probabilities. In doing so, we will map the underlying physical process to the DTMC dynamics of the (symbolic) computational states to formulate the actual physical dissipation in a thermodynamically consistent manner.
§.§ Stochastic computational processes
We consider computational processes described by a DTMC that can take values over a discrete set of N≥ 1 computational states x_t∈𝒳, with t=0,1,2…. For simplicity, we assume that the transition probabilities between the computational states are time independent (however, our results can be extended to time-dependent transition probabilities).
We write P(x_t+1 | x_t) for the conditional probability of jumping to state x_t+1 given that the previous state was x_t in a single time-step or iteration of the computational process. (Note that in a DTMC x_t can be the same as x_t+1, allowing for time instances where the system dwells in a given state.) We write ρ_t(x) for the probability of being in state x at time t, given an ensemble of realizations of the Markovian process. The associated discrete-time master equation is ρ_t+1=𝐖ρ_t, where ρ_t is an N× 1 column vector and [𝐖]_i,j = P(x_t+1 =i | x_t = j) is the transition probability matrix. The transition matrix 𝐖 has at least one fixed point with distribution π(x) such that 𝐖π = π, and if 𝐖 is aperiodic and irreducible, π becomes the unique stationary distribution in the long-time limit, that is, lim_t →∞ρ_t = π. However, what follows does not require π to be unique.
We remind the reader that throughout the paper we denote by τ the limit time for a computation, i.e., the maximum time that can be spent executing a computation, and assume it to be fixed. The probability of a sequence 𝐱_[0,τ]=x_0,x_1,…, x_τ is
P(𝐱_[0,τ]) = ρ_0(x_0) ∏_t=0^τ-1 P(x_t+1 | x_t).
Here we allow for arbitrary initial distributions ρ_0(x_0) and transition probabilities P(x_t+1 = j | x_t = i) = P(j | i). In particular, some of the transitions might be bidirectional (i ↔ j) and others unidirectional (i → j). Bidirectional transitions are characterized by conditional probabilities verifying P(i | j) > 0 whenever P(j | i)>0, while for unidirectional ones we can have P(i | j) = 0 with P(j | i)>0. We notice that exactly because of the existence of unidirectional transitions, it is mandatory to relax the condition of local detailed balance, which is arguably among the most common assumptions adopted in the formulation of stochastic thermodynamics <cit.>.
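To make the path probability and the role of unidirectional transitions concrete, the NumPy sketch below samples a trajectory of a small DTMC with a restricted initial condition and evaluates its path probability; the three-state transition matrix is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Column-stochastic transition matrix W with a unidirectional transition:
# P(0|1) = 0.5 > 0 while P(1|0) = 0, so local detailed balance is broken.
W = np.array([[0.6, 0.5, 0.0],
              [0.0, 0.3, 0.4],
              [0.4, 0.2, 0.6]])
rho0 = np.array([1.0, 0.0, 0.0])     # restricted (start-state) initial condition

def sample_trajectory(tau):
    x = rng.choice(3, p=rho0)
    traj = [x]
    for _ in range(tau):
        x = rng.choice(3, p=W[:, x])
        traj.append(x)
    return traj

def path_probability(traj):
    """P(x_[0,tau]) = rho_0(x_0) * prod_t P(x_{t+1} | x_t)."""
    p = rho0[traj[0]]
    for t in range(len(traj) - 1):
        p *= W[traj[t + 1], traj[t]]
    return p

traj = sample_trajectory(tau=6)
print(traj, path_probability(traj))
```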
One of the main quantities of interest in stochastic thermodynamics is stochastic entropy production (EP) which equals the logarithm of the ratio between forward and time-reversed path probabilities of a thermodynamic process <cit.>.
This quantity, however, generically depends on the details of the underlying physical process implementing the computation, and hence is not directly accessible unless certain simplifying assumptions are made, such as the condition of local detailed balance.
Nevertheless, here we aim to obtain a thermodynamic description of the computational processes as deduced solely from the (discrete-time) dynamics of the visible variables defining the computation x_t ∈𝒳. While our analysis holds for arbitrary DTMCs, we focus on digital synchronous computers, which undergo a time-homogeneous dynamics over discrete time, and which we connect to the underlying physical process generating it in a simple manner. This allows us to express and bound the entropy production of the computational task implemented by the DTMC, alongside the work and heat dissipated into the environment. For simplicity, we take the continuous-time physical process that implements the time-stationary DTMC to be periodic and choose units so that the period of the physical process is 1.
As an example (and to help ground the reader's intuition), suppose that our time-homogenous DTMC is implemented by a time-inhomogeneous CTMC. It is well-known that in general, this requires that the CTMC evolves over an enlarged version of the DTMC's state space 𝒴⊇𝒳, which includes “hidden states” in addition to the “visible" states of the DTMC <cit.>. In particular, this is true when the DTMC is the update function of a computational machine.
Therefore our assumption that the continuous-time physical process is periodic implies that the time-inhomogeneous CTMC is periodic. As a result, the thermodynamics arising in any single iteration of the physical system (implementing the computational machine that starts at discrete time t in some state y(t) ∈𝒴) is independent of t. In the following, for further simplicity (and to ensure a time-homogeneous DTMC) we also assume that non-computational degrees of freedom in 𝒴 are reinitialized within every single iteration to their (possibly nonequilibrium) initial states.
§.§ Mismatch cost
Enlarging our original description over 𝒴 to include all relevant physical variables of the computer is crucial to define the associated entropy production and other relevant thermodynamic costs of computation. Here we show that this can be done in a standard way.
In particular, suppose that we are interested in some generic thermodynamic average cost function that can be written as
𝒞(τ) = S(ϱ_τ) - S(ϱ_0) + F
:= S(G ϱ_0) - S(ϱ_0) + F
where S(ϱ)=-∑_i ϱ(i) lnϱ(i) is Shannon entropy, ϱ_0 is any initial distribution over the (extended set of) states of the system, G is the linear map that transforms that distribution to an associated ending distribution ϱ_τ, and F is an arbitrary linear functional of the initial state.
As a canonical example, in CTMC-based stochastic thermodynamics obeying local detailed balance,
the EP generated during a process is given by <ref> by setting F equal to the average entropy flow to the environment:
F = ∫_0^τdt ∑_v ∑_i, j ϱ_t(j) K^v_ij(t) ln[K^v_ji(t)/K^v_ij(t) ]
where K^v_ij(t) is the rate matrix associated to thermal reservoir v,
and the rate matrix of the CTMC is ∑_v K^v_ij(t) [Note that formula for F is indeed linear in ϱ_t]. For different choices of F, 𝒞(τ) gives different thermodynamic quantities besides EP, such as the drop in nonequilibrium free energy of the system during the process <cit.>, among many others.
For any such cost 𝒞 in Eq. (<ref>), and any physical process represented by G, the prior distribution is defined to be the initial distribution that minimizes 𝒞(ϱ). (It is called the prior because it is, formally speaking, a prior distribution for calculating the posterior probability of an initial state of a thermodynamic process given its final state <cit.>.) We write the prior as ϱ_min. The average mismatch cost is
ℳ(τ) := D(ϱ_0 || ϱ_min) - D(G ϱ_0 || G ϱ_min)
where D(ϱ_1 || ϱ_2) = ∑_y ϱ_1(y)ln[ϱ_1(y)/ ϱ_2(y)], denotes the Kullback-Leibler (KL) divergence between the distributions ϱ_1 and ϱ_2 for the case of a discrete random variable. Then we have, for all ϱ_0
𝒞(τ) = ℳ(τ) + ℛ(τ),
where ℛ(τ) is an extra non-negative contribution. Expressions analogous to <ref> hold for other state spaces, e.g., real-valued states, density matrices, etc. Moreover, there are no assumptions of detailed balance or the like in the derivation of <ref>; it holds purely for mathematical reasons.
The drop in KL divergence ℳ(τ) is usually called the “mismatch cost”, and the additive term ℛ(τ) is called the “residual cost” <cit.>.
For the trajectory level version of mismatch cost in Eq. (<ref>) see <ref>.
In the specific case in which F is identified with the entropy flow in Eq. (<ref>), residual cost is often called “residual EP”. See <ref> for a discussion of residual cost,
and why we ignore it in this paper.
By the data-processing inequality for KL divergence <cit.>, ℳ(τ) is never negative. Moreover, it can be shown that the prior ϱ_min in Eq. (<ref>) has full support (see App. A in Ref. <cit.>), which ensures that the mismatch cost is finite. Note also that the mismatch cost formula (<ref>) is based on evaluating ϱ at both the beginning and the end of the time interval [0, τ]. This means this general formula applies to any physical process that maps ϱ_0 to ϱ_τ, for any choice of 𝒞, i.e., any choice of the linear functional F. All the (messy) physical details of the process and the precise choice of F are buried in the prior and the residual cost. In the following, unless explicitly stated otherwise we will focus on the case in which the cost function in Eq. (<ref>) is EP, i.e., we will use F in Eq. (<ref>).
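As a simple numerical illustration of the decomposition above (ours, with an arbitrary column-stochastic map G = 𝐖 and an assumed full-support prior), the average mismatch cost can be evaluated directly as a difference of KL divergences and is non-negative by the data-processing inequality:

import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) for discrete distributions
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

W = np.array([[0.9, 0.2, 0.0],          # illustrative linear (stochastic) map G
              [0.1, 0.5, 0.3],
              [0.0, 0.3, 0.7]])
rho0    = np.array([1.0, 0.0, 0.0])     # actual initial distribution
rho_min = np.array([0.5, 0.3, 0.2])     # assumed prior (full support)

mismatch = kl(rho0, rho_min) - kl(W @ rho0, W @ rho_min)
print(mismatch)                          # non-negative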
§.§ Strictly positive lower bounds for
dissipation in periodic processes
As described above, in real world (synchronous, digital) physical computers, the underlying physical process implementing each iteration of the computer is identical. This is true whether that physical process is a CTMC, a quantum operation, and so on. As noticed in Ref. <cit.>, this means that the prior ϱ_min for each iteration of the computer is the same [Here we are interested in the marginal prior distribution over computational states x ∈𝒳 only, not over the set 𝒴 containing hidden states.]. Using also the fact that non-computational variables are reinitialized in every single iteration (period) of the computational process, we can write the overall mismatch cost for any computation that takes exactly τ iterations in terms of computational variables as:
∑_t=0^τ-1ℳ(ρ_t) = ∑_t=0^τ-1[D(ρ_t || μ) - D(ρ_t+1 || μ^') ],
where ρ_t(x) = ∑_y ∉𝒳ϱ_t(y) is the (marginal) distribution over computational states x ∈𝒳 only, μ(x) = ∑_y ∉𝒳ϱ_min(y) is the prior over computational states at the beginning of (every) iteration, and μ' = 𝐖μ the prior at the end of every iteration (i.e., it is μ evolved to the end of the iteration).
We note that if ρ_0 = μ in Eq. (<ref>), then the first difference of KL divergences being summed equals 0. However, unless 𝐖 is degenerate (e.g., the identity matrix), 𝐖ρ_0 ≠ρ_0, and therefore 𝐖ρ_0 ≠μ. This in turn means that the second difference of KL divergences being summed in (<ref>) does not equal 0 (so long as 𝐖 is not logically invertible, i.e., not a permutation matrix). Therefore in this case, the overall sum will be strictly positive. This argument can be extended to prove that so long as 𝐖 is not logically invertible (and ρ_0 is not a fixed point of the dynamics), the mismatch cost sum in Eq. (<ref>) is not zero (see Appendix <ref>).
Since the above reasoning is true for all actual μ, we can lower bound the sum (<ref>) by minimizing over all distributions λ in the unit simplex Δ_X, whether or not they are a valid prior in some physical scenario:
∑_t=0^τ-1 ℳ(ρ_t) ≥inf_λ∈Δ_X ∑_t=0^τ-1 [D(ρ_t || λ) - D( ρ_t+1 || λ^')] > 0
with again λ ' = 𝐖λ (see also Ref. <cit.>). The precise prior μ in Eq. (<ref>) for the EP cost function will depend on the details of the precise physical process under consideration. On the other hand, the sum (<ref>) is independent of those details. We therefore obtain a strictly positive lower bound on EP, given in toto by ρ_0 and 𝐖. This strengthened second law arises solely from the fact that we have a periodic process with a non-logically invertible 𝐖. Moreover, because minimization in <ref> is over all possible priors, it provides a lower bound on all costs that can be written as in <ref>. We therefore refer to it as the minimal dissipation.
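Numerically, the minimal dissipation defined above can be estimated by minimizing the summed mismatch cost over candidate distributions λ in the simplex; the sketch below (ours) does so with an off-the-shelf optimizer, parameterizing λ through a softmax so the simplex constraint is automatic. The chain, initial distribution and horizon τ are illustrative choices.

import numpy as np
from scipy.optimize import minimize

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

W = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.5, 0.3],
              [0.0, 0.3, 0.7]])
rho0, tau = np.array([1.0, 0.0, 0.0]), 5

def summed_mismatch(z):
    lam = np.exp(z - z.max()); lam /= lam.sum()      # candidate prior in the simplex
    lam_p = W @ lam                                  # lambda' = W lambda
    rho, total = rho0.copy(), 0.0
    for _ in range(tau):
        rho_next = W @ rho
        total += kl(rho, lam) - kl(rho_next, lam_p)
        rho = rho_next
    return total

res = minimize(summed_mismatch, x0=np.zeros(3), method="Nelder-Mead")
print(res.fun)   # strictly positive here, since W is not a permutation and rho_0 is not a fixed point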
As an example, suppose that our DTMC is the dynamics of a noise-free digital computer, with update function f : X → X. Plugging in Eq. (<ref>), the minimal possible EP is
min_λ∈Δ_X ( ∑_t=0^τ-1 ∑_x ∈Ω_t ρ_0(f^-t (x)) ln[ ρ_0(f^-t(x))/λ(x) ]
- ρ_0(f^-t-1(x)) ln[ρ_0(f^-t-1(x))/λ(f^-1(x)) ] )
where Ω_t in the sum above is shorthand for the set of states that have nonzero probability
under the actual distribution ρ_0(x) iterated t times. So long as f is not just a permutation of the states of the computational machine that lie with the support of ρ_0, this is a strictly positive lower bound on the dissipation incurred by any synchronous physical device that implements that computation.
In the sense that it only depends on the conditional distribution W and the initial distribution ρ_0, the bound for periodic processes in <ref> is similar to the generalized Landauer's bound. In particular, the thermodynamic uncertainty relations and speed limit theorems are also lower bounds on EP that depend on the initial distribution over states and the discrete time conditional distribution of the dynamics. However, unlike the lower bound above, those other bounds depend on other properties of the process besides the initial distribution and the conditional distribution giving the dynamics (for example current precisions or expected activities). In this sense, the minimal dissipation given in <ref> is more powerful than those other lower bounds on EP (a closed form of this result in terms of Jensen-Shannon divergence has been also reported very recently in Ref. <cit.>).
In this paper, we calculate mismatch costs by summing the cost over single iterations of a computational machine operating periodically, as in Eq. (<ref>). In general this does not equal the standard mismatch cost for the entire computation, with an overall prior and a single drop in KL divergence between initial and final time τ. We remark that, to the authors' knowledge, the necessary and sufficient conditions for this quantity to be larger than the one we use in this paper are not known. However there is a particularly interesting case in which these two expressions become the same, namely, when EP is minimized at the stationary state of the DTMC, i.e. the prior μ coincides with π. In such case we recover from Eq. (<ref>) the well-known decomposition of EP into adiabatic and non-adiabatic contributions <cit.>, where mismatch cost reduces to non-adiabatic EP (also called excess EP <cit.>) and the residual cost becomes adiabatic EP (house-keeping heat <cit.>).
Finally it is also worth mentioning that, while we focused here on the case of computational processes, the quantities discussed above are general and apply for any periodic process leading to a DTMC over a restricted set of “visible" variables.
§.§ Deterministic Finite Automata
An important class of computational machines that can be described within our framework are the deterministic finite automata (DFA). There are several different, very similar definitions of DFA, some of which overlap with common definitions of “finite state machines”. To fix the discussion, here we adopt the following definition. A deterministic finite automaton is a 5-tuple (Q, θ, q_0, A, f)
where:
* Q is a finite set of (logical) states;
* θ is a finite (input) alphabet;
* q_0 ∈ Q is the start state;
* A ⊆ Q is the set of accept states; and
* f : Q ×θ→ Q is the update function,
mapping a current input symbol and the current logical state to a next
logical state.
A finite string of successive input symbols, i.e., an input string ω∈θ^*, is sometimes called an (input) word. To operate a finite automaton on a particular input word, one
begins with the automaton in its start state, and feeds that state together with the first symbol in the input word into the update function, to produce a new logical state. Then one feeds in the next symbol in the input word (if any), to produce a next logical state.
Note that one can represent any given DFA's update function as a directed graph, where each edge (q_1, q_2) taking logical state q_1 to state q_2 is labelled by the input symbols that would cause that transition (see <ref> (c) and <ref> for illustrations).
Our analysis of stochastic computational processes (as introduced above) in DFAs requires assigning probabilities to the input words (or to the symbols inside them) that are fed into the automaton, as well as identifying the computational states of the DTMC 𝒳, which may or may not coincide with the set Q of logical states of the DFA (typically 𝒳 may contain more variables, e.g., previously processed symbols). An important contribution of our work will be to show how one can do this analysis even though the dynamics of a DFA — its update function — is deterministic and often non-invertible (i.e., unidirectional), and given that the initial distribution over states of
the DFA (though not over the input words) is a delta function, centered on the start state (i.e., leading to absolute irreversibility).
A typical question of interest in computer science is whether the DFA is in an accept state of the set A after the last symbol from the input word is processed. If that is the case, one says that the automaton accepts that input word. In this way any given automaton uniquely specifies a language of all input words that that automaton accepts, which is called a regular language. Importantly, any particular DFA can process input words of arbitrary length [This means that one cannot model a given DFA as some specific (and therefore fixed width) circuit, in general. The DFA will have properties that are not captured by that circuit.
In this sense, individual DFAs are computationally more powerful than individual circuits.], and in general may enter and exit its set of accepting states multiple times, before the end of the input word. While the definition of whether an input word is accepted only depends on whether the ending logical state is an accepting state, the statistics of whether, how often, and precisely when a given DFA enters an accept state (when fed words generated by some given distribution) can be of independent interest.
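A minimal code sketch of the 5-tuple definition above may help fix ideas: the update function f is stored as a transition table, an input word is processed symbol by symbol, and acceptance is decided by membership of the final logical state in A. The even-parity automaton and the name dfa_accepts are purely illustrative and are not taken from the paper.

def dfa_accepts(word, delta, q0, accept):
    # delta: dict mapping (logical state, input symbol) -> next logical state (the update function f)
    q = q0
    for symbol in word:
        q = delta[(q, symbol)]
    return q in accept

# toy example: a two-state DFA accepting binary words with an even number of 1s
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd", "0"): "odd", ("odd", "1"): "even"}
print(dfa_accepts("0110", delta, "even", {"even"}))   # True
print(dfa_accepts("1011", delta, "even", {"even"}))   # False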
§ INTRINSIC THERMODYNAMICS OF COMPUTATIONS AT FIXED TIMES
The mismatch cost sum introduced in Eq. (<ref>) depends only on the computational degrees of freedom involved in the original DTMC dynamics and provides a lower bound on the average entropy production generated by the machine implementing the computation. It is hence a particularly useful candidate to assess the intrinsic (minimal) thermodynamic costs of computations. The prior μ(x) encodes the specific details of the physical implementation of the computational process. Concern for such details can even be avoided by considering the distribution ν(x) given by the infimum in Eq. (<ref>), which still provides a useful (positive) bound on EP.
To begin, we construct a stochastic description based on thermodynamic quantities that can be computed by introducing an auxiliary process. This process is defined in terms of the
“forward" discrete-time dynamics P(j | i), the initial distribution of that dynamics, ρ_0(x), and a reference distribution r(x) over computational states. The reference r(x) is arbitrary, and in particular could be chosen to obtain stochastic versions of the mismatch cost sum in Eq. (<ref>) [r(x)=μ(x)] and the minimum dissipation in Eq. (<ref>) [r(x)=ν(x)].
§.§ Thermodynamic costs of periodic
computations at the fluctuating level
We start by introducing the discrete-time auxiliary dynamics of the auxiliary process 𝐖̅, with transition probabilities [𝐖̅]_i,j defined from the ones in 𝐖 by
P̅(i | j) ≡P(j | i) r(i)/ r^'(j),
where r ' = 𝐖 r is the reference distribution r evolved for one iteration, i.e. r'(j)=∑_i P(j | i) r(i) [Note that by “P(j | i)” we do not mean the Bayesian inverse of the forward transition matrix P(i | j) — rather we mean that forward transition matrix evaluated for a transposed choice of the initial and final states.]. This auxiliary process is a bona fide Markov chain with ∑_i P̅(i | j) = ∑_i [P(j | i)r(i)]/r^'(j) =1 and 0 ≤P̅(i | j) ≤ 1 [This can be easily checked from ∑_j P̅(i | j) r^'(j) = ∑_j P(j | i) r(i) = r(i), which immediately implies [𝐖̅] r^' = r.].
Moreover, r^' transforms back into r in a single iteration under 𝐖. That is, W̅ corresponds to the Bayesian inverse of W with respect to the reference distribution r, leading to perfect retrodiction for the distribution r <cit.>.
To fully specify the auxiliary dynamics we must specify its initial distribution;
here we will always set it to the distribution of the original dynamics at its limit time, i.e., ρ̅_0(x) = ρ_τ(x). So the joint distribution of a trajectory 𝐱_[0,τ] under the auxiliary dynamics is
P̅(𝐱_[0,τ]) =
ρ̅_0(x) ∏_t=0^τ- 1P̅(x_t+1 | x_t)
Note that this choice of the initial distribution of the auxiliary process is not restricted by the choice of r in any way. Note as well that P̅(i | j) does not necessarily coincide with the transition probabilities induced by the time-reversed implementation of the underlying physical process, but it is solely defined from the distribution r(x) and the original Markov chain transition probabilities.
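The construction of the auxiliary transition matrix defined above is purely algebraic, as the following sketch (ours, with an illustrative W and reference r) makes explicit; the final checks confirm that the result is column-stochastic and maps r' back to r in one iteration.

import numpy as np

def auxiliary(W, r):
    r_prime = W @ r                                   # r' = W r
    # [Wbar]_{i,j} = P(j | i) r(i) / r'(j)
    return (W.T * r[:, None]) / r_prime[None, :]

W = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.5, 0.3],
              [0.0, 0.3, 0.7]])
r = np.array([0.5, 0.3, 0.2])

Wbar = auxiliary(W, r)
print(np.allclose(Wbar.sum(axis=0), 1.0))   # bona fide stochastic matrix
print(np.allclose(Wbar @ (W @ r), r))       # Wbar r' = r (perfect retrodiction of r)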
Using Eqs. (<ref>) and (<ref>), we can write the probability of a time-reversed discrete-time trajectory, Θ𝐱_[0,τ] = x_τ,x_τ-1,…, x_0, under the auxiliary dynamics as
P̅(Θ𝐱_[0,τ]) = ρ̅_0(x_τ) P̅(x_τ-1 | x_τ) …P̅(x_0 | x_1)
= ρ_τ(x_τ) ∏_t=0^τ-1 P(x_t+1 | x_t)r(x_t)/r'(x_t+1) .
The ratio between the path probability to observe a given trajectory of states, and the path probability to observe its time reversal under the auxiliary dynamics is
Σ(𝐱_[0, τ]) ≡ ln [P( 𝐱_[0,τ]) / P̅(Θ𝐱_[0,τ])]
= ∑_t=0^τ -1[ lnρ_t(x_t)/r(x_t) - lnρ_t+1(x_t+1)/r'(x_t+1)],
providing us, for r = μ, a stochastic version of the mismatch cost sum in Eq. (<ref>), and for r = ν, the minimal dissipation in Eq. (<ref>). The functional Σ(𝐱_[0, τ]) is an example of a “Σ-entropic functional", as introduced in Ref. <cit.>.
The specific choice for the transition probability of the auxiliary dynamics introduced in Eq. (<ref>) is crucial for avoiding divergences that would be induced by unidirectional links if we evaluate expressions like ln[P(i | j)/P(j | i)] — expressions that appear in most functionals associated with entropy production. This makes the functional Σ given by Eq. (<ref>) suitable to tackle fluctuations of Markovian processes with unidirectional transitions, which are precisely the (idealized) dynamics of many computational processes.
Here and in the following, as shorthand, we will often write trajectory-level quantities such as Σ(𝐱_[0, τ]) simply as Σ(τ), with the precise trajectory left implicit. Following such shorthand notation, equation (<ref>) can be decomposed as
Σ(τ) = Δ S_sys(τ) - Δϕ(τ).
where we write the change in stochastic Shannon entropy of the computer as
ΔS_sys(𝐱_[0, τ]) := - ∑_t=0^τ-1 [ lnρ_t(x_t) - lnρ_t+1(x_t+1) ]
= - lnρ_τ(x_τ) + lnρ_0(x_0) ,
and write the change in the nonequilibrium potential as
Δϕ(𝐱_[0, τ]) := ∑_t =0^τ-1 [-lnr^'(x_t+1) + lnr(x_t) ].
Such non-equilibrium potentials have been fruitfully employed in steady-state thermodynamics <cit.>, and account for the excess of entropy absorbed from the environment during the computation 𝐱_[0,τ] whenever the state of the system ρ_t differs from the distribution r along its time evolution.
Suppose that the initial distribution ρ_0(x) has full support. Then if we average Eq. (<ref>) over P( 𝐱_[0,τ]) we get ⟨P̅(Θ𝐱_[0,τ]) / P( 𝐱_[0,τ]) ⟩ = 1, which is an integral fluctuation relation <cit.>, ⟨ e^-Σ(τ)⟩ = 1. Moreover, ⟨ln[P( 𝐱_[0,τ])/P̅(Θ𝐱_[0,τ])]⟩≥ 0 is a KL divergence, which can be rewritten in an appealing form as
⟨Σ(τ)⟩ = ∑_t=0^τ -1[ D (ρ_t || r) - D (ρ_t+1 || r') ] ≥ 0.
We notice that for the choice r = μ we recover the expression for mismatch cost sum in Eq. (<ref>) while for r = ν we obtain Eq. (<ref>), as expected. Crucially, for the two choices r = μ and r = ν, the quantity ⟨Σ(τ) ⟩ provides a lower bound on the total average entropy production incurred in the physical implementation of the computational process, and therefore we may refer to it as the intrinsic mismatch cost associated to a given computation. We remark that here and above averages are over trajectories of fixed length τ, that is, ⟨Σ(τ)⟩≡𝔼(Σ(τ)).
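The identity between the trajectory average of Σ and the sum of drops in KL divergence can be checked by direct sampling; the sketch below (ours, with an illustrative chain, a full-support initial distribution, and an arbitrary reference r) compares the two estimates up to Monte-Carlo noise.

import numpy as np
rng = np.random.default_rng(0)

def kl(p, q):
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

W = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.5, 0.3],
              [0.0, 0.3, 0.7]])
rho0 = np.array([0.6, 0.3, 0.1])                      # full support
r = np.array([0.5, 0.3, 0.2]); r_p = W @ r
tau, n_traj = 4, 50_000

rhos = [rho0]
for _ in range(tau):
    rhos.append(W @ rhos[-1])
ensemble = sum(kl(rhos[t], r) - kl(rhos[t + 1], r_p) for t in range(tau))

total = 0.0
for _ in range(n_traj):
    x = rng.choice(3, p=rho0)
    for t in range(tau):
        x_next = rng.choice(3, p=W[:, x])
        total += (np.log(rhos[t][x] / r[x])
                  - np.log(rhos[t + 1][x_next] / r_p[x_next]))
        x = x_next
print(ensemble, total / n_traj)                        # agree up to sampling noise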
For more general choices of r, the quantity Σ(τ) can still be defined (as long as the distribution r has full support over 𝒳), however it cannot be guaranteed in general that ⟨Σ(τ) ⟩ would provide a lower bound on the underlying entropy production anymore.
In particular, by taking r = π, the stationary state of the DTMC, Σ(τ) becomes the discrete-time non-adiabatic entropy production for a relaxation process, whose average reads
⟨Σ(τ)⟩ = D (ρ_0 || π) - D (ρ_τ || π) ≥ 0,
thus we recover the expression for EP proposed by Spohn <cit.> (see also Ref. <cit.>). Remarkably in this case ⟨Σ(τ) ⟩ becomes non-extensive in time, contrary to the general case [c.f. (<ref>)]. As a consequence, the steady state π of the DTMC (whenever aperiodic and irreducible) becomes the natural candidate for the prior ν providing the infimum in Eq. (<ref>) in the large time limit. Therefore we expect the non-adiabatic entropy production in Eq. (<ref>) to provide the minimum dissipation of the computation in many cases of interest. However, it is worth remarking that ensuring that the average non-adiabatic entropy production is a lower bound on the EP would require π to share support with the initial distribution ρ_0 (which is often not the case in the computational context) and π to also be an invariant state of the time-reversed (underlying) physical dynamics of the computer <cit.>.
§.§ The role of absolute irreversibility
In many models of computation in TCS, the initial distribution ρ_0(x) over the states of the computational machine is restricted to a subset of computational states in 𝒳. For instance, almost any automaton —in particular, not just a DFA but also a TM— starts
in a single, predetermined state, x_0. Such a system
may have a delta-function initial distribution, ρ_0(x) = δ_x, x_0. For such an initial distribution the quantity e^-Σ(τ) = P̅(Θ𝐱_[0, τ])/P(𝐱_[0, τ]) may become ill-defined as there might be trajectories for which P(𝐱_[0,τ]) = 0, but P̅(Θ𝐱_[0,τ]) > 0, e.g., trajectories in the auxiliary dynamics that only visit states different from x_0. This phenomenon has often been referred to as absolute irreversibility <cit.>.
Following the techniques in Refs. <cit.> one can circumvent the divergence associated with absolute irreversibility by restricting the averages over sets of trajectories for which the intrinsic mismatch cost, Σ(τ), is well defined. Adopting the language of modern probability theory <cit.>, we call such sets filtrations (see also <cit.>). In particular, we denote ℱ the filtration containing all possible trajectories 𝐱_[0,τ] taking place in [0,τ]. Similarly, we call ℱ_AI the filtration containing all “absolutely irreversible" trajectories, that is, trajectories for which P(𝐱_[0,τ]) = 0, but P̅(Θ𝐱_[0,τ]) > 0. On the other hand, we denote the complementary set of “absolutely continuous" trajectories as ℱ_AC, such that ℱ = ℱ_AC∪ℱ_AI.
Using these definitions, an extended version of the integral fluctuation theorem (IFT) for the intrinsic mismatch cost can be shown to hold,
⟨ e^-Σ(τ)⟩ = 1 - γ_τ
where 0 ≤γ_τ≤ 1 is the total probability that the time-reversed picture of any absolutely irreversible trajectory (i.e., one belonging to ℱ_AI) occurs in the auxiliary dynamics
γ_τ = ∑_𝐱_[0, τ]∈ℱ_AIP̅(Θ𝐱_[0, τ])≤ 1.
Applying Jensen's inequality ⟨ e^x⟩≥ e^⟨ x⟩ to the IFT (<ref>) we obtain a lower bound on the intrinsic mismatch cost, implying a minimum dissipation due to the restricted initial condition:
⟨Σ(τ) ⟩≥ - ln[1-γ_τ] ≥ 0,
where the second inequality follows from γ_τ≥ 0 and hence extends the applicability of Eq. (<ref>) to systems showing absolute irreversibility. We remark that here absolute irreversibility arises because of the restricted initial distribution, but not because of the unidirectional transitions, since they have been flipped in the auxiliary dynamics according to Eq. (<ref>). Fluctuation theorems similar to Eq. (<ref>) have been previously derived within the canonical framework of stochastic thermodynamics for entropy production <cit.> and standard mismatch cost <cit.>, as well as in the inclusive Hamiltonian framework for entropy production <cit.>.
§ THERMODYNAMICS OF COMPUTATIONS AT STOCHASTIC STOPPING TIMES
We now extend our analysis to investigate the thermodynamics of computations which first reach a
computational state of interest at a time that varies depending on the random input provided to the computer. In doing so, we extend the martingale theory for stochastic thermodynamics <cit.> to accommodate unidirectional transitions and arbitrary initial distributions leading to absolute irreversibility.
Consider a random sequence of τ bits sequentially fed into a computer (e.g., a DFA); see also Fig. <ref>:
0 0 0 1 0 1… 0 1 1 1_τ bits,
with τ≥ 1 being the word length processed by the machine. While processing a specific sequence, the computer jumps between its computational states, as described in Sec. <ref>.
We are interested in the thermodynamics of the (physical implementation of the) computer during the time from when it starts to a stopping time, 𝒯, that is, until a stopping condition is met. For example, we will often consider the stopping condition to be simply that the computer has reached an accept state for the first time. Note that this stopping time generally takes a different value when processing different words. Since the words are generated by sampling a distribution, this means that the stopping time is a random variable.
Generalizing from this case to give a fully formal definition, a stopping time is the earliest instance when a particular condition concerning the entire trajectory generated by a stochastic process is met:
𝒯(𝐱_[0,t]) := inf{ t∈ [0,τ] | 𝐱_[0,t]∈Ω},
where Ω⊆ℱ denotes the set of trajectories satisfying the stopping condition. For example, Ω might be the set of trajectories of a given DFA that have reached an accept state at least once.
Note that its definition in Eq. (<ref>) involves a limit time τ. So the stopping time associated with each stochastic trajectory is a bounded random variable that obeys 0 ≤𝒯≤τ. As shorthand, from now on we will typically just write “𝒯", leaving the precise trajectory 𝐱_[0,𝒯] implicit. It is also worth remarking that the computational machine does not necessarily stop functioning at 𝒯; this variable can simply signal to us the time at which a specific computation is completed (e.g., accepting a word). We will therefore sometimes refer to 𝒯 in this context as the computation time, which is a particular instance of a (bounded) stopping time.
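Operationally, such a bounded stopping time is just a capped first-passage time and can be evaluated on sampled trajectories; in the sketch below (ours) the chain, the target set and the limit time are illustrative.

import numpy as np
rng = np.random.default_rng(3)

def stopping_time(W, x0, target, tau):
    # earliest time the sampled trajectory enters `target`, capped at the limit time tau
    x = x0
    for t in range(1, tau + 1):
        x = rng.choice(len(W), p=W[:, x])
        if x in target:
            return t
    return tau

W = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.5, 0.3],
              [0.0, 0.3, 0.7]])
print(stopping_time(W, x0=0, target={2}, tau=50))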
§.§ Martingale theory with absolute irreversibility
Inspired by <cit.>, we now introduce the stochastic distinguishability between the computational process and the auxiliary process. Stochastic distinguishability (with respect to time τ) evaluated at time t ≤τ is defined as
δ_τ(t) := lnρ_t(x_t)/ρ̅_τ - t(x_t) ,
where ρ̅_τ - t(x) is the probability distribution of the auxiliary process defined in <ref>, evaluated at the conjugate time τ - t for the state x_t.
(Recall that the auxiliary dynamics has initial distribution ρ̅_0(x) = ρ_τ(x), i.e., it is the distribution of the original dynamics at the limit time τ.)
Stochastic distinguishability is a measure of the asymmetry between the original and the auxiliary dynamics and plays a crucial role in martingale theory for stochastic thermodynamics of non-stationary processes <cit.>.
It will be useful to consider an associated process involving the stochastic distinguishability,
M_τ(t) := e^-Σ(t) - δ_τ(t)
= ρ̅_τ - t(x_t)/ρ_t(x_t)[∏_s=0^t-1r(x_s)/ρ_s(x_s)ρ_s+1(x_s+1)/r'(x_s+1)].
In the second line of Eq. (<ref>) we used the first equality in Eq. (<ref>) together with Eq. (<ref>) and the second line in Eq. (<ref>).
Notice that M_τ(τ)=e^-Σ(τ) because δ_τ(τ) = 0. In general though δ_τ(t) ≠ 0 for t<τ, and so M_τ(t) ≠ e^-Σ(t) for such t.
An important property of M_τ is that the expectation of M_τ(τ) conditioned on a fixed trajectory ending at a time 0 ≤ t ≤τ, satisfies
⟨ M_τ(τ) | 𝐱_[0,t]⟩ = M_τ (t) [ 1 - α_τ (t)]
where we introduced the quantity defined by
α_τ(t) := ∑_𝐱_[t+1, τ] ∈ℱ_AI P̅(Θ𝐱_[t, τ])/ ρ̅_τ- t(x_t) ≤1, t<τ
and α_τ(τ) := 0 (See Appendix <ref> for details.) Combining Eq. (<ref>) with the fact that α_τ(t) ≤ 1 we establish that M_τ(t)=e^-Σ(t) - δ_τ(t) is a supermartingale:
⟨ M_τ(τ) | 𝐱_[0,t]⟩≤ M_τ(t),
i.e., its conditional expectation given a fixed trajectory of length t < τ monotonically decreases over time.
Note that for t=0 one has Σ(0) = 0 and hence Eq. (<ref>) yields the IFT with absolute irreversibility [cf. Eq. (<ref>)]:
⟨e^-Σ(τ) ⟩= ⟨M_τ(τ) ⟩= ∑_x_0 ρ_0(x_0) ⟨M_τ(τ) | x_0 ⟩
= ∑_x_0 ρ_0(x_0) M_τ(0) [1 - α_τ(0)] = 1 - γ_τ,
where we have used Eq. (<ref>) in the last equality.
In addition, in the absence of absolute irreversibility, ℱ_AI is the empty set and α_τ(t) = 0 for all t∈[0,τ]. In such a case M_τ(t) in Eq. (<ref>) becomes a martingale. Therefore in that limit we would be able to use the analysis in <cit.> on the thermodynamics of systems with stochastic stopping times. However, that analysis does not directly apply for generic initial states ρ_0(x) without full support.
§.§ Integral fluctuation relations with absolute irreversibility at stopping times
Fortunately, the fact that M_τ (t) is a supermartingale rather than a martingale when our system has absolute irreversibility does not prevent us from analyzing its thermodynamics at stopping times. To carry out such an analysis, here we closely follow the derivation of Doob's optional stopping theorem for martingales, generalizing it to apply to supermartingales that are written as in Eq. (<ref>).
As elaborated in Appendix <ref>, this generalized form of the optional stopping theorem provides a fluctuation theorem at stopping times, which is valid even in the presence of absolute irreversibility:
⟨ e^- Σ(𝒯) - δ_τ(𝒯)⟩ = ⟨ M_τ(𝒯) ⟩ = 1 - Γ_τ,
where 𝒯≤τ is the (stochastic)
stopping time,
Γ_τ∈ [0, 1] is a contribution from absolute irreversibility, and therefore ⟨ e^- Σ(𝒯) - δ_τ(𝒯)⟩≤ 1. Since ⟨ . ⟩ is an average over trajectories, and different trajectories have different stopping times, ⟨ e^- Σ(𝒯) - δ_τ(𝒯)⟩ involves averaging over (stochastic) values of 𝒯. This introduces statistical coupling between the time 𝒯 and the value Σ(𝒯).
The quantity Γ_τ appearing in (<ref>) is an average of the functional e^-δ_τ(𝒯) evaluated at stopping times 𝒯 for trajectories leading to absolute irreversibility:
Γ_τ := ∑_𝒯=0^τ∑_𝐱_[0,𝒯]∈ℱ_AI^(𝒯)P̅(Θ𝐱_[0,𝒯]) ρ̅_τ-𝒯(x_𝒯)/ρ_𝒯(x_𝒯).
To understand its meaning intuitively, first note that the second summation in Γ_τ is done over trajectories 𝐱_[0,𝒯] that belong to ℱ_AI^(𝒯), that is, trajectories verifying the stopping condition for the first time at 𝒯, but that have zero probability to occur in the original process P(𝐱_[0,𝒯]) = 0, due to the restricted shape of the initial distribution ρ_0(x). We notice also the presence of the distribution ρ̅_τ - 𝒯(x), which is due to δ_τ(𝒯).
That is, Γ_τ consists of the total probability of trajectories starting at the stopped point x_𝒯 according to distribution ρ̅_τ - t(x), and not turning back to the set of states with ρ_0(x) > 0 under the auxiliary dynamics. Recall also that the reference distribution r determining the precise meaning of Σ(𝒯) appears in <ref> only implicitly, due to the definitions of P̅ and ρ̅_t.
The inequality Γ_τ≤ 1 is saturated when all trajectories are in the set ℱ_AI = ℱ, for which the sum over all trajectories in Eq. (<ref>) is obtained, that is Γ_τ = ∑_t=0^τ∑_𝐱_[0,t]∈ℱ^(t)P̅(Θ𝐱_[0,t]) ρ̅_τ - t(x_t)/ ρ_t(x_t)= 1. Moreover, we also have Γ_τ≥ 0, since it is a sum of probabilities. Whenever the initial distribution ρ_0(x) is not restricted in the state space, we obtain Γ_τ = 0, and recover the standard form of the fluctuation theorem at stopping times for non-stationary processes <cit.>.
It is worth remarking here that our previous results for fixed times [Eqs. (<ref>) and (<ref>)] can be directly obtained from Eqs. (<ref>) and (<ref>) by letting 𝒯 = τ, i.e., when all trajectories are stopped at the final time τ, as we also discuss below in more detail. Our results thus provide an extension of Martingale theory to cover different versions of mismatch costs in physical scenarios with absolute irreversibility, where martingales can be transformed into super-martingales via the correction term α_τ(t) in Eq. (<ref>), and stopping-time fluctuation relations can be derived from them.
Moreover, using the fact that M_τ(t) is a supermartingale [c.f. Eq. (<ref>)], we can also readily apply Doob's optional sampling theorem <cit.> for supermartingales to obtain (see Appendix <ref>):
⟨ e^-Σ(𝒯_2) - δ_τ(𝒯_2)⟩≤⟨ e^-Σ(𝒯_1) - δ_τ(𝒯_1)⟩ ,
where 𝒯_1 and 𝒯_2 are two stopping times, ordered such that P(𝒯_2 ≥𝒯_1)=1, but otherwise arbitrary. Taking 𝒯_1 = 𝒯 and 𝒯_2 = τ, the above Eq. (<ref>), together with the FT for stopping times [Eq. (<ref>)] and fixed times [Eq. (<ref>)], implies:
Γ_τ = 1- ⟨ e^-Σ(𝒯) - δ_τ(𝒯)⟩
≤ 1 - ⟨ e^-Σ(τ)⟩ = γ_τ,
where we have used δ_τ(τ)=0. The above inequality implies that the absolute irreversibility term at stopping times Γ_τ is always smaller than its fixed-time counterpart γ_τ, that is, absolute irreversibility always implies greater dissipation at fixed times than at stopping times.
§.§ Second-law inequalities at stopping times: universal lower and upper bounds
If we apply Jensen's inequality ⟨ e^x⟩≥ e^⟨ x⟩ to the fluctuation theorem of Eq. (<ref>) we derive a second-law inequality at stopping times:
⟨Σ(𝒯)⟩≥ - ⟨δ_τ(𝒯) ⟩ - ln [1 - Γ_τ].
This sets a strict lower bound on the average dissipation incurred by a given computation up to an arbitrary stopping time , from its time-reversal-symmetry breaking (as quantified by ⟨δ_τ(𝒯) ⟩) and the absolute irreversibility (as quantified by Γ_τ).
Moreover, Γ_τ≥ 0 implies that -ln [1 - Γ_τ]≥ 0. Therefore Eq. (<ref>) also implies the simpler bound
⟨Σ(𝒯)⟩≥ - ⟨δ_τ(𝒯) ⟩.
These inequalities suggest that ⟨Σ(𝒯)⟩ might be negative whenever ⟨δ_τ(𝒯)⟩≥ - ln [1 - Γ_τ] ≥ 0, as we discuss in detail further below.
Any increasing concave function [such as ln (x)] of a supermartingale yields another supermartingale by Jensen's inequality. Therefore the supermartingale property of M_τ(t) also implies that ln[M_τ(t)] = -Σ(t) - δ_τ(t) is a supermartingale. So Σ(t) + δ_τ(t) is a submartingale, i.e., it conditionally increases with time. If we now invoke Doob's optional sampling theorem for submartingales we get the inequality:
⟨Σ(𝒯_2) + δ_τ(𝒯_2) ⟩≥⟨Σ(𝒯_1) + δ_τ(𝒯_1)⟩ ,
where again 𝒯_1 and 𝒯_2 are two ordered stopping times with P(𝒯_2 ≥𝒯_1)=1. This inequality has several implications, the most immediate one being a second law for intervals between two ordered stopping times 𝒯_1 and 𝒯_2:
⟨ΔΣ(𝒯_1,𝒯_2)⟩≥ - [⟨δ_τ(𝒯_2) ⟩ - ⟨δ_τ(𝒯_1) ⟩],
where ⟨ΔΣ(𝒯_1,𝒯_2) ⟩ := ⟨Σ(𝒯_2) ⟩ - ⟨Σ(𝒯_1) ⟩.
This inequality provides a result applicable to both stochastic stopping and starting times, bounding the entropy production incurred for computations that both start and end at stochastic times.
As an example, inequality (<ref>) provides a bound concerning the stochastic interval between the first time that a DFA enters an accept state, and the earliest subsequent time that
it again enters an accept state, after having left the set of accept states
in between. Then the time up to 𝒯_1 can be interpreted as the time it took for the DFA to accept a first sub-string of the full input word, and the time between 𝒯_1 and 𝒯_2 as the time it took for the DFA to accept a second sub-string of the full input word, a sub-string which follows the first one. Again, the inequality in <ref> suggests that ⟨ΔΣ(𝒯_1,𝒯_2)⟩ might eventually become negative in such a case, whenever there is an increasing time-reversal asymmetry, i.e., for ⟨δ_τ(𝒯_2) ⟩ > ⟨δ_τ(𝒯_1) ⟩.
Moreover, for the choice 𝒯_1=𝒯 and 𝒯_2=τ, the inequality (<ref>) gives us the following upper bound for the intrinsic mismatch cost at stopping times
⟨Σ(𝒯)⟩≤⟨Σ (τ) ⟩ - ⟨δ_τ(𝒯) ⟩.
The inequality (<ref>) implies that whenever ⟨δ_τ(𝒯)⟩≥ 0, the intrinsic mismatch cost at stopping times will be upper bounded by its fixed-time counterpart, suggesting a drop in the thermodynamic costs of the computation at stopping times. On the other hand, by taking 𝒯_1=0 and 𝒯_2=𝒯 in Eq. (<ref>), we obtain an alternative second law at stopping times, namely:
⟨Σ(𝒯) ⟩≥ D (ρ_0 || ρ̅_τ) - ⟨δ_τ(𝒯) ⟩ ,
to be compared with Eqs. (<ref>) and (<ref>). Here we have used that Σ(0)=0 and
⟨δ_τ(0)⟩ = ∑_x_0ρ_0(x_0) lnρ_0(x_0)/ρ̅_τ(x_0)= D (ρ_0 || ρ̅_τ).
This inequality provides us an alternative lower bound on the intrinsic cost of the computation. We notice that, while we expect it to be less tight in general than Eq. (<ref>), it has the advantage of relying on the KL divergence between initial distribution ρ_0 and the final distribution in the auxiliary dynamics ρ̅_τ, which we expect to be more easily computable than Γ_τ in Eq. (<ref>).
Remarkably, combining Eqs. (<ref>) and (<ref>) we find a sandwich inequality for the intrinsic mismatch cost at stochastic times,
D (ρ_0 || ρ̅_τ) - ⟨δ_τ(𝒯)⟩≤⟨Σ(𝒯) ⟩≤⟨Σ (τ) ⟩ - ⟨δ_τ(𝒯) ⟩ ,
which provides both upper and lower bounds on ⟨Σ(𝒯) ⟩.
The stopping time fluctuation relation in Eq. (<ref>) and the inequalities (<ref>)-(<ref>) for the intrinsic thermodynamic costs in computational processes with stochastic stopping times provide our main results. In the following we further discuss their interpretation and some of their implications, while in Section <ref> we investigate their applications to CS setups with some illustrative examples.
§.§ Thermodynamic interpretation and implications
The second-law inequality (<ref>), ⟨Σ(𝒯)⟩≥ - ⟨δ_τ(𝒯) ⟩ [as well the stronger versions (<ref>) and (<ref>)], suggests that both the intrinsic mismatch cost and the underlying entropy production incurred in a given computation may be negative on average when evaluated at stopping times. To understand how this is possible in light of the data-processing inequality we write ⟨Σ(𝒯)⟩ explicitly as the functional (<ref>) averaged over many trajectories that are stopped each at a stochastic time 𝒯:
⟨Σ (𝒯) ⟩ = ∑_𝒯=0^τ
p(𝒯) ∑_t=0^𝒯-1[∑_x_tρ_t(x_t | 𝒯) lnρ_t(x_t)/μ(x_t)
- ∑_x_t+1ρ_t+1(x_t+1 | 𝒯)lnρ_t+1(x_t+1)/μ'(x_t+1)].
Here, p(𝒯) denotes the probability that the stopping time takes value 𝒯.
Similarly, ρ_t(x | 𝒯) denotes the conditional probability that the process takes the value x at time t given that the stopping condition is met at time 𝒯. Because ρ_t(x | 𝒯)≠ρ_t(x) in general, the terms ∑_x_tρ_t(x_t | 𝒯) ln[ρ_t(x_t)/μ(x_t)] and ∑_x_t+1ρ_t+1(x_t+1 | 𝒯) ln[ρ_t+1(x_t+1)/μ '(x_t+1)] are not KL divergences in general, and thus not necessarily greater than or equal to zero (see also Ch. 8.3 in Ref. <cit.>). This implies that ⟨Σ (𝒯) ⟩ can in principle be negative. The second law at stopping times (<ref>) permits ⟨Σ (𝒯) ⟩≤ 0 whenever ⟨δ_τ (𝒯) ⟩≥ 0, yet it is not clear when this is actually the case.
The explicit expression for the stochastic distinguishability at stopping times reads
⟨δ_τ(𝒯) ⟩=∑_𝒯=0^τ∑_ x_𝒯p(𝒯) ρ_𝒯(x_𝒯 | 𝒯) lnρ_𝒯(x_𝒯)/ρ̅_τ - 𝒯(x_𝒯) .
Equation (<ref>) also reveals that ⟨δ_τ(𝒯) ⟩ is not a KL divergence in general, and thus can in principle take any sign, yet so far only examples where ⟨δ_τ(𝒯) ⟩≥ 0 have been reported in the literature. We remark that ⟨δ_τ(𝒯) ⟩ is not a KL divergence unless 𝒯=τ, for which the process “stops" at the deterministic limit time τ, and one has that the joint stopping-time probability distribution
p(𝒯) ρ_t(x_t | 𝒯) = 0 if 𝒯 < τ, and ρ_τ(x_τ) if 𝒯 = τ,
i.e. it takes the value, at time τ, of the solution of the Master equation. Plugging in Eq. (<ref>) in Eq. (<ref>) one gets ⟨δ_τ(𝒯=τ) ⟩= D (ρ_τ || ρ̅_0)=0 because ρ̅_0=ρ_τ. Analogously for 𝒯=τ, intrinsic mismatch cost ⟨Σ (𝒯=τ) ⟩ takes the expression (<ref>) thus retrieving non-negativity, ⟨Σ (τ) ⟩≥ 0. Note that other examples of negative entropy production at stopping times based on threshold criteria for work were first reported in Ref. <cit.> and for free energy more recently <cit.>. Such gambling demon <cit.> effect is allowed whenever ⟨δ(𝒯)⟩ >0, which is not guaranteed for arbitrary stopping conditions but possible for wise stopping strategies as shown experimentally in Refs. <cit.>.
We can obtain further insight on this effect by decomposing the intrinsic mismatch cost at fixed times τ into two terms, one associated to the interval [0,𝒯] up to the stopping time and one to [𝒯, τ] from the stopping time to the limit time τ, that is:
⟨Σ(τ) ⟩ = ⟨Σ(𝒯) ⟩ + ⟨ΔΣ(τ, 𝒯) ⟩≥ 0,
which follows from the fact that 𝒯 is a single-valued function of the trajectory. Since ⟨Σ(τ) ⟩≥ 0, the above decomposition implies that, whenever ⟨Σ(𝒯) ⟩ <0, such a negative value must be compensated by an incremented mismatch cost ⟨ΔΣ(τ, 𝒯) ⟩≥⟨Σ(τ) ⟩ incurred in the interval [𝒯, τ], if no external action is taken on the system at time 𝒯 to physically stop the dynamics. These considerations remain valid also in cases where the stopping condition is structurally imposed through the dynamical evolution of the computational process, e.g. using absorbing accept states to “stop" the computation, as is the case in some models of DFAs.
The role of absolute irreversibility as captured in the stronger inequality (<ref>) with - ln[1- Γ_τ] ≥ 0 makes more difficult the observation of negative average intrinsic mismatch cost, since it would require a higher time-reversal asymmetry in the dynamical evolution leading to large distinguishabilities ⟨δ(𝒯)⟩ > - ln[1- Γ_τ] [and similarly for inequality (<ref>)]. Remarkably, however, the examples explored in Sec. <ref> show how still dissipation can be reduced at stopping times thanks to a positive time-reversal asymmetry ⟨δ(𝒯)⟩ >0, in agreement with Eq. (<ref>) above. This reduction might be linked to the information needed to execute the stopping condition 𝒯, similarly to what happens in feedback control scenarios <cit.>. However a general relation between these two quantities remains unknown.
The second-law inequality at stopping times (<ref>) can be further rewritten using Eq. (<ref>) in a form reminiscent of Landauer's principle:
- ⟨Δϕ(𝒯) ⟩≥ - ⟨Δ S_sys(𝒯) ⟩ - ⟨δ_τ(𝒯) ⟩,
where the l.h.s. accounts for the excess entropy flow dissipated into the environment as a consequence of a drop in Shannon entropy of the computational states, -⟨Δ S_sys(𝒯) ⟩. Again, whenever ⟨δ_τ(𝒯) ⟩ > 0, the above inequality suggests that the entropy flow to the environment may be eventually reduced. Here it is also worth noticing that even in the case in which trajectories are stopped when returned to the initial state (as in the DFA example in <ref>), the average system entropy change at stopping times, namely ⟨Δ S_sys(𝒯) ⟩ = ⟨ S_sys(𝒯) ⟩ - S(ρ_0), with
⟨ S_sys(𝒯) ⟩ = - ∑_𝒯 p(𝒯) ∑_x_𝒯ρ_𝒯(x_𝒯| 𝒯) lnρ_𝒯(x_𝒯),
is non-zero even when x_𝒯 = x_0 for all 𝒯 since in general the distribution ρ_𝒯(x) ≠ρ_0(x), as corresponds to a relaxation process.
The second-law inequalities derived above can be applied not only to assess stochastic stopping times of a computation, but also stochastic starting times; see Eqs. (<ref>) and (<ref>). This extension allows us to apply our theory to computations that may “stop”
at multiple consecutive times 𝒯_1 < 𝒯_2 < ... < 𝒯_n (see Sec. <ref> for a particular example in a DFA) or to concatenations of simpler computations, each starting at a stochastic time after the previous one is accomplished. We will further elaborate on the application of starting times to the computation of concatenated words with stochastic resetting in Sec. <ref>.
§ APPLICATION TO DETERMINISTIC FINITE AUTOMATA
In this section we analyze minimal yet insightful examples of computations executed by deterministic finite automata (DFA). A computational task for a DFA starts with it receiving a sequence of exogenously generated symbols, an input string or an input word, ω. As the DFA iteratively processes the symbols of the input string, it makes associated transitions among its possible states. Here we first assume that the sequence of symbols fed to the DFA is produced in an independent and identically distributed (i.i.d.) manner, so that the time evolution over the DFA states while processing those strings can be modeled using a DTMC. Then we will move to the case of input symbols that are not produced in an i.i.d. manner, but from a Markovian source. In the following examples, we consider two minimal DFA models that process binary strings. In the first example, involving i.i.d. symbol sources, the DFA under consideration accepts strings which encode binary numbers divisible by four, e.g. 0 (zero), 100 (four), 1100 (twelve), etc. In the second example, involving non-i.i.d. sources, we use a DFA that accepts strings which encode binary numbers divisible by three.
The state of the DFA when a stopping condition is reached (e.g., whether the DFA enters a designated accept state) defines a computation that the DFA performs on that string. However this computation can be followed by further processing of input symbols up to a limit time τ (e.g. the DFA may exit the accept state in forthcoming iterations). In this sense our results for stopping times can be applied to various situations, for example: (i) computations generated by input words of fixed length τ where we ask about the value of thermodynamic quantities when visiting the accept state for the first (or the n-th) time; and (ii) computations that may actually end when visiting the accept state for some reason (e.g. the accept state is an absorbing state of the DFA or there exists an external mechanism that activates when the accept state is reached to stop the dynamics). In particular, we can always modify a given DFA by removing all edges of the associated directed graph that leave an accepting state. This turns the accept state into an absorbing state (or set of states, if there is more than one accepting state).
§.§ Processing symbols from i.i.d. sources
As mentioned above, consider the DFA from <ref>, initialized to state q_0 with certainty, and that its computation starts by processing a stream of binary letters generated as an i.i.d. sequence of 0s and 1s, with p_0≤ 1 the probability to observe a 0 and p_1=1-p_0 the probability to observe a 1. Under this assumption, the time evolution of the DFA's states follows a DTMC over four computational states q_0,q_1,q_2 and q_3, with transition probabilities as indicated in Fig. <ref> (a). All together, the Markov chain associated with the DFA's dynamics is characterized by its initial state
ρ_0=[1 0 0 0]^†
with † denoting here matrix transposition, and the transition matrix
W= [ p_0 0 p_0 0; p_1 0 p_1 0; 0 p_0 0 p_0; 0 p_1 0 p_1 ].
It follows that for t=1 we have
ρ_1 = 𝐖ρ_0=[ p_0 p_1 0 0 ]^†,
whereas for larger times t≥ 2,
ρ_t = 𝐖^tρ_0
=[ p_0^2 p_0p_1 p_0p_1 p_1^2 ]^†≡π,
i.e., the dynamics already reaches the stationary state at the second iteration.
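These two statements are easily checked numerically; in the sketch below (ours) the value p_0 = 0.7 is an arbitrary choice.

import numpy as np

p0 = 0.7; p1 = 1 - p0
W = np.array([[p0, 0, p0, 0],
              [p1, 0, p1, 0],
              [0, p0, 0, p0],
              [0, p1, 0, p1]])
pi = np.array([p0**2, p0*p1, p0*p1, p1**2])

rho = np.array([1.0, 0.0, 0.0, 0.0])
for t in range(1, 5):
    rho = W @ rho
    print(t, np.round(rho, 4), np.allclose(rho, pi))   # False at t = 1, True from t = 2 on
print(np.allclose(W @ pi, pi))                          # pi is indeed stationary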
For computing the auxiliary dynamics for this DFA's DTMC, we would need to identify the reference distribution r(x) appearing in Eq. (<ref>) as the prior μ(x) minimizing the mismatch cost sum in Eq. (<ref>) or ν(x) leading to its minimum in Eq. (<ref>). For simplicity, here we assume r(x) = π(x), the stationary state of the DFA dynamics. This is a reasonable assumption as long as the induced DTMC is aperiodic, irreducible, and π has full support over the computational states. As discussed before, since Σ becomes non-extensive in time in this case, there are reasons to expect minimal dissipation in the steady state (see also Refs. <cit.>).
The auxiliary dynamics starts in ρ̅_0=ρ_τ, which can take two possible values depending on the value of the final maximum time of the computation τ: If τ=1 we have ρ̅_0 = ρ_1, whereas for τ≥ 2 we have ρ̅_0 = π.
Following Eq. (<ref>) for the transition probability, with r = π, the
stationary distribution given in (<ref>), we obtain the transition matrix associated with the auxiliary dynamics:
𝐖̅= [ p_0 p_0 0 0; 0 0 p_0 p_0; p_1 p_1 0 0; 0 0 p_1 p_1 ],
as illustrated in Fig. <ref> (b). It then follows that for computations ending at τ = 1 we have ρ̅_1 = 𝐖̅ρ̅_0 = 𝐖̅ρ_1 =[ p_0 0 p_1 0 ]^†. On the other hand, since by construction, the auxiliary dynamics Eq. (<ref>) will always preserve the steady state for r = r'=π, it follows that in the case τ≥ 2, the auxiliary dynamics is stationary at all times t, that is ρ̅_t ^(τ) =𝐖̅^t ρ̅_0 = 𝐖̅^t π = π, with π given by Eq. (<ref>).
The intrinsic mismatch cost in Eq. (<ref>), evaluated over a trajectory 𝐱_[0,τ], reduces in this case to the (discrete-time) stochastic non-adiabatic EP:
Σ(𝐱_[0, τ]) = ∑_t=0^τ -1[ lnρ_t(x_t)/π(x_t) - lnρ_t+1(x_t+1)/π(x_t+1)] ,
= lnρ_0(x_0)/π(x_0) - lnρ_τ(x_τ)/π(x_τ)
with x_t ∈𝒳 = {q_0,q_1,q_2,q_3 } for all t, which only depends on the initial and final states.
Having obtained the system probability distribution at all times for the original and auxiliary dynamics, we are now ready to compute thermodynamic quantities at stopping times. In particular we consider the family of stopping times
𝒯 = min (𝒯_1,τ)
with τ fixing a time horizon and 𝒯_1≥ 1 the first time the DFA returns to the accept state q_0, hence accepting a word as a multiple of four (including “0"). From numerical simulations, we obtained sample histograms for the stopping time 𝒯 given by Eq. (<ref>) for three different choices of the limit time τ, see Fig. <ref>. There we observe the first peak at 𝒯=1 in the three plots, corresponding to the cases where the first incoming symbol is "0" and the word is then accepted. In order to allow longer accepted words we need τ > 2, such that 𝒯_1 = 3 (accepting four “100") or 𝒯_1 = 4 (accepting twelve “1100"), etc. Notice however that with the stopping condition given in Eq. (<ref>) we do not capture the acceptance of some of the multiples of four, like, e.g., eight (“1000”), since the stopping condition would already be verified at the previous symbol of the string, “100”, corresponding to four. The same happens for any other accepted number to which an arbitrary number of zeros are attached at the end. For assessing the acceptance of such numbers, extra stopping conditions such as 𝒯_n, i.e., the n-th time the DFA returns to the accept state q_0, are needed (see example in Sec. <ref>).
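The histograms of Fig. <ref> can be reproduced, up to sampling noise, by direct simulation of the DFA fed with i.i.d. bits; the sketch below (ours, with illustrative values of p_0, τ and the sample size) records the stopping time 𝒯 = min(𝒯_1, τ) of each run.

import numpy as np
rng = np.random.default_rng(2)

p0, tau, n = 0.5, 8, 100_000
# logical transitions of the divisible-by-four DFA, consistent with the matrix W above
delta = {0: {0: 0, 1: 1}, 1: {0: 2, 1: 3},
         2: {0: 0, 1: 1}, 3: {0: 2, 1: 3}}     # states q_0..q_3, accept state q_0

counts = np.zeros(tau + 1, dtype=int)
for _ in range(n):
    x, T = 0, tau
    for t in range(1, tau + 1):
        bit = 0 if rng.random() < p0 else 1
        x = delta[x][bit]
        if x == 0:                              # first return to the accept state
            T = t
            break
    counts[T] += 1
for T in range(1, tau + 1):
    print(T, counts[T] / n)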
For all trajectories in which 𝒯=𝒯_1< τ, i.e., the word is accepted before the limit time τ is reached, we have x_𝒯_1=q_0, the accept state, and thus
Σ(𝒯_1) = -lnρ_𝒯_1(q_0) = -ln p_0 if 𝒯_1 = 1, and -2ln p_0 if 𝒯_1 ≥ 2,
and the stochastic distinguishability in Eq. (<ref>) is:
δ_τ(𝒯_1) = lnρ_𝒯_1(q_0)/ρ̅_τ - 𝒯_1(q_0) = -ln p_0 if 𝒯_1 = 1, and 0 if 𝒯_1 ≥ 2,
where we used the fact that (by construction) τ > 𝒯_1 ≥ 1 and hence ρ̅_τ - 𝒯_1 = π.
If however 𝒯_1 ≥τ, the dynamics stops at the maximum time 𝒯=τ, independently of the state x_τ, and we obtain:
Σ(τ) = - lnρ_τ(x_τ)π(q_0)/π(x_τ) = -ln p_0 if τ = 1, and -2ln p_0 if τ≥ 2.
Note that the case τ≥ 2 is independent of x_τ because the system has already reached its stationary state, and thus Σ(τ) = -lnπ(q_0)=-2ln p_0 for all x_τ≠ q_0. On the other hand, the stochastic distinguishability verifies δ_τ(τ) = 0 since ρ̅_0 = ρ_τ always.
Using the above calculations we obtain the average intrinsic mismatch cost at the stopping time (<ref>) for all τ≥ 2 as:
⟨Σ(𝒯) ⟩ = (p_0-2)ln p_0 ≥ 0,
which follows from
⟨Σ(𝒯) ⟩ = -P(𝒯_1=1)ln p_0 - ∑_t=2^τ P(𝒯_1=t) 2ln p_0
- P(𝒯_1> τ) 2ln p_0 = - p_0 ln p_0 - (1 - p_0)2 ln p_0
= (p_0-2)ln p_0,
where we have used P(𝒯_1=1) = p_0 and thus P(𝒯_1 > 1) = 1 -p_0. In addition, using Eq. (<ref>), we obtain the average stochastic distinguishability at the stopping time (<ref>) for all τ≥ 2:
⟨δ_τ(𝒯)⟩ = -p_0ln p_0≥ 0,
where again we have used P(𝒯_1=1) = p_0.
Notice that the above expressions remain also valid in the limit of large input word lengths, τ→∞.
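These closed-form averages can be cross-checked by Monte Carlo, evaluating Σ(𝒯) and δ_τ(𝒯) trajectory by trajectory from the propagated distributions; in the sketch below (ours) the values of p_0, τ and the sample size are illustrative.

import numpy as np
rng = np.random.default_rng(1)

p0, tau, n = 0.6, 6, 100_000
p1 = 1 - p0
delta = {0: {0: 0, 1: 1}, 1: {0: 2, 1: 3},
         2: {0: 0, 1: 1}, 3: {0: 2, 1: 3}}     # divisible-by-four DFA, accept state q_0

W = np.array([[p0, 0, p0, 0],
              [p1, 0, p1, 0],
              [0, p0, 0, p0],
              [0, p1, 0, p1]])
pi = np.array([p0**2, p0*p1, p0*p1, p1**2])
rhos = [np.array([1.0, 0.0, 0.0, 0.0])]
for _ in range(tau):
    rhos.append(W @ rhos[-1])

S_sum, d_sum = 0.0, 0.0
for _ in range(n):
    x, T = 0, tau
    for t in range(1, tau + 1):
        bit = 0 if rng.random() < p0 else 1
        x = delta[x][bit]
        if x == 0:
            T = t
            break
    # Sigma(T) = ln[rho_0(q_0)/pi(q_0)] - ln[rho_T(x_T)/pi(x_T)]
    S_sum += -np.log(pi[0]) - np.log(rhos[T][x] / pi[x])
    # delta_tau(T) = ln[rho_T(x_T)/pi(x_T)] for T < tau, and 0 at T = tau
    d_sum += np.log(rhos[T][x] / pi[x]) if T < tau else 0.0

print(S_sum / n, (p0 - 2) * np.log(p0))    # both close to 0.715 for p0 = 0.6
print(d_sum / n, -p0 * np.log(p0))         # both close to 0.306 for p0 = 0.6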
To tackle the contribution from absolute irreversibility at stopping times, it is convenient to first identify which trajectories contribute to Γ_τ in Eq. (<ref>). These are trajectories that are stopped at 𝒯≤τ (either with or without reaching q_0) and have zero probability to occur in the original dynamics. Note that the original dynamics is a Markov chain with initial state ρ_0 (x) = δ_x,q_0. The set of absolutely irreversible trajectories at stopping times consists of two sets: (i) trajectories that do not start in q_0 and reach q_0 with 𝒯≤τ in the original dynamics, (ii) trajectories of length τ that do not start at q_0 and do not reach q_0 in the original dynamics.
Let us now flesh out the list of such trajectories 𝐱_[0,𝒯] classified by the value of 𝒯 for the special case τ = 2:
* q_2 q_0 reaches the accept state at 𝒯=1 yet it has zero probability to occur in the original dynamics with ρ_0(q_2)=0.
* q_1 q_2 q_0 and q_3 q_2 q_0 reach the accept state at 𝒯=2 yet they have zero probability to occur in the original dynamics since ρ_0(q_1) = ρ_0(q_3) =0.
* q_1 q_2 q_1, q_1 q_3 q_2, q_1 q_3 q_3, q_2 q_1 q_2, q_2 q_1 q_3, q_3 q_2 q_1, q_3 q_3 q_2, and q_3 q_3 q_3
are stopped at 𝒯=2 without reaching the accept state. They have zero probability in the original dynamics because their initial state is different from q_0.
All the sequences listed above are such that they would halt the computation at the stopping time 𝒯=min(𝒯_1,2); they have non-zero probability in the auxiliary dynamics but zero probability in the original dynamics.
In order to calculate the absolute irreversibility correction term Γ_τ in Eq. (<ref>) we thus need the probability of the above trajectories to occur in time-reversed order in the auxiliary dynamics. More precisely, one needs to compute P̅(Θ𝐱_[0,𝒯]) multiplied by ρ̅_τ - 𝒯(x_𝒯)/ρ_𝒯(x_𝒯) = π(x_𝒯)/ρ_𝒯(x_𝒯), which in this case is equivalent to modifying their initial condition to π(x_𝒯), i.e., to computing the following path probabilities:
P̅(q_0,q_2|q_0) π(q_0) = p_0^2 p_1
P̅(q_0,q_2,q_1|q_0) π(q_0) = p_0^2 p_1 p_0
P̅(q_0,q_2,q_3|q_0) π(q_0) = p_0^2 p_1 p_1
P̅(q_1,q_2,q_1|q_1) π(q_1) = p_0 p_1 p_1 p_0
P̅(q_1,q_2,q_3|q_1) π(q_1) = p_0 p_1 p_1 p_1
P̅(q_2,q_1,q_2|q_2) π(q_2) = p_0 p_1 p_0 p_1
P̅(q_2,q_3,q_3|q_2) π(q_2) = p_0 p_1 p_1 p_1
P̅(q_2,q_3,q_1|q_2) π(q_2) = p_0 p_1 p_1 p_0
P̅(q_3,q_3,q_3|q_3) π(q_3) = p_1^2 p_1 p_1
P̅(q_3,q_3,q_1|q_3) π(q_3) = p_1^2 p_1 p_0
P̅(q_3,q_1,q_2|q_3) π(q_3) = p_1^2 p_0 p_1.
Summing up all the contributions in Eq. (<ref>) leads us to the absolute irreversibility contribution [cf. Eq. (<ref>) for the general formula]:
Γ_2 = p_0 p_1 ( p_0 + p_0^2 + 4 p_1 p_0+ 4p_1^2 ) + p_1^4
= 1-p_0^2.
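The algebra behind this result can be verified symbolically by summing the eleven path weights listed above; a brief sketch (ours) using sympy:

import sympy as sp

p0 = sp.symbols("p0", positive=True)
p1 = 1 - p0
weights = [p0**2*p1,                                   # q2 q0
           p0**2*p1*p0, p0**2*p1*p1,                   # q1 q2 q0 and q3 q2 q0
           p0*p1*p1*p0, p0*p1*p1*p1, p0*p1*p0*p1,      # trajectories stopped at tau = 2
           p0*p1*p1*p1, p0*p1*p1*p0,
           p1**2*p1*p1, p1**2*p1*p0, p1**2*p0*p1]
print(sp.expand(sum(weights)))                         # equals 1 - p0**2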
Combining all the terms above, we observe that for the stopping time 𝒯 = min (𝒯_1,2):
⟨Σ(𝒯) ⟩ = -⟨δ_2(𝒯)⟩ - ln [1-Γ_2].
In other words, the second law at stopping times given by Eq. (<ref>) is saturated over the stopping time given by Eq. (<ref>) for τ = 2, as it is illustrated in Fig. <ref> for different values of the probability of incoming zeros, p_0. As can be appreciated in that figure, the positive sign of the term ⟨δ_2(𝒯)⟩ >0 implies that the intrinsic mismatch cost at stopping times ⟨Σ(𝒯)⟩ = -2ln p_0 + p_0ln p_0, see Eq. (<ref>), is smaller than its value at fixed times
⟨Σ (τ ) ⟩ = -2 ln p_0 ≥⟨Σ (𝒯 ) ⟩,
in spite of the presence of the absolute irreversibility contribution with Γ_2. For τ > 2 we have Γ_τ≤Γ_2, which follows by combining the equality in Eq. (<ref>) with the generic bound in Eq. (<ref>). In any case, the inequality ⟨Σ (τ ) ⟩≥⟨Σ (𝒯 ) ⟩ holds for any limit time τ for this example.
When p_0 approaches 1 (words with a high number of zeros) the dynamics cannot escape from the initial state q_0 and the steady state π becomes equal to the initial distribution ρ_0. In this limit, the DTMC dynamics becomes fully stationary and hence the intrinsic mismatch cost becomes zero for every trajectory, the time-reversal asymmetry is lost, and the absolute irreversibility is no longer present, leading to a drop in the three quantities on the RHS of Fig. <ref>. As we move away from that limit, the mismatch cost increases (both at stopping and fixed times), signaling the energetic costs incurred by the computational task, which grow as p_0 decreases. This can be justified by the fact that the dynamics on the DTMC spreads more easily over all computational states as p_1 increases (see Fig. <ref> a), leading to a greater distinction between initial and steady-state distributions. In this case we also observe non-zero stochastic distinguishability and an increasingly large absolute irreversibility term. In the limit p_0 → 0 (words with a high number of ones) accepting a word becomes almost impossible, and hence the stopping occurs most probably at the maximum time 𝒯≃τ, leading again to zero stochastic distinguishability. We notice that in this limit π tends to localize at state q_3 and hence it would lead to ⟨Σ(τ) ⟩→∞, which is not physically meaningful. The catch point is that in this limit the fixed point π would not be equal to the prior μ or ν anymore.
§.§ Uniform prior
We now implement the analysis of Sec. <ref> in a different setting, where the strings are generated i.i.d. and we consider the same four-state DFA, now with a uniform prior distribution over its states,
r = [ 1/4 1/4 1/4 1/4 ]^† .
The evolution of r under 𝐖 after one iteration yields
r' = 𝐖 r = [ p_0/2 p_1/2 p_0/2 p_1/2 ]^† .
Because r changes after one iteration, we write Σ as in Eq. (<ref>) for τ>0
Σ(𝐱_[0, τ])
= lnρ_0(x_0)/ρ_τ(x_τ) + ∑_t=0^τ -1lnr'(x_t+1)/r(x_t),
where the first term is the system entropy change Δ S_sys(𝐱_[0,τ]) and the second one is minus the nonequilibrium potential Δϕ(𝐱_[0,τ]) in Eq. (<ref>). Unlike for the stationary prior, this term is now extensive in time [cf. Eq. (<ref>)]. Note that in this case Σ is no longer equal to the non-adiabatic EP associated with the stochastic trajectory 𝐱_[0, τ].
The uniform distribution is not invariant under the map 𝐖, hence the intrinsic mismatch cost Σ associated with a stochastic trajectory 𝐱_[0, τ] is extensive with time. This implies that, unlike for the case of stationary prior (see Sec. <ref>), the averages of Σ at fixed times τ as well as at stopping times with limit time τ [of the form of Eq. (<ref>)], will crucially depend on τ. This is also the case for any other choice for the prior distribution which differs from the stationary distribution.
In Fig. <ref> we show the intrinsic mismatch cost ⟨Σ(𝒯)⟩ at the stopping time 𝒯 = min (𝒯_1,τ) for the DFA with the uniform prior for two different values of τ, and compare it with the case of the stationary prior, Eq. (<ref>). We observe that the uniform prior leads to higher values for the intrinsic mismatch cost for high values of p_0, while for low p_0 values the tendency can be inverted. However, when increasing τ sufficiently we always obtain a lower cost for the stationary prior, as expected from its non-extensivity. Indeed we observe a tendency for the mismatch cost at stopping times ⟨Σ(𝒯)⟩ to saturate when increasing the limit time τ, in contrast with the linear scaling of ⟨Σ(τ)⟩ with τ. In Appendix <ref> we confirm this point by studying in more detail the scaling behaviour of these two quantities as a function of τ.
We test the sandwich inequality in Eq. (<ref>) comprising the upper and lower bounds to ⟨Σ(𝒯)⟩ in Eqs. (<ref>) and (<ref>), respectively. As can be appreciated in Fig. <ref>, both inequalities provide useful bounds that become tighter for small τ, and are simultaneously saturated at the point p_0=1/2. This example also reveals that again there is a reduction of intrinsic costs at stopping times with respect to fixed times, that is ⟨Σ(τ)⟩≥⟨Σ(𝒯)⟩ holds over the entire parameter range of probability of symbol 0, p_0, and the limit time τ, as shown in the inset of Fig. <ref>. This reduction is guaranteed by a positive value of the stochastic distinguishability ⟨δ(𝒯) ⟩ >0 in the range p_0 ≥ 1/2 [c.f. Eq. (<ref>)] but, interestingly, it is also verified even for ⟨δ(𝒯) ⟩ <0 as it happens for p_0 ≤ 1/2.
§.§ Beyond i.i.d. sources
So far we have analyzed the statistics of a DFA processing inputs generated by a source of i.i.d. bits, which induces a Markovian dynamics for the time-evolution of the computational states. This is, however, one of the simplest possible computational processes, as e.g., regular languages recognized by DFAs are often composed of correlated words. To illustrate the applicability of our theory to computing thermodynamic costs of DFAs processing arbitrary strings from arbitrary languages, it is mandatory to consider DFAs processing non-i.i.d. sequences.
In processing a generic non i.i.d. sequence, the dynamics over the computational states of a DFA is in general a non-Markovian process. However, one can extend the computational state space such that our formalism can be applied. For the analysis in this section it is important to remark the distinction between the “computational states” of the DTMC computational state space 𝒳, and the states of the DFA. In particular, we refer to states of the DFA (as in the usual TCS definition) as logical states of the DFA, and recall that with “computational states” we refer to the sets of variables which describe the entire state-space for a computational process of interest, as introduced in Sec. <ref>.
Now consider that the (process generating the) input string itself is a DTMC characterized by time-independent transition probabilities p(b_i+1|b_i) for the (i+1)’th bit to be equal to b_i+1={0,1} given that the i’th symbol (bit) of the string is b_i={0,1}. In this case, the logical state of, e.g., a three-state DFA z_t={q_0,q_1,q_2} processing this input string is not a DTMC, although by constructing the computational state space as the Cartesian product of z_t={q_0,q_1,q_2} and b_t={0,1}, we encode the current computational state x_t = {z_t, b_t} as the logical state of the DFA z_t and the most recent input symbol fed to the DFA b_t. In this case, one is left with a DTMC with six possible computational states, for which our formalism can be readily applied to tackle the thermodynamic properties.
As an example, we consider the minimal DFA that accepts binary multiples of 3, shown in Fig. <ref> (a), leading to the DTMC represented in Fig. <ref> (b). The probabilities to obtain input bits 0 or 1 given the last input symbol are fixed and denoted by p(i|j):=p_ij for i,j={ 0, 1}. They satisfy p_00 + p_10 =1, and p_01 + p_11 =1. Ordering the computational states distribution as ρ_t = [ρ_t(q_0, 0), ρ_t(q_1, 0),ρ_t(q_2, 0), ρ_t(q_0, 1), ρ_t(q_1, 1), ρ_t(q_2, 1)], we obtain a 6 × 6 transition matrix given by:
W= [ p_00 0 0 p_01 0 0; 0 0 p_00 0 0 p_01; 0 p_00 0 0 p_01 0; 0 p_10 0 0 p_11 0; p_10 0 0 p_11 0 0; 0 0 p_10 0 0 p_11 ].
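Numerically, this chain is easy to set up; the short NumPy sketch below builds 𝐖 for given p_00, p_01 (the helper name and parameter values are ours) and checks that it is column-stochastic, consistent with the ordering of computational states used above.

import numpy as np

def build_W(p00, p01):
    # 6x6 column-stochastic matrix over (q0,0), (q1,0), (q2,0), (q0,1), (q1,1), (q2,1)
    p10, p11 = 1.0 - p00, 1.0 - p01
    W = np.array([
        [p00, 0.,  0.,  p01, 0.,  0. ],
        [0.,  0.,  p00, 0.,  0.,  p01],
        [0.,  p00, 0.,  0.,  p01, 0. ],
        [0.,  p10, 0.,  0.,  p11, 0. ],
        [p10, 0.,  0.,  p11, 0.,  0. ],
        [0.,  0.,  p10, 0.,  0.,  p11],
    ])
    assert np.allclose(W.sum(axis=0), 1.0)   # each column is a probability vector
    return W

W = build_W(p00=0.7, p01=0.4)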
We choose as the initial condition the probability distribution:
ρ_0=[1/2 0 0 1/2 0 0]^†,
with initial equal probabilities over the DTMC states corresponding to q_0 as the DFA start state and zero otherwise.
We assume that the underlying entropy production is minimized for the uniform prior
r = [1/6 1/6 1/6 1/6 1/6 1/6]^†,
which is transformed, after one iteration, into r^' = 𝐖 r:
r^' = [a/6 a/6 a/6 b/6 b/6 b/6]^†,
with a:=p_00 + p_01 and b:=p_10 + p_11.
Using the above definitions we can compute Σ(τ) in Eq. (<ref>), at arbitrary fixed times. Moreover, in order to evaluate thermodynamic quantities at stopping times, we embrace again the family of stopping times 𝒯 = min (𝒯_1,τ) with τ the fixed time horizon and 𝒯_1≥ 1 the first time the DFA returns to the accept state q_0 for either b={0, 1}.
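For instance, a Monte Carlo estimate of ⟨Σ(𝒯)⟩ with 𝒯 = min(𝒯_1, τ) can be obtained by sampling trajectories and evaluating Σ along each stopped trajectory; the sketch below reuses build_W from the previous sketch and assumes that Σ(𝒯) is the fixed-time expression truncated at 𝒯, with the marginal ρ_t obtained by iterating 𝐖.

import numpy as np

def sample_sigma_at_stopping(p00, p01, tau, n_samples=20000, seed=0):
    rng = np.random.default_rng(seed)
    W = build_W(p00, p01)                        # from the sketch above
    rho0 = np.array([0.5, 0., 0., 0.5, 0., 0.])  # initial condition in the text
    r = np.full(6, 1/6)                          # uniform prior
    rp = W @ r                                   # r' = W r
    rhos = [rho0]                                # fixed-time marginals rho_t = W^t rho_0
    for _ in range(tau):
        rhos.append(W @ rhos[-1])
    accept = {0, 3}                              # states (q0,0) and (q0,1)
    total = 0.0
    for _ in range(n_samples):
        x = rng.choice(6, p=rho0)
        sigma = np.log(rho0[x])                  # ln rho_0(x_0)
        t = 0
        while t < tau:
            x_next = rng.choice(6, p=W[:, x])
            sigma += np.log(rp[x_next]) - np.log(r[x])
            x, t = x_next, t + 1
            if x in accept:                      # first return to the accept state: T_1
                break
        total += sigma - np.log(rhos[t][x])      # minus ln rho_T(x_T)
    return total / n_samples

print(sample_sigma_at_stopping(0.7, 0.4, tau=12))   # estimate of <Sigma(T)>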
We show numerical results in Fig. <ref>, where ⟨Σ(𝒯)⟩, together with the corresponding upper and lower bounds given by Eqs. (<ref>) are plotted as a function of the probability p_01=1 - p_11 to obtain symbol 0 after a symbol 1, for different values of p_00= 1 - p_10. Again we obtain relevant bounds on the intrinsic mismatch cost at stopping times, which, interestingly, become tightest when p_00=1-p_01, i.e. when p_00 = p_11 and p_10=p_01.
This corresponds to the situation in which the input sequence is a Markovian process with homogeneous stationary probabilities, p_0^ st = p_1^ st=1/2. The fact that our bounds become tight for homogeneous input sequences was also observed for the i.i.d. example (see Fig. <ref>) and makes us conjecture that this phenomenon may be generic to correlated input sequence, maybe also non-Markovian.
As also commented for the previous examples, however, using a stopping time of the form 𝒯 = min (𝒯_1,τ) allows us to describe computation times for the DFA to reach the accept state for the first time. That corresponds to the acceptance of only some of the multiples of three, e.g. “0" (zero), “11" (three), “1001" (nine), but not other multiples like “110" (six) or any other word that already contains an acceptable prefix. In order to explore thermodynamic costs associated to these words we now consider more general stopping times 𝒯 = min(𝒯_n, τ), where 𝒯_n is the n-th time the DFA returns to the accept state. Therefore 𝒯_2 is related with the acceptance of words like “110" (six) or “10010" (eighteen), while 𝒯_3 corresponds to accept words like “1100" (twelve), among many others.
In Fig. <ref> we plot ⟨Σ(𝒯)⟩ with 𝒯 = min{𝒯_n, τ} as a function of the return time to the accept state, n = 1, 2, 3, 4 ,5. We notice that different behaviors are obtained depending on the choice of input symbols probabilities, p_00 and p_01, leading to either increasing values of the intrinsic mismatch cost or a non-monotonic behavior. Interestingly considering different stopping times allows us to test inequality (<ref>) for two stopping times, which is shown in the inset of Fig. <ref> for 𝒯_1 = min(𝒯_n-1 , τ) and 𝒯_2 = min(𝒯_n, τ) as a function of n>1.
In particular we observe that the mismatch cost between consecutive returning times to the accept state can be eventually negative for specific choices of parameters (probabilities p_00 and p_01), that is, ⟨ΔΣ(𝒯_n, 𝒯_n-1) ⟩ < 0 for n= 3, 4, 5, owing to a reduction in the associated stochastic distinguishability and despite having ⟨Σ(τ) ⟩≥ 0 at fixed times.
§ UNIVERSAL EQUALITIES AND INEQUALITIES FOR ACCEPTANCE PROBABILITIES
Our formalism can be further applied to address other issues in computer science theory, beyond automata literature, and besides second laws and fluctuation theorems at stopping times. Both in this Sec. <ref> and in Sec. <ref> we develop further theoretical predictions for key statistical properties of interest for computer science that may inspire numerical and experimental illustrations of future work.
An example that we develop in this section is using our formalism to establish universal equalities and inequalities concerning the probabilities of acceptance or rejection of sets of distinct bit sequences when a given DFA is implemented.
In what follows we focus on a specific choice of such sets, namely, 1) the set of all strings or trajectories that end in an accept state before the limit time τ vs. 2) the set of all trajectories that do not end in an accept state before the limit time τ. However, we emphasize that this formalism can be generalized to arbitrary pairs of sets of trajectories, by specifying suitable filtrations as done in martingale approaches.
Thus we will explore a class of simple examples of “acceptance" statistics for binary words of length τ≥ 2 that are processed by a computer. We will use the notation accept to signify that a computer reaches a prescribed accept state before the limit time τ, and reject otherwise. The probabilities P_ a(τ) denotes the probability for the computer to have reached the accept state within [0,τ], and P_ r(τ)=1-P_ a(τ) the probability for the complementary. Recall that for simple computer architectures (e.g., DFAs processing i.i.d. binary strings), P_ a(τ) and P_ r(τ) can often be evaluated analytically or with Monte Carlo simulations. The approach we reveal below is complementary to Monte Carlo approaches in such simple computations, however we highlight its usefulness in revealing how such accept/reject statistics are related to thermodynamic quantities. Note that here those thermodynamic quantities can be used as a tool of calculation, determined completely by the computer update function and the distribution over input words. In particular, they need not correspond to any “real” thermodynamic quantities that one would measure in the laboratory. That is, our formalism provides a way to derive the relative probabilities of accepting or rejecting a string while sidestepping the conventional technical difficulties found in the traditional approaches to this issue <cit.>. On the other hand, one can also interpret the results presented below as a way for obtaining information about the intrinsic thermodynamic costs of computations by looking at the acceptance probabilities (of languages solved by machines), which might be calculated by other means, such as Monte Carlo approaches.
So we consider again a stopping time 𝒯= min(𝒯_1,τ), which signifies the first time that the computer reaches the accept state, 𝒯_1, or the limit τ in the case that the accept state is not visited before τ. So 𝒯 < τ if a word of length τ-1 is accepted by the computer. Otherwise, 𝒯= τ, if the word is not accepted before τ.
The probabilities that a word of length τ-1 is accepted or not are then given by
P_ a(τ) = P(𝒯< τ),
P_ r(τ) = 1-P(𝒯< τ) = P(𝒯 = τ),
respectively. We now make use of our formalism to derive bounds for P_a(τ) and P_r(τ) in terms of thermodynamic quantities.
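For concreteness, P_a(τ) itself is straightforward to estimate by direct simulation for any of the chains considered above; the generic sketch below takes the transition matrix, initial distribution and accept set as inputs chosen by the caller.

import numpy as np

def acceptance_probability(W, rho0, accept_states, tau, n_samples=50000, seed=0):
    # Monte Carlo estimate of P_a(tau) = P(T < tau), T = min(T_1, tau),
    # with T_1 the first time the accept set is reached (T_1 >= 1)
    rng = np.random.default_rng(seed)
    accept = set(accept_states)
    n = W.shape[0]
    hits = 0
    for _ in range(n_samples):
        x = rng.choice(n, p=rho0)
        for _ in range(1, tau):          # stopping before tau can only happen at t = 1, ..., tau-1
            x = rng.choice(n, p=W[:, x])
            if x in accept:
                hits += 1
                break
    return hits / n_samples

# example: the six-state chain of the previous section
# acceptance_probability(build_W(0.7, 0.4), np.array([0.5, 0, 0, 0.5, 0, 0]), {0, 3}, tau=10)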
Using our fluctuation theorem at stopping times with absolute irreversibility, Eq. (<ref>), ⟨ M_τ(𝒯) ⟩ = 1- Γ_τ, we expand its l.h.s. into terms corresponding to accepted and rejected words as:
P_ a(τ) ⟨ M_τ(𝒯) | 𝒯 < τ⟩ + P_ r(τ) ⟨ M_τ(𝒯) | 𝒯 = τ⟩,
with ⟨ A(𝒯)| c(𝒯) ⟩ =𝔼(A(𝒯) | c(𝒯)) being the conditional average of functional A over trajectories x_[0,𝒯] given that the condition c(𝒯) is fulfilled over the stopping time 𝒯. Upon using P_ r(τ) = 1- P_ a(τ), the decomposition (<ref>) gives us the following relation between the acceptance probability and the averages of the supermartingale M_τ(𝒯) at stopping times:
P_ a(τ) = [1 - Γ_τ - ⟨ M_τ(𝒯) | 𝒯 = τ⟩] / [⟨ M_τ(𝒯) | 𝒯 < τ⟩-⟨ M_τ(𝒯) | 𝒯 = τ⟩].
Equality (<ref>) generalizes analytical expressions obtained in previous works for absorption probabilities <cit.> by including the absolute irreversibility contribution Γ_τ. As can be appreciated in Eq. (<ref>), since Γ_τ≥ 0, the role of absolute irreversibility is to decrease the acceptance probability P_ a(τ) of a word of length τ -1 by the DFA. This can be intuitively understood from the fact that starting computation from a restricted set of initial states can only decrease the velocity at which the computational state space is explored, and hence the probability to reach a generic stopping condition before time τ.
Since P_ a(τ) is a well-defined probability (i.e. 0 ≤ P_ a(τ) ≤ 1), we further obtain from Eq. (<ref>) that one of the two following chain inequalities holds:
⟨ M_τ(𝒯) | 𝒯 < τ⟩≥ 1 - Γ_τ≥⟨ M_τ(𝒯) | 𝒯 = τ⟩,
⟨ M_τ(𝒯) | 𝒯 = τ⟩≥⟨ M_τ(𝒯) | 𝒯 < τ⟩≥ 1 - Γ_τ≥ 0,
which provide us constraints on the values of M_τ(𝒯) for generic 𝒯 of the form 𝒯= min(𝒯_1,τ).
Analogously, we can exploit the second-law-inequality at stopping times (<ref>), namely ⟨Σ(𝒯) ⟩≥ - ⟨δ_τ(𝒯) ⟩ + D (ρ_0 || ρ̅_τ), to derive universal bounds for the finite-time acceptance probability. Indeed, average of the left-hand-side of this equation at the stopping time 𝒯= min(𝒯_1,τ) can also be decomposed into two terms, accounting, respectively, for accepted and rejected words of maximum length τ-1:
⟨Σ(𝒯) + δ_τ(𝒯) ⟩ = P_ a(τ) C_ a(τ) + P_ r(τ) C_ r(τ),
where we have introduced the conditional averages
C_a(τ) =⟨ Σ(𝒯) + δ_τ(𝒯) | 𝒯<τ⟩,
C_r(τ) =⟨Σ(𝒯) | 𝒯= τ⟩.
Note that in Eq. (<ref>) we have used the fact that δ_τ(τ)=0. We refer to these two conditional averages as the average thermodynamic costs associated with the acceptance and rejection of words of length τ-1, respectively.
Combining Eqs. (<ref>) and Eq. (<ref>) we obtain the two following lower and upper bounds for the acceptance probability
P_ a (τ)≥ [D (ρ_0 || ρ̅_τ) - C_ r(τ)] / [C_ a(τ)-C_ r(τ)],
valid whenever C_ a(τ) > C_ r(τ), and similarly
P_ a (τ)≤ [C_ r(τ)- D (ρ_0 || ρ̅_τ)] / [C_ r(τ)-C_ a (τ)],
valid in the complementary case when C_ r(τ) > C_ a(τ). These bounds express a constraint on the acceptance probability of a word with maximum length τ-1 in terms of the average costs associated with the accepted and rejected words as defined in Eqs. (<ref>) and (<ref>), and the KL divergence between the initial distribution of the computational state and the final distribution of the computational state under the auxiliary dynamics [see Eq. (<ref>)].
Equation (<ref>) provides a meaningful bound whenever its r.h.s is non-negative and smaller than one, i.e. when C_a(τ)≥ D (ρ_0 || ρ̅_τ) ≥ C_r(τ). On the other hand, the bound (<ref>) is meaningful when C_a(τ)≤ D (ρ_0 || ρ̅_τ) ≤ C_r(τ). We expect the first condition to be satisfied if the probability of accepted words is large enough so that the associated cost C_ a(τ) is larger than the cost of rejected words C_ r(τ). So we expect the bound (<ref>) to be helpful for parameter values of the DFA and distribution over input words in which the acceptance rate is high. On the contrary when the probability of rejected words is large enough, we expect C_ r(τ) to be larger than C_ a(τ), and the bound (<ref>) to be useful when the acceptance rate is low.
The above relations in Eq. (<ref>) and Eqs. (<ref>) and (<ref>) concerning the acceptance probability of a word can also be applied to any finite-horizon stopping time of the form 𝒯= min(𝒯_c,τ), where 𝒯_ c represents the time at which a given arbitrary condition c is verified for the first time, e.g., the first time the accept state is reached twice, or the first time the accept state is reached after passing through any other arbitrary state (or sequence of them). Thus there is ample flexibility in choosing the stopping condition c, including the logical composition of any other set of conditions, e.g., c = c_1 ∪ c_2 giving the first time either condition c_1 or condition c_2 are verified, or c = c_1 ∩ c_2 for the first time both c_1 and c_2 are simultaneously verified.
§ CONCATENATING RUNS OF A DFA WITH STOCHASTIC RESETTING
In this section we further elaborate on how our results would be applied to sequences of computations separated by a reset of the dynamics which implements concatenated computational rounds. This is an interesting avenue where our results might be fruitfully combined in the future with the powerful analytical tools from the framework of stochastic resetting <cit.>. Let us consider a random sequence of symbols fed into a computer,
0 0 0 ⊔ 0 1 0 1 1 1 ⊔ 0 1 0 …,
where ⊔ is a blank symbol that flags the beginning of a new computation. For the example sequence (<ref>), a computation starts at the random starting time 𝒯_ start=5 and ends at the stochastic ending time 𝒯_ end=10 just before the next blank symbol arrives, thus generating the input word “010111". During this computation, the computer begins computing at 𝒯_ start=4 from its start state, and ends the computation either in an accept state or in another logical state.
Now, stochastic starting times can be reformulated as stochastic stopping times [see also our results concerning multiple stopping times, Eq. (<ref>)].
In particular here the starting time 𝒯_ start is the first appearance of a blank symbol ⊔. Whenever the probability of a blank symbol p_⊔>0 is greater than zero, then it is guaranteed that P(𝒯_ end <∞)=1, i.e., there is a limit time τ that is a finite global upper limit to 𝒯_ end. This is the setting which would correspond to, e.g., stochastic starting times that are drawn from distributions with bounded support, say from Bernoulli or Binomial distributions. Under such mild assumptions, it is then possible to establish thermodynamic constraints for computations starting at stochastic times.
Supposing p_⊔>0, we outline how the stopping-time fluctuation relations derived in our work can be applied to a computation of the example sequence (<ref>). First, we let the computer process the sequence “000", which implies visiting the accept state at least once.
At time t=3, the computer may or may not be in the start state depending on its update rules. Next at t=4, the state of the computer is reset to its start state from whichever state x_3 it occupies at the previous time instance. Upon this, the computer processes the string 010111 before the arrival of the next blank symbol, during which the logical state may or may not have reached the accept state. This leaves us with an ordered sequence of stopping times: 𝒯_0=0, 𝒯_1 = min(𝒯_ accept^(1),𝒯_ blank^(1)), 𝒯_2 = 𝒯_ blank^(1), 𝒯_3 = min(𝒯_ accept^(2),𝒯_ blank^(2)), 𝒯_4 = 𝒯_ blank^(2), ..., 𝒯_∞ = τ,
which obey
𝒯_0≤𝒯_1≤𝒯_2≤𝒯_3≤𝒯_4≤…≤𝒯_∞.
Here above we have denoted by 𝒯_ accept^(i) the first return time to the accept state during the computation of the i-th word. Similarly, 𝒯_ blank^(i) is the stochastic arrival time of the i-th blank symbol. While the stochastic times 𝒯_ accept^(i) have the same structure as the stopping times considered throughout our work, the times 𝒯_ blank^(i+1) can be seen as stochastic starting times, which are also examples of stopping times for which our formalism applies.
Figure <ref> provides an illustration of a DFA processing an i.i.d. sequence of bits interspersed by blank symbols. The DFA processing the symbols recognizes binary words multiples of four, as in the examples of Sec. <ref>.
Assuming time-independent probabilities p_0, p_1 and p_⊔ for the occurrence of 0, 1, and blank ⊔ symbols respectively (with p_0+p_1+p_⊔ =1), the DTMC associated with this computation can be represented by a discrete-time stochastic resetting process (see Fig. <ref>). In such processes, resetting takes place from each logical state to the start state q_0 at a stochastic starting time. This requires a suitable description of computation whose transition matrices include resetting events. For the DFA example considered here, such transition matrix takes the form
W= [ p_0+p_⊔ p_⊔ p_0+p_⊔ p_⊔; p_1 0 p_1 0; 0 p_0 0 p_0; 0 p_1 0 p_1 ],
cf. Eq. (<ref>) for the case where no resetting takes place, corresponding to p_⊔=0. The DTMC described by the transition matrix (<ref>) allows one to study multiple realistic computational scenarios where Σ at stochastic starting and stopping times can be efficiently tackled. For example, one may consider that the processing of the input string by the DFA as a nonequilibrium stationary process with resetting and apply results from the martingale theory for stationary processes (see Ch. 7 in Ref. <cit.>). Alternatively, one can apply the formalism in this work to establish bounds for the intrinsic mismatch costs of the computation between the first and the n-th arrival of a blank symbol, etc [see Eq. (<ref>)], similarly to Sec. <ref>c.
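As an illustration, the concatenated-run dynamics is simple to simulate once the DFA transition function is read off the no-reset transition matrix; the sketch below (symbol probabilities and run length are illustrative choices) records the stochastic starting times 𝒯_ blank^(i) along a single run.

import numpy as np

def simulate_runs(p0, p1, tau, seed=0):
    # multiples-of-four DFA driven by an i.i.d. stream of 0/1/blank symbols;
    # a blank resets the logical state to the start state q_0
    p_blank = 1.0 - p0 - p1
    assert p_blank > 0
    rng = np.random.default_rng(seed)
    # deterministic transition function delta(q, bit), read off the p_blank = 0 matrix
    delta = {(0, 0): 0, (0, 1): 1, (1, 0): 2, (1, 1): 3,
             (2, 0): 0, (2, 1): 1, (3, 0): 2, (3, 1): 3}
    x, path, blank_times = 0, [0], []
    for t in range(1, tau + 1):
        s = int(rng.choice(3, p=[p0, p1, p_blank]))   # next symbol: 0, 1 or blank
        if s == 2:
            x = 0                                # reset to q_0
            blank_times.append(t)                # a stochastic starting time
        else:
            x = delta[(x, s)]
        path.append(x)
    return np.array(path), blank_times

path, starts = simulate_runs(p0=0.45, p1=0.45, tau=50)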
§ DISCUSSION
In this work we have shown how to extend stochastic thermodynamics to describe the minimal costs associated with a computer processing with a stochastic halting time, processing strings of arbitrary length. Our formalism applies to computations described by discrete-time Markov chains over a set of computational states that may have restricted initial conditions, unidirectional links, and start and/or stop at a stochastic time. We obtain quantifiers, which are collectively dubbed as the intrinsic mismatch cost of a computation, that lower-bound the entropy production incurred by the computer and that can be formulated at the fluctuating level. A key insight here is that these quantifiers, which provide a tool to probe the entropy production associated with computations at stopping times, can be entirely obtained from the DTMC evolution and the prior, without further details about their physical implementation. Notice that such an intrinsic cost is independent of the internal energy of the computational states, x_t ∈𝒳, which can be indeed assumed to be equal for every computational state and constant over time. Still, non-zero entropy production through the irreversible dissipation of heat into the environment will be in general incurred for any physical computer which implements a given computation over such set of states.
Putting forward the modern martingale formalism of stochastic thermodynamics, we also unveiled a plethora of universal fluctuation relations and inequalities that are valid for the broad class of computations analyzed in this work. We obtained a main fluctuation theorem, Eq. (<ref>), valid for settings which include arbitrary stopping times, unidirectional transitions, and absolute irreversibility. In doing that, we have extended the martingale theory for stochastic thermodynamics to account for this additional source of irreversibility in generic situations, which we expect to have broad applicability in nonequilibrium thermodynamics.
The rigor and flexibility of our theory for stopping times allowed us to formulate and interpret several second-law-like inequalities [Eqs. (<ref>)–(<ref>)] at stochastic stopping times, as well as relations for the probabilities of acceptance/rejection of input data by a computer in terms of thermodynamic quantities [equality (<ref>) and inequalities (<ref>)-(<ref>)]. In particular, the second law inequalities (<ref>) and (<ref>) provide us useful lower bounds on the minimum dissipation incurred by a generic computation stopping at an arbitrary stopping time, while Eq. (<ref>) establishes formally how stopping times can be used to reduce the thermodynamic costs of a computation by means of time-reversal-symmetry breaking. Moreover, we have also shown the relevance of accounting for absolutely irreversible sequences in providing accurate bounds for the intrinsic mismatch cost of the computation. In this sense, the bound we derived in Eq. (<ref>) with the absolute irreversibility term Γ_τ is tighter, with respect to the alternative bound in Eq. (<ref>). However, computing Γ_τ might be challenging depending on the setting considered, especially for large limit times τ. On the contrary, the alternative bound in Eq. (<ref>) would be much easier to compute (as it only depends on two probability distributions), while still providing a meaningful bound in all examples explored here.
The framework developed in this paper can be readily applied for assessing thermodynamic costs of computations in a broad range of models of relevance in CS theory, including –but not being limited to– deterministic finite automata. Our results apply to every computation implemented by a synchronous digital computer and remains valid independently of how the computational variables are defined. In particular they can include already processed input symbols (as in non-iid DFAs), stacks (as in pushdown automata models), or even entire words written on a random access tape (as in Turing machines). Hence our results provide a tool to classify abstract computational machines by their intrinsic (unavoidable) thermodynamic costs. Applying our framework to more complex models of computational machines such as pushdown automata or Turing machines halting at stochastic times is a natural step following the investigation initiated here.
It would also be very interesting in the future to extend the framework developed here by combining analytical tools from stochastic resetting (e.g. renewal theory and first-passage-time ideas <cit.>) with computer science methods. This will allow to obtain tight bounds for the statistics of starting time and entropy production bounds in specific models of DFAs and TMs processing regular languages, as follows from the ideas sketched in Sec. <ref>.
Also, we note that even if current digital devices are very close to periodic, they are not exactly so. In other words, they are some first-order perturbation away from being periodic, which suggests other avenues for future work. In general, the prior is a function of the physical process implementing the computation. For example, it is a function of the time-dependent rate matrix in the case of a CTMC. Given this, we might be able to use the envelope theorem (often used in game theory) to calculate how much the prior can change under first order perturbations away from an exactly periodic process. That in turn might allow us to modify <ref> to involve some infinitesimal first-order perturbation parameter ϵ characterizing how much the process differs from being exactly periodic.
However, the results developed in this paper also provide new insights in the field of nonequilibrium thermodynamics. An important consequence of our work is the finding that the auxiliary dynamics introduced in Eq. (<ref>) is suitable to treat processes at stopping times that may have unidirectional transitions and absolute irreversibility, hence making our framework applicable to generic situations where local detailed balance is broken. Similar auxiliary dynamics has been invoked in the literature such as so-called "dual", “dual-reversed" or "adjoint" dynamics, in the context of fluctuation theorems, see e.g. <cit.>. In particular, as shown above, if the process admits a well-behaved stationary solution, we can obtain from Σ the so-called non-adiabatic (or excess) entropy production <cit.>. Within such scenario, our work is another brick in the wall of recent progress highlighting the role of non-adiabatic entropy and excess heat in characterizing the efficiency <cit.> and calorimetry <cit.> of active nonequilibrium systems.
We also expect our results to have potential applications outside statistical mechanics and computer science, e.g. in biological physics, for instance within the field of biomolecular computation <cit.>, enzyme kinetics <cit.> and information processing in biology <cit.>. As a minimal model, consider a minimal Michaelis-Menten scheme for enzyme kinetics in which an enzyme E transforms a substrate molecule S into a product molecule P. A typical assumption is that the conversion of the substrate into a product takes place through an irreversible chemical reaction
E+S[k_1^-]k_1^+ ES k_2^+ E+P,
where k_i's here are suitable transition rates. Within this model, the enzyme's state during enzymatic cycles follows a continuous-time Markov jump process with one irreversible transition. In previous works, the presence of the irreversible transition has been circumvented by considering virtual processes with a very slow transition rate. However our formalism can be readily applied to describe the stochastic thermodynamics of such enzymatic reaction, inasmuch as it does not require the presence of bidirectional transitions. Similarly we expect our formalism to be suitable to describe fluctuations of biological populations in processes that include totally irreversible transitions such as cell-fate decisions <cit.>, cell death and apoptosis <cit.>, among others. For such systems, our approach puts forward recent approaches to estimate dissipation developed within the field of active matter <cit.> which did not contemplate the presence of unidirectional transitions which are commonplace in biophysical modelling <cit.>.
We thank Artemy Kolchinsky for useful comments on the first version of the manuscript. GM acknowledges funding from Spanish MICINN through the 'Ramón y Cajal' program (RYC2021-031121-I) and support from the María de Maeztu project CEX2021-001164-M funded by the MCIN/AEI/10.13039/501100011033. ER acknowledges fruitful discussions with Léa Bresque and financial support from PNRR MUR project PE0000023-NQSTI. DHW acknowledges US NSF Grant CHE-1648973. DHW also thanks the Santa Fe Institute for support.
§ APPENDIX
§ TRAJECTORY LEVEL MISMATCH COST
Define [G_t ϱ](y) as the distribution ϱ evolved through the (linear) dynamics G up to time t, and then evaluated at state y ∈𝒴. Using this, we can define a trajectory-level version of mismatch cost (and associated instance-level version), as
m_ϱ_0(Y_[0,τ]) := ln[ϱ_0(y_0)/ϱ_min(y_0)] - ln[[G_τϱ_0](y_τ)/[G_τϱ_min](y_τ)]
= ∑_t=0^τ-1ln([G_t ϱ_0](y_t)/[G_t ϱ_min](y_t)) - ln([G_t+1 ϱ_0](y_t+1)/[G_t+1ϱ_min](y_t+1)).
By construction the expected value of m_ρ_0(Y_[0,τ]) over all trajectories Y_[0,τ] equals the (ensemble level) mismatch cost, ℳ(ρ_0) given in <ref>. In
the context of inclusive thermodynamics, such definitions allowed the derivation of fluctuation theorems for mismatch cost <cit.>.
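This ensemble-level identity is easy to check numerically on a small chain; in the sketch below the chain G, the initial distribution and the stand-in for ϱ_min are arbitrary illustrative choices.

import numpy as np

def kl(p, q):
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

G = np.array([[0.6, 0.2, 0.1],        # arbitrary 3-state column-stochastic chain
              [0.3, 0.5, 0.2],
              [0.1, 0.3, 0.7]])
rho0 = np.array([0.7, 0.2, 0.1])
rmin = np.array([0.2, 0.3, 0.5])      # stands in for the prior
tau = 4

Gtau = np.linalg.matrix_power(G, tau)
ensemble = kl(rho0, rmin) - kl(Gtau @ rho0, Gtau @ rmin)   # ensemble-level mismatch cost

rng = np.random.default_rng(1)
est, n = 0.0, 50000
for _ in range(n):
    y = rng.choice(3, p=rho0)
    p_t, q_t = rho0.copy(), rmin.copy()
    m_traj = 0.0
    for _ in range(tau):
        y_next = rng.choice(3, p=G[:, y])
        p_next, q_next = G @ p_t, G @ q_t
        m_traj += np.log(p_t[y] / q_t[y]) - np.log(p_next[y_next] / q_next[y_next])
        y, p_t, q_t = y_next, p_next, q_next
    est += m_traj
print(ensemble, est / n)   # the two numbers agree up to sampling error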
§ STRICT POSITIVITY OF THE MISMATCH COST SUM GIVEN BY EQ. (<REF>).
Here we provide a proof of the strict positivity of the mismatch cost sum in Eq. (<ref>), that is ∑_t=0^τ-1ℳ(ρ_t) > 0.
We know that, in general, D( ρ_0 || μ) - D(G ρ_0 || G μ) = 0 if and only if ρ_0 = μ. However, if in fact G is not logically invertible, and yet ρ_0 = μ, then G ρ_0 ≠μ. This means that either D( ρ_0 || μ) - D(G ρ_0 || G μ) = 0 or D(G ρ_0 || μ) - D(G^2 ρ_0 || G μ) = 0 — but not both. Therefore, so long as G is not logically invertible, the sum in (<ref>) is not zero.
§ RESIDUAL COST
Suppose we are given a physical process which implements a single-valued function f over a space X. The islands of f are defined as the elements of the partition of X given by the pre-images of f. Formally, if we write the image of f as f(X), the islands of f are the sets {f^-1(x): x ∈ f(X)}. We write the set of islands of a function f as L(f).
In this appendix we will expand the residual cost ℛ(ϱ_0) arising in <ref> in terms of islands, discuss some properties of the residual cost, and then justify why this cost can be ignored in our investigation in this paper. For simplicity we will restrict attention to the case where the linear term F in <ref> is expected entropy flow generated in a process, so that 𝒞(τ) is the EP generated in the process up to time τ. However, all of the discussion extends to other choices of F, with the obvious modifications.
<ref> provides entropy production as a sum of two terms, the mismatch cost and the residual cost, where ℛ(τ) corresponds to the residual cost.
For any conditional distribution G, and any island c ∈ L(G), we define the prior within that island as <cit.>:
ϱ_min^c ∈arg min_ϱ: supp(ϱ) ∈Δ_c𝒞(τ)
where the subscript means that ϱ is a distribution whose support is restricted to the unit simplex over the island c, Δ_c, viewed as a subset of the state space 𝒴.
The associated minimum EP in that island is written as
𝒞_min^c(τ):=min _ϱ: supp(ϱ) ∈Δ_c𝒞(τ).
Now introduce an arbitrary distribution over islands, q(c), and define
ϱ_min:=∑_c ∈ L(G) q(c) ϱ_min^c.
Then the residual EP of the physical process that implements G is
ℛ(τ) = ∑_c ∈L(G) p_τ(c) 𝒞_min^c(τ)
as shown in <cit.>. Therefore the EP for the process starting with distribution ϱ_0 is
𝒞(τ)=D(ϱ_0 ϱ_min)-D(G ϱ_0 G ϱ_min) + ∑_c ∈ L(G) p_τ(c) 𝒞_min^c.
Since expected EP 𝒞(τ) is non-negative, by definition the residual cost of any island
c, 𝒞_min^c, is non-negative. So the total residual cost given by an expectation over
all islands, which is the residual cost for the entire thermodynamic process characterized by G, is also non-negative. Like priors, residual costs of islands in general will differ from one cost function 𝒞 to the next.
In general, as the iteration t of a periodic process changes, the distribution p_t(c) over the islands c will change. Therefore so will the associated total expected residual cost. However, since that total residual cost is always non-negative, all the lower bounds on EP in the main text that consider only mismatch cost apply. This is true even if the residual costs of the islands are strictly positive. In particular, <ref> will still be a lower bound on the EP generated in the process.
On the other hand, if we write down the formula for minimal total residual cost which is analogous to <ref>, minimizing over the residual costs of each island, we just get zero,
by taking those costs to all equal zero. So unless we fix the physical details underlying the process, and therefore fix the residual costs of the islands to be strictly positive, our analysis of lower bounds isn't changed by the existence of residual costs. This is why such quantities are ignored in the main text.
As a final comment, note that in general both the prior and residual costs will vary with τ. However, the same mismatch cost formula bounds dissipation for any such choice of τ, once one plugs in the appropriate prior. This need not be true for residual cost, in the sense that the islands might change for different choices of τ.
§ PROOF OF THE SUPERMARTINGALE PROPERTY (<REF>)
Here we provide a detailed proof of the supermartingale property of the process M_τ(t) defined by Eq. (<ref>):
⟨ M_τ(τ) | X_[0,t]⟩≡∑_X_(t,τ]∈ℱ_AC P(X_[0,τ] | X_[0,t]) M_τ(τ)
= ∑_X_[t+1,τ]∈ℱ_ACP̅(Θ X_[0,τ])/P(X_[0,t])
= e^-Σ(t)∑_X_[t+1,τ]∈ℱ_ACP̅ (Θ X_[0,τ])/P̅(Θ X_[0,t])
= e^-Σ(t) - δ_τ(t)∑_X_[t+1,τ]∈ℱ_ACP̅(Θ X_[t, τ])/ρ̅_τ - t(x_t)
= M_τ(t) [1 - ∑_X_[t+1, τ]∈ℱ_AIP̅( Θ X_[t, τ])/ρ̅_τ - t(x_t)]
= M_τ (t) [ 1 - α_τ(t)] ≤ M_τ (t) .
To obtain Eq. (<ref>) we first used the definition of conditional probability in <ref> to derive <ref>. We then multiplied and divided by P̅(Θ X_[0,t]), in order to obtain
the e^-Σ(t) multiplicative factor. After that we expanded the remaining probabilities P̅(Θ X_[0,τ]) in the numerator and P̅(Θ X_[0,t]) in the denominator, and multiplied the resulting expression with ρ_t(x_t)/ρ̅_τ - t(x_t) to obtain the expression
involving M_τ(t) (recall the definition of M_τ(t) in Eq. (<ref>)). Finally, we transformed the sum to involve
ℱ_AI, the complementary filtration of ℱ_AC, and invoked the fact that ∑_X_[t,τ]∈ℱρ_τ(x_τ) P̅(x_τ-1 | x_τ) ... P̅(x_t | x_t+1) = ρ̅_τ - t(x_t) when the sum is taken over the complete set of trajectories ℱ.
§ STOPPING-TIMES FLUCTUATION THEOREM WITH ABSOLUTE IRREVERSIBILITY IN EQ. (<REF>)
In this appendix we give a detailed proof of the main stopping-times fluctuation theorem with absolute irreversibility presented in Eq. (<ref>). For the proof, it is convenient to split the filtration ℱ into subsets of filtrations ℱ^(t) containing all trajectories that are stopped at time t, i.e. for which M_τ(t) = M_τ(𝒯). We take t= 0, 1 ,..., τ, where τ is the maximum allowed time. Notice that we enforce the dynamics to stop at τ if not previously done (however one can later take τ→∞ whenever M_τ(τ) remains bounded). Therefore we have ℱ = ℱ^(1)∪ ... ∪ℱ^(τ). Now, in analogy to the previous section, we define for each (discrete) instant of time t, the sets ℱ_AC^(t) and ℱ_AI^(t) of trajectories that are both stopped at t and which are either allowed [P(X_[0,t]) >0 or P(X_[0,t]) = P(Θ X_[0,t]) = 0] or only forbidden in the original dynamics [P(X_[0,t]) =0 with P̅(Θ X_[0,t])>0], respectively. This implies that ℱ^(t) = ℱ_AC^(t)∪ℱ_AI^(t) at any t = 0, 1 ... τ. Then we have:
⟨ M_τ(𝒯) ⟩≡∑_t=0^τ ∑_X_[0,t]∈ℱ_AC^(t) P(X_[0,t]) M_τ(t)
= ∑_t=0^τ ∑_X_[0,t]∈ℱ_AC^(t) P(X_[0,t]) [⟨ M_τ(τ) | X_[0,t]⟩ + M_τ(t) α_τ(t) ]
= ⟨ M_τ(τ) ⟩ + ∑_t=0^τ ∑_X_[0,t]∈ℱ_AC^(t) P(X_[0,t]) M_τ(t) α_τ(t)
= 1 - γ_τ + ∑_t=0^τ ∑_X_[0,t]∈ℱ_AC^(t) P(X_[0,t]) M_τ(t) α_τ(t)
=1 - ∑_t=0^τ∑_X_[0,t]∈ℱ_AI^(t)P̅(Θ X_[0,t]) = 1 - Γ_τ.
where we used the fluctuation theorem at fixed times with absolute irreversibility, c.f. Eq. (<ref>), and in the last line we defined the correction term for stopping times:
Γ_τ≡∑_t=0^τ∑_X_[0,t]∈ℱ_AI^(t)P̅(Θ X_[0,t]) ρ̅_τ-t(X_t)/ρ_t(X_t)≥ 0
summing up the probabilities for all the (reversed) stopped AI trajectories.
To reach Eq. (<ref>) from Eq. (<ref>) we used :
∑_t=0^τ ∑_X_[0,t]∈ℱ_AC^(t) P(X_[0,t]) M_τ(t) α_τ(t)
=∑_t=0^τ∑_X_[0,t]∈ℱ_AC^(t)P̅(Θ X_[0,t])/ρ_t(X_t)∑_X_(t, τ]∈ℱ_AIP̅(Θ X_[t,τ])
=∑_t=0^τ∑_X_[0,t]∈ℱ_AC^(t)P̅(Θ X_[0,t])/ρ_t(X_t)
[ ρ̅_τ-t(X_t) - ∑_X_(t,τ]∈ℱ_ACP̅(Θ X_(t, τ])]
= ∑_t=0^τ∑_X_[0,t]∈ℱ_AC^(t)P̅(Θ X_[0,t]) ρ̅_τ-t(X_t)/ρ_t(X_t) - ∑_X_[0,τ]∈ℱ_ACP̅(Θ X_[0,τ])
= ∑_t=0^τ∑_X_[0,t]∈ℱ_AC^(t)P̅(Θ X_[0,t])ρ̅_τ-t(X_t)/ρ_t(X_t) - 1 + γ_τ
= - ∑_t=0^τ∑_X_[0,t]∈ℱ_AI^(t)P̅(Θ X_[0,t])ρ̅_τ-t(X_t)/ρ_t(X_t) + γ_τ
= -Γ_τ + γ_τ,
where in the last line we used that ℱ_AC^(t)∪ℱ_AI^(t) = ℱ^(t) and ∑_t=0^τ∑_X_[0,t]∈ℱ^(t)P̅(Θ X_[0,t]) ρ̅_τ-t(X_t)/ρ_t(X_t) = 1, since the trajectories are stopped with certainty in the interval [0,τ] and P̅(Θ X_[0,t]) ρ̅_τ-t(X_t)/ρ_t(X_t) is a normalized path probability.
§ ORDERED STOPPING-TIMES FLUCTUATION RELATION IN EQ. (<REF>)
In this appendix we derive the stopping-times fluctuation relation for two ordered stopping times in Eq. (<ref>) via Doob's optional sampling theorem. Let us start by considering a finite but otherwise generic stopping time 𝒯 of the form in Eq. (<ref>), such that 0 ≤𝒯≤τ. Doob's optional sampling theorem for supermartingales reads <cit.> (see also Ref. <cit.>):
⟨ A(𝒯) | 𝐱_[0,t]⟩≤ A(min{𝒯, t}),
where A is a supermartingale over the trajectories 𝐱_[0,τ].
In the following we will apply Eq. (<ref>) to the supermartingale process M_τ(t) = e^-Σ(t) - δ_τ(t) introduced in Eq. (<ref>), for two ordered stopping times 𝒯_1 and 𝒯_2 such that P(𝒯_2 ≥𝒯_1) = 1. In this case we have:
⟨ M_τ(𝒯_2) | 𝐱_[0,𝒯_1]⟩≤ M_τ(𝒯_1),
where we called 𝒯_2 = 𝒯 and 𝒯_1 = min{𝒯, t}. Notice that above 𝐱_[0,t] can be replaced by 𝐱_[0,𝒯_1] since 𝒯_1 ≤ t by construction and Eq. (<ref>) is valid for any generic t ≤τ. The average over stopping times of M_τ(𝒯_1) then verifies:
⟨ M_τ(𝒯_1) ⟩ = ∑_𝒯_1 = 0^τ∑_𝐱_[0,𝒯_1] P(𝐱_[0,𝒯_1], 𝒯_1) M_τ(𝒯_1)
= ∑_𝒯_1 = 0^𝒯_2∑_𝐱_[0,𝒯_1] P(𝐱_[0,𝒯_1], 𝒯_1) M_τ(𝒯_1)
+ ∑_𝒯_1 = 𝒯_2^τ∑_𝐱_[0,𝒯_1] P(𝐱_[0,𝒯_1], 𝒯_1) M_τ(𝒯_1)
≥∑_𝒯_1 = 0^𝒯_2∑_𝐱_[0,𝒯_1] P(𝐱_[0,𝒯_1], 𝒯_1) ⟨ M_τ(𝒯_2) | 𝐱_[0,𝒯_1]⟩
= ⟨ M_τ(𝒯_2) ⟩,
where we used that 𝒯_2 ≥𝒯_1 and Eq. (<ref>) to reach the inequality. The last line follows from the fact that we are summing the conditional average over all possible trajectories for stopping times 𝒯_1 between 0 and 𝒯_2. Finally by replacing in Eq. (<ref>) the explicit expression for M_τ(t) in Eq. (<ref>), we directly recover Eq. (<ref>). The inequality in Eq. (<ref>) is saturated either for 𝒯_2 = 𝒯_1 or when the supermartingale M_τ(t) becomes a martingale, that is, in the absence of absolute irreversibility.
§ SCALING OF THERMODYNAMIC AVERAGES WITH LIMIT TIME IN THE DFA EXAMPLE
As pointed out in Sec. <ref>b for the minimal DFA in Fig. <ref>a), we observe a tendency for the intrinsic mismatch cost at stopping times ⟨Σ(𝒯)⟩ to saturate when increasing the limit time τ. This is in stark contrast with the scaling behaviour of the fixed-time average ⟨Σ(τ)⟩ with τ.
In this appendix we provide extra numerical evidence for the scaling behavior of mismatch cost at stopping and fixed times, which is shown in Fig. <ref>. There we observe that indeed the average intrinsic mismatch cost at fixed times scales linearly with τ, that is ⟨Σ(τ)⟩∼τ, as illustrated in the inset of Fig. <ref>. Moreover we obtain that ⟨Σ(τ)⟩/τ decreases monotonically with τ up to a saturating positive value, yet its scaling behaviour is rather insensitive to the statistics of the input strings (see open symbols in Fig. <ref>). This point makes us question whether the fixed-time average ⟨Σ(τ)⟩, or its rate per iteration ⟨Σ(τ)⟩/τ, is a suitable indicator of the thermodynamic costs of the computation. On the other hand, when considering the stopping-times average ⟨Σ(𝒯)⟩, we observe a sublinear scaling at moderate values of the limit time, i.e. ⟨Σ(𝒯)⟩∼τ^α, with |α| <1, reaching a plateau at large τ (see filled symbols in Fig. <ref>).
We also find that ⟨Σ(𝒯)⟩ is more sensitive than ⟨Σ(τ)⟩/τ to the value of p_0 for all values of τ explored in our simulations. This reveals that the intrinsic mismatch cost at stopping times ⟨Σ(𝒯)⟩ is a suitable quantity to quantify the average performance of a computation accomplished at a stochastic time. For example, the sensitivity of ⟨Σ(𝒯)⟩ to string statistics could be fruitfully exploited as a probe of the performance of a DFA in processing different regular languages, an exciting avenue that we leave for future work.
|
http://arxiv.org/abs/2307.04871v1 | 20230710194152 | LSEMINK: A Modified Newton-Krylov Method for Log-Sum-Exp Minimization | [
"Kelvin Kan",
"James G. Nagy",
"Lars Ruthotto"
] | math.OC | [
"math.OC"
] |
LSEMINK: A Modified Newton-Krylov Method for Log-Sum-Exp Minimization
Kelvin Kan, James G. Nagy, Lars Ruthotto
====================================================================================================
[2]Department of Mathematics, Emory University, USA ([email protected])
[3]Departments of Mathematics and Computer Science, Emory University, USA ([email protected], [email protected])
This paper introduces LSEMINK, an effective modified Newton-Krylov algorithm geared toward minimizing the log-sum-exp function for a linear model.
Problems of this kind arise commonly, for example, in geometric programming and multinomial logistic regression.
Although the log-sum-exp function is smooth and convex, standard line search Newton-type methods can become inefficient because the quadratic approximation of the objective function can be unbounded from below.
To circumvent this, LSEMINK modifies the Hessian by adding a shift in the row space of the linear model. We show that the shift renders the quadratic approximation to be bounded from below and that the overall scheme converges to a global minimizer under mild assumptions.
Our convergence proof also shows that all iterates are in the row space of the linear model, which can be attractive when the model parameters do not have an intuitive meaning, as is common in machine learning.
Since LSEMINK uses a Krylov subspace method to compute the search direction, it only requires matrix-vector products with the linear model, which is critical for large-scale problems.
Our numerical experiments on image classification and geometric programming illustrate that LSEMINK considerably reduces the time-to-solution and increases the scalability compared to geometric programming and natural gradient descent approaches. It has significantly faster initial convergence than standard Newton-Krylov methods, which is particularly attractive in applications like machine learning. In addition, LSEMINK is more robust to ill-conditioning arising from the nonsmoothness of the problem. We share our MATLAB implementation at <https://github.com/KelvinKan/LSEMINK>.
log-sum-exp minimization, Newton-Krylov method, modified Newton method, machine learning, geometric programming
65K10
§ INTRODUCTION
We consider minimization problems of the form
min_𝐱∈ℝ^n f(𝐱) = ∑_k=1^N w^(k)[ g^(k)(𝐱) - 𝐜^(k)^⊤𝐀^(k)𝐱],
where
g^(k)(𝐱):=log( 1_m^⊤exp(𝐀^(k)𝐱 + 𝐛^(k)) )
is the log-sum-exp function for a linear model defined by 𝐀^(k)∈ℝ^m × n and 𝐛^(k)∈ℝ^m, 𝐜^(k)∈ℝ^m, 1_m ∈ℝ^m is a vector of all ones, w^(k)'s are weights, and N is the number of linear models.
Problem (<ref>) arises commonly in machine learning and optimization. For example, multinomial logistic regression (MLR) in classification problems <cit.> is formulated as (<ref>). In geometric programming <cit.>, a non-convex problem can be convexified through a reformulation to the form (<ref>). The log-sum-exp function itself also has extensive applications in machine learning. For instance, it can serve as a smooth approximation to the element-wise maximum function <cit.>, where smoothness is desirable in model design since gradient-based optimizers are commonly used. Moreover, the log-sum-exp function is closely related to widely used softmax and entropy functions. For instance, the dual to an entropy maximization problem is a log-sum-exp minimization problem <cit.>, and the gradient of the log-sum-exp function is the softmax function <cit.>.
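Since the exponentials in g^(k) overflow easily for moderate arguments, any implementation should evaluate the log-sum-exp with the usual max-shift; a minimal Python/NumPy sketch of the objective (the variable names mirror the notation above, and the weights default to one) is:

import numpy as np

def logsumexp(z):
    # log(1^T exp(z)) evaluated stably by factoring out max(z)
    zmax = z.max()
    return zmax + np.log(np.sum(np.exp(z - zmax)))

def f_objective(x, As, bs, cs, ws=None):
    # f(x) = sum_k w_k [ log(1^T exp(A_k x + b_k)) - c_k^T A_k x ]
    ws = np.ones(len(As)) if ws is None else ws
    val = 0.0
    for A, b, c, w in zip(As, bs, cs, ws):
        val += w * (logsumexp(A @ x + b) - c @ (A @ x))
    return val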
Despite the smoothness and convexity of the log-sum-exp function, a standard implementation of line search Newton-type methods can be problematic. To realize this, note that the gradient and Hessian of the log-sum-exp function are given by
∇ f(𝐱) = ∑_k=1^N w^(k)𝐀^(k)^⊤ (𝐩^(k) - 𝐜^(k)), and ∇^2 f(𝐱) = ∑_k=1^N w^(k)𝐀^(k)^⊤𝐇^(k)𝐀^(k),
with 𝐩^(k) = exp(𝐀^(k)𝐱 + 𝐛^(k))/ 1_m^⊤exp(𝐀^(k)𝐱 + 𝐛^(k)), and 𝐇^(k) = diag(𝐩^(k)) - 𝐩^(k)𝐩^(k)^⊤.
The Hessian is positive semi-definite and rank-deficient because the null space of the 𝐇^(k)'s contains 1_m. Even more problematic is that when the 𝐩^(k)'s are close to a standard basis vector (which, for example, commonly occurs in MLR), the Hessian is close to the zero matrix even when the gradient is non-zero. In Newton's method, this means that the local quadratic approximation can be unbounded from below. To be precise, it is unbounded from below if and only if the gradient is not in the column space of the Hessian <cit.>.
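In the same notation, the gradient and a matrix-free Hessian-vector product follow directly from these formulas; the sketch below (single linear model, w=1, names are ours) provides exactly the two routines a Krylov method needs.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient(x, A, b, c):
    # grad f(x) = A^T (p - c), with p the softmax of A x + b
    p = softmax(A @ x + b)
    return A.T @ (p - c)

def hess_vec(x, A, b, c, v):
    # Hessian-vector product A^T (diag(p) - p p^T) A v, formed without building the Hessian
    p = softmax(A @ x + b)
    u = A @ v
    return A.T @ (p * u - p * (p @ u))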
Disciplined convex programming (DCP) packages (e.g., CVX <cit.>) can reliably solve the log-sum-exp minimization problem through a reformulation. For instance, CVX first formulates the problem using exponential cones <cit.> and applies backend solvers to solve the resulting problem directly (e.g., MOSEK <cit.>) or through successive polynomial approximation (e.g., SDPT3 <cit.> and SeDuMi <cit.>). However, this approach can be computationally demanding as the number of conic constraints scales with the product of the number of rows in the linear models and the number of linear models. For instance, CVX did not complete the image classification experiments for the whole dataset in <Ref> on a standard laptop in thirty minutes, while LSEMINK finishes on the same hardware in thirty seconds. Furthermore, the formulation relies on access to the elements of the 𝐀^(k)'s; i.e., this approach is not applicable in a matrix-free setting where the 𝐀^(k)'s are not built explicitly, and only routines for performing matrix-vector products are provided.
Tikhonov regularization <cit.>, which adds α/2_2^2 with α>0 to the objective function, avoids the cost of reformulation and alleviates the convergence issues with Newton-type methods.
The regularization shifts the Hessian by α and renders it positive definite, where is the identity matrix. Nonetheless, Tikhonov regularization introduces a bias and consequently changes the optimal solution. The regularization parameter α has to be chosen judiciously – a large α renders the problem easier to solve and produces a more regular solution but introduces more bias. In addition, one cannot use effective parameter selection algorithms <cit.> for linear problems due to the nonlinearity of the log-sum-exp function. On the other hand, first-order methods like gradient descent <cit.>, or AdaGrad <cit.>, which do not use the Hessian matrix, can avoid the problem. However, their convergence is inferior to methods that utilize curvature information <cit.>.
Modified Newton-type methods effectively tackle problems with rank-deficient or indefinite Hessians and do not introduce bias. The idea is to add a shift to the Hessian so that at the ith iteration, the scheme solves
min_𝐱1/2 (𝐱 - 𝐱_i)^⊤ (∇^2 f(𝐱_i) + β_i 𝐌_i) (𝐱 - 𝐱_i) + ∇ f (𝐱_i)^⊤ (𝐱 - 𝐱_i),
where β_i is a parameter, and the shift 𝐌_i renders the Hessian to be sufficiently positive definite. The quadratic approximation is bounded from below since the modified Hessian is positive definite. Hence the convergence issues are avoided. The effect of the Hessian shift is reminiscent of the Tikhonov regularization approach. Indeed, the scheme is sometimes called a Tikhonov-regularized Newton update <cit.>. However, the key conceptual difference between (<ref>) and Tikhonov regularization is that the former does not introduce any bias to the problem <cit.>, i.e., the optimal solution to the problem is independent of β_i's. There are different ways of defining 𝐌_i. For instance, 𝐌_i is spanned by some of the eigenvectors of the Hessian <cit.>, or is a modification to the factorization of the Hessian <cit.>. However, the computations needed for these approaches are intractable for large-scale problems commonly arising in machine learning. A simple and computationally feasible approach is to set 𝐌_i as the identity matrix <cit.>, which will be used as a comparing method in our numerical experiments.
In this paper, we propose LSEMINK, a novel modified Newton-Krylov method that circumvents the drawbacks outlined above. The main novelty in our method is the Hessian shift 𝐌_i=∑_k=1^N w^(k)𝐀^(k)^⊤𝐀^(k). This generates an update in the row space of the linear model, as compared to the aforementioned modified Newton-type methods, which return an update in the parameter space of the linear model (i.e., the 𝐱 space). This property is preferable in machine learning applications since model parameters often do not have an intuitive meaning, while the row space of the linear model contains interpretable data features. Note that standard convergence guarantees (e.g., <cit.>), which often require positive definiteness of the modified Hessian, do not apply to our method since our modified Hessian can be rank-deficient. We show that the quadratic approximation is bounded from below, and the overall scheme provably converges to a global minimizer under mild assumptions. Since a Krylov subspace method is applied to approximately solve (<ref>) to obtain the next iterate, LSEMINK is suitable for large-scale problems where the linear models are expensive to build and are only available through matrix-vector multiplications. Our numerical experiments on image classification and geometric programming illustrate that LSEMINK considerably reduces the time-to-solution and increases the scalability compared to DCP and natural gradient descent and has significantly faster initial convergence than standard Newton-Krylov methods.
This paper is organized as follows. In <Ref>, we describe the proposed LSEMINK. In <Ref>, we provide a global convergence guarantee. In <Ref>, we demonstrate the effectiveness of LSEMINK using two numerical experiments motivated by geometric programming and image classification, respectively. We finally conclude the paper in <Ref>.
§ LSEMINK
We propose LSEMINK, a modified Newton-Krylov method geared toward log-sum-exp minimization problems of the form (<ref>).
At the ith iteration, we first consider the quadratic approximation (<ref>) with 𝐌_i=∑_k=1^N w^(k)𝐀^(k)^⊤𝐀^(k). That is,
min_𝐱 q_i(𝐱) = 1/2 (𝐱 - 𝐱_i)^⊤(∇^2 f(𝐱_i) + β_i ∑_k=1^N w^(k)𝐀^(k)^⊤𝐀^(k)) (𝐱 - 𝐱_i) + ∇ f (𝐱_i)^⊤ (𝐱 - 𝐱_i)
= 1/2 (𝐱 - 𝐱_i)^⊤[ ∑_k=1^N w^(k)( 𝐀^(k)^⊤ (𝐇^(k)_i + β_i 𝐈) 𝐀^(k)) ] (𝐱 - 𝐱_i) + ∇ f (𝐱_i)^⊤ (𝐱 - 𝐱_i),
whose minimizer is given by 𝐱_i + Δ𝐱_i, where Δ𝐱_i solves the Newton equation
∇^2 q_i(𝐱_i) Δ𝐱_i = - ∇ q_i (𝐱_i),
and 𝐇^(k)_i is 𝐇^(k) evaluated at 𝐱_i. It is important to note that the Hessian shift in (<ref>) is different from the typical modified Newton approaches (e.g. eigenvalue modification <cit.>, identity matrix <cit.>, or modification to the factorization of the Hessian <cit.>) which lead to an update in the parameter space of the linear model (i.e. the 𝐱 space).
Instead, it generates an update direction in the row space of the linear models. This is preferable especially in machine learning applications because model parameters often do not have an intuitive meaning while the row space of the linear models contains data features and is interpretable. Although the Hessian of (<ref>) is rank-deficient especially when the linear models are over-parametrized (i.e. the 𝐀^(k)'s have more columns than rows), it is positive definite in the row space of the linear model. Consequently, the quadratic approximation is bounded from below, and the overall scheme provably converges to a global minimum; see <Ref> for a detailed derivation.
An alternative formulation for (<ref>) is
min_𝐱1/2 (𝐱 - 𝐱_i)^⊤∇^2 f(𝐱_i) (𝐱 - 𝐱_i) + ∇ f (𝐱_i)^⊤ (𝐱 - 𝐱_i) + β_i/2∑_k=1^N w^(k)𝐀^(k) (𝐱 - 𝐱_i)_2^2,
which can be interpreted as a Newton scheme with a proximal term acting on the row space of the 𝐀^(k)'s. This formulation shows that β_i controls the step size in a nonlinear line search arc. To be precise, β_i=0 and ∞ correspond to a Newton update with step size 1 and 0, respectively, and the update is given nonlinearly for 0<β_i<∞. The formulation also shows that our proposed scheme bears similarity to L^2 natural gradient descent (NGD) methods <cit.> which use the same proximal term. Nonetheless, unlike our approach, L^2 NGD methods generally do not directly incorporate Hessian information into their search direction and approximate curvature information using only the linear model.
The crucial difference between the proximal term and Tikhonov regularization is that the former does not introduce any bias <cit.>; i.e., the optimal solution is independent of β_i. Another advantage is that Tikhonov regularization requires parameter tuning, which is commonly done using a grid search for nonlinear problems like (<ref>), while in our proposed method β_i's are automatically selected by a backtracking Armijo line search scheme. The proposed scheme can also be perceived as a proximal point algorithm acting on the second-order approximation <cit.>.
We compute the update direction Δ𝐱_i by approximately solving the Newton equation (<ref>) using a Krylov subspace method (e.g., conjugate gradient method <cit.>) and obtain the next iterate 𝐱_i+1 = 𝐱_i + Δ𝐱_i. In particular, the Krylov subspace is given by
𝒦_r(∇^2 q_i(𝐱_i), ∇ q_i (𝐱_i))
= 𝒦_r (∑_k=1^N w^(k)( 𝐀^(k)^⊤ (𝐇^(k)_i + β_i 𝐈) 𝐀^(k)), ∑_k=1^N w^(k)𝐀^(k)^⊤ (𝐩_i^(k) - 𝐜^(k)) ),
where r is the dimension of the Krylov subspace and 𝐩_i^(k) is 𝐩^(k) evaluated at 𝐱_i.
Since the Krylov subspace method only requires routines to perform Hessian-vector multiplications, LSEMINK is applicable to large-scale problems commonly arising in machine learning applications where the linear models are only available through matrix-vector products.
An outline of the implementation of LSEMINK is presented in <Ref>.
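Although the reference implementation is provided in MATLAB, the structure of one iteration can be conveyed in a short Python sketch that reuses the helpers from the sketches above: form the shifted Hessian-vector product, run a few CG steps on the Newton equation (<ref>), and backtrack on β_i until the Armijo condition holds. The CG tolerances, the doubling/halving rule for β, and the parameter values below are illustrative guesses, not the authors' implementation.

import numpy as np

def cg(matvec, rhs, iters=20, tol=1e-8):
    # plain conjugate gradients for matvec(d) = rhs, starting from zero
    d = np.zeros_like(rhs)
    r = rhs.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        d += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

def lsemink_step(x, A, b, c, beta, gamma=1e-4, max_tries=20):
    # one modified Newton-Krylov step: solve (A^T (H + beta I) A) dx = -grad f,
    # then enlarge beta (i.e. shorten the step) until the Armijo test is satisfied
    g = gradient(x, A, b, c)                     # from the previous sketches
    f0 = f_objective(x, [A], [b], [c])
    for _ in range(max_tries):
        matvec = lambda v: hess_vec(x, A, b, c, v) + beta * (A.T @ (A @ v))
        dx = cg(matvec, -g)
        if f_objective(x + dx, [A], [b], [c]) <= f0 + gamma * (g @ dx):
            return x + dx, beta / 2              # accept; relax beta for the next iteration
        beta *= 2                                # reject; a larger beta gives a shorter step
    return x, beta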
LSEMINK has significantly faster initial convergence compared with standard Newton-Krylov solvers. This is particularly attractive in applications that do not require high accuracy, e.g., image classification. LSEMINK also considerably reduces the time-to-solution and has better scalability compared to geometric programming and natural gradient descent approaches. It avoids the respective drawbacks of the solvers outlined in <Ref>. Moreover, it is more robust to ill-conditioning arising from the nonsmoothness of the problem; see <Ref> for numerical experiments. We provide a MATLAB implementation at <https://github.com/KelvinKan/LSEMINK>. The implementation is easy to experiment with, as it only requires minimal knowledge and input from the user.
§ PROOF OF GLOBAL CONVERGENCE
In this section, we prove the global convergence of the proposed LSEMINK. It is noteworthy that existing convergence results cannot be directly applied due to the rank-deficiency of our modified Hessian. For instance, it is assumed in <cit.> that the modified Hessian is positive definite and has a bounded condition number. Our proof is modified from the approach in <cit.>, which studies proximal Newton-type methods for composite functions.
We first state the main theorem.
Assume that f is defined in (<ref>),
and inf_𝐱 f(𝐱) is attained in ℝ, then the sequence {𝐱_i }_i generated by LSEMINK converges to a global minimum regardless of the choice of initial guess 𝐱_0.
We note that <Ref> also applies to the case where the Newton equation (<ref>) is solved exactly. In the following, we will first discuss some properties of LSEMINK. We will then state and prove four lemmas which will aid the proof of <Ref>.
For simplicity of exposition and without loss of generality, in this section, we drop the superscript and focus on the case with only one linear model defined by 𝐀, 𝐛, and 𝐜, and the weight w=1. In this case, the Krylov subspace in (<ref>) becomes
𝒦_r(∇^2 q_i(𝐱_i), ∇ q_i (𝐱_i)) = 𝒦_r(𝐀^⊤ (𝐇_i + β_i 𝐈) 𝐀, 𝐀^⊤ (𝐩_i - 𝐜)).
We note that our proof can be straightforwardly extended to the general case by setting
𝐀 = [𝐀^(1); ...; 𝐀^(N)], 𝐜 = [w^(1)𝐜^(1); ...; w^(N)𝐜^(N)],
𝐩_i = [w^(1)𝐩_i^(1); ...; w^(N)𝐩_i^(N)], and 𝐇_i = blkdiag(w^(1)𝐇_i^(1), ..., w^(N)𝐇_i^(N)),
where blkdiag denotes a block diagonal matrix.
Recall that the Krylov subspace in (<ref>) is constructed to approximately solve the Newton equation and obtain the update direction Δ_i. This is equivalent to building a rank-r approximation ∇^2 q_i(_i) ≈_i _i _i^⊤ and computing the next iterate by
_i+1 = _1/2 ( - _i)^⊤_i _i _i^⊤ ( - _i) + ∇ f (_i)^⊤ ( - _i).
Here, the columns of _i ∈ℝ^n × r form an orthonormal basis for the Krylov subspace and _i ∈ℝ^r × r. Since ∇ f(_i) ∈ row() = col(^⊤ (_i + β_i ) ) for β_i>0 and the Krylov subspace always contains ∇ f(_i), the column space of _i _i _i^⊤ always contains ∇ f(_i). This means that the quadratic function (<ref>) is bounded from below <cit.> and admits a minimum. The iterate _i+1 is the minimum norm solution to (<ref>) given by
_i+1 = _i + Δ_i, where Δ_i = - _i _i^-1_i^⊤∇ f(_i).
Next, we state and prove some lemmas which will be used to prove the main theorem.
The update Δ_i generated by the iterative scheme (<ref>) satisfies
Δ_i ∈ row () ,
Δ_i^⊤∇^2 q_i(_i) Δ_i = Δ_i^⊤_i_i_i^⊤Δ_i.
Here, (<ref>) means that the update direction is in the row space of the linear model.
By construction, the Krylov subspace (<ref>) is a subspace of row (), and by (<ref>) we have Δ_i∈ col (_i). Thus we have Δ_i∈col(_i) ⊆row(), which proves (<ref>).
Consider the full representation of the Hessian of (<ref>) generated by the Krylov subspace method
∇^2 q_i(_i) = ^⊤ (_i + β_i) = [ _i _i ][ _i _1; _2 _3 ][ _i^⊤; _i^⊤ ],
where col(_i) ⊥ col(_i). We have
Δ_i^⊤∇^2 q_i(_i) Δ_i = Δ_i^⊤[ _i _i ][ _i _1; _2 _3 ][ _i^⊤; _i^⊤ ]Δ_i
= [ Δ_i^⊤_i 0 ][ _i _1; _2 _3 ][ _i^⊤Δ_i; 0 ], as Δ_i∈ col(_i),
=
Δ_i^⊤_i_i_i^⊤Δ_i,
which proves (<ref>).
The update Δ_i generated by (<ref>) satisfies the descent condition
∇ f(_i)^⊤Δ_i≤ - Δ_i^⊤^⊤ (_i + β_i) Δ_i.
Since _i+1 is a solution to (<ref>), for any t ∈ (0,1), we have
1/2Δ_i^⊤_i_i_i^⊤Δ_i + ∇ f(_i)^⊤Δ_i≤1/2 (tΔ_i)^⊤_i_i_i^⊤ (t Δ_i) + ∇ f(_i)^⊤ (t Δ_i).
By rearranging the terms, we have
(1-t^2)/2Δ_i^⊤_i_i_i^⊤Δ_i + (1-t) ∇ f(_i)^⊤Δ_i ≤ 0
(1+t)/2Δ_i^⊤_i_i_i^⊤Δ_i + ∇ f(_i)^⊤Δ_i ≤ 0
∇ f(_i)^⊤Δ_i ≤ - (1+t)/2Δ_i^⊤_i_i_i^⊤Δ_i.
Letting t → 1^-, we obtain
∇ f(_i)^⊤Δ_i≤ - Δ_i^⊤_i_i_i^⊤Δ_i.
Combining (<ref>) and (<ref>), we obtain (<ref>).
In the following lemma, we will make use of the fact that ∇ f is Lipschitz continuous. This is because the gradient of the log-sum-exp function is the softmax function, which is Lipschitz continuous <cit.>.
Let λ_ min be the smallest nonzero eigenvalue of ^⊤, and L be the Lipschitz constant for ∇ f. For line search parameter γ∈ (0,1) and
β_i ≥L/2λ_ min(1-γ),
the following Armijo line search condition holds
f(_i+1) ≤ f(_i) + γ∇ f(_i)^⊤ (_i+1-_i).
First, note that
(_i+1 - _i)^2__i + β_i ≥β_i (_i+1 - _i) _2^2 ≥β_i λ_ min (_i+1 - _i) _2^2.
Here, in the second step we used that (_i+1 - _i) ∈row() = row(^⊤) (Lemma <ref>), row(^⊤)^⊥ = null(^⊤), and λ_ min is the smallest nonzero eigenvalue of ^⊤.
Next, we have
f(_i+1) ≤ f(_i) + ∇ f(_i)^⊤ (_i+1 - _i) + L/2_i+1 - _i _2^2
≤ f(_i) + ∇ f(_i)^⊤ (_i+1 - _i) + β_i λ_ min (1-γ)_i+1 - _i _2^2
≤ f(_i) + ∇ f(_i)^⊤ (_i+1 - _i) + (1 - γ) (_i+1 - _i)^2__i + β_i
≤ f(_i) + ∇ f(_i)^⊤ (_i+1 - _i) - (1 - γ) ∇ f(_i)^⊤ (_i+1 - _i )
= f(_i) + γ∇ f(_i)^⊤ (_i+1 - _i).
Here, the first, second, third, and fourth steps use the Lipschitz continuity of ∇ f, (<ref>), (<ref>), and Lemma <ref>, respectively.
The iterative scheme (<ref>) generates a fixed point _* if and only if _* is a stationary point.
"⇐": Substituting ∇ f(_*) = 0 into (<ref>), we obtain Δ_*=0. Hence _* is a fixed point.
"⇒": Let = - _* for any . Since _* is a fixed point to (<ref>), we have, for any t ∈ℝ,
1/2 (t)^⊤_* _* _*^⊤ (t ) + ∇ f(_*)^⊤ (t )
≥1/2 (_*-_*)^⊤_* _* _*^⊤ (_*-_*) + ∇ f(_*)^⊤ (_*-_*).
Simplifying this, we obtain
t^2/2^⊤_* _* _*^⊤ + t ∇ f(_*)^⊤ ≥ 0
∇ f(_*)^⊤ ≥ - t/2^⊤_* _* _*^⊤.
Taking t → 0, we obtain ∇ f(_*)^⊤≥ 0 for any .
This implies ∇ f(_*) is a zero vector, that is, _* is a stationary point.
Now, we are ready to prove the main theorem.
The sequence { f(_i) }_i is decreasing because the update directions are descent directions (<Ref>) and the Armijo line search scheme guarantees sufficient descent at each step (<Ref>). By the continuity of f, it is closed <cit.>. Since f is closed and attains its infimum in ℝ,
the decreasing sequence { f(_i) }_i converges to a limit.
By the sufficient descent condition (<ref>), the convergence of { f(_i) }_i and α >0,
∇ f(_i)^⊤ (_i+1 - _i)
converges to zero. Hence, by (<ref>),
Δ_i^⊤^⊤ (_i + β_i ) Δ_i
converges to zero. Since (_i + β_i ) is positive definite and Δ_i ∈row() (<Ref>), Δ_i converges to the zero vector.
This implies that _i converges to a fixed point of (<ref>). By <Ref>, _i converges to a stationary point. By the convexity of f, _i converges to a global minimum.
§ NUMERICAL EXPERIMENTS
We perform two numerical experiments for minimizing the log-sum-exp function for a linear model. We compare the performance of the proposed LSEMINK with three commonly applied line search iterative methods and three disciplined convex programming (DCP) solvers; see <Ref>. In <Ref>, we consider multinomial logistic regression (MLR) arising in image classification. In <Ref>, we experiment with a log-sum-exp minimization problem arising in geometric programming.
The experimental results show that LSEMINK has much better initial convergence and is more robust and scalable than the competing methods.
§.§ Benchmark Methods
We compare the proposed LSEMINK with three common line search iterative schemes and three DCP solvers for machine learning and geometric programming applications. Firstly, we implement a standard Newton-CG (NCG) algorithm with a backtracking Armijo line search. Secondly, we compare with an L^2 natural gradient descent (NGD) method <cit.> that approximately solves
min_1/2∇ f (_i)^⊤ ( - _i) + λ_i/2∑_k=1^N w^(k)^(k) ( - _i)_2^2,
using CG to obtain the next iterate, where λ_i controls the step size and is determined by a backtracking Armijo line search scheme, and the last term is a proximal term acting on the row space of the linear model. This scheme bears similarity to LSEMINK as the proximal term has the same effect as the shift in Hessian of LSEMINK. However, it does not make use of the Hessian and only approximates curvature information using the linear model. Thirdly, to demonstrate the effectiveness of the Hessian modification in LSEMINK, we compare with a standard modified Newton-Krylov (SMNK) scheme, which approximately solves (<ref>) with _i= using Lanczos tridiagonalization, which has the same iterates as CG up to rounding errors but allows computations for the update direction to be re-used during line search. For LSEMINK, the Newton equation (<ref>) is approximately solved by CG. We note that an update direction has to be re-computed for each attempted value of β_i during line search. In other words, unlike SMNK, the update direction computation cannot be re-used. However, our experimental results show that LSEMINK is still efficient in terms of computational cost thanks to the effectiveness of the modified Hessian. In each experiment, we use the same maximum number of iterations and tolerance for the CG and Lanczos schemes across different line search iterative methods.
In addition, we apply CVX <cit.>, a DCP package, paired with three different backend solvers (SDPT3 <cit.>, SeDuMi <cit.>, and MOSEK <cit.>). The best precision for CVX is used in the experiments; see <cit.> for detailed information.
Cost Measurement We measure the computational costs for different line search iterative methods in terms of work units. In particular, a work unit represents a matrix-vector product with the linear models or their transpose. This is because these computations are usually the most expensive steps during optimization. For instance, in the MLR experiments of <Ref>, the linear models ^(k)'s contain the propagated high-dimensional features of all the training data. Note that the number of work units in one iteration can differ across different line search iterative methods since a different number of CG/Lanczos iterations or line search updates can be performed. In addition to work units, we also compare the computational costs of all methods in terms of total runtime.
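As a rough illustration of this cost measure (an added sketch; the wrapper class and its interface are invented for illustration), one can wrap the linear model so that every product with it or its transpose increments a counter:

```python
# Illustrative sketch: count "work units", i.e. products with the linear model
# A or its transpose, by hiding the matrix behind a small counting wrapper.
import numpy as np

class CountedLinearModel:
    def __init__(self, A):
        self.A = A
        self.work_units = 0

    def matvec(self, v):          # one work unit: A @ v
        self.work_units += 1
        return self.A @ v

    def rmatvec(self, u):         # one work unit: A.T @ u
        self.work_units += 1
        return self.A.T @ u

K = CountedLinearModel(np.random.default_rng(1).standard_normal((50, 10)))
g = K.rmatvec(K.matvec(np.ones(10)))   # a gradient-like computation
print(K.work_units)                    # -> 2
```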
§.§ Experiment 1: Image Classification
Perhaps the most prominent example of log-sum-exp minimization is multinomial logistic regression (MLR) arising in supervised classification. Here, we experiment on an MLR problem for the classification of MNIST <cit.> and CIFAR-10 <cit.> image datasets. The MNIST dataset consists of 60,000 28 × 28 hand-written images for digits from 0 to 9. The CIFAR-10 consists of 60,000 32 × 32 color images equally distributed for the following ten classes: airplane, automobile,
bird, cat, deer, dog, frog, horse, ship, and truck. Example images for the two datasets are shown in <Ref> and <Ref>, respectively.
Problem Description
Let n_f be the number of features, n_c be the number of classes, and Δ_n_c be the n_c-dimensional unit simplex. Denote a set of data by {^(k), ^(k)}_k=1^N ⊂ℝ^n_f×Δ_n_c, where ^(k) and ^(k) are the input feature and target output label, respectively. In our experiments, we consider two feature extractors that enhance the features ^(k) by propagating it into a higher dimensional space ℝ^n_p. The first feature extractor is the random feature model (RFM) <cit.>. It applies a nonlinear transformation given by
_ RFM(^(k)) = σ(^(k) + ),
where σ is the element-wise ReLU activation function, ∈ℝ^n_p × n_f and ∈ℝ^n_p are randomly generated. The second feature extractor is performed by propagating the features through the hidden layers of a pre-trained AlexNet <cit.>. In particular, the AlexNet was pre-trained on the ImageNet dataset <cit.>, which is similar to the CIFAR-10 dataset, using MATLAB's deep neural networks toolbox. This procedure is also known as transfer learning. These feature extractors can empirically enhance the generalization of the model, i.e., the ability to classify unseen data correctly.
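A minimal Python sketch of the random feature transformation (<ref>); the dimensions, the Gaussian sampling of the weights, and the fixed seed are illustrative assumptions, since the paper only states that the entries are randomly generated:

```python
# Minimal sketch of the random feature model (RFM): propagate input features
# through a fixed random affine map followed by an element-wise ReLU.
import numpy as np

def random_feature_model(Y, n_p, seed=0):
    """Y: (n_f, N) matrix of input features; returns (n_p, N) propagated features."""
    n_f = Y.shape[0]
    rng = np.random.default_rng(seed)
    K = rng.standard_normal((n_p, n_f))          # fixed random weights (assumed Gaussian)
    b = rng.standard_normal((n_p, 1))            # fixed random bias
    return np.maximum(K @ Y + b, 0.0)            # element-wise ReLU

Y = np.random.default_rng(1).standard_normal((784, 100))   # e.g. 100 vectorized images
Z = random_feature_model(Y, n_p=1000)
print(Z.shape)   # (1000, 100)
```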
The goal of the supervised classification problem is to train a softmax classifier
s(, (^(k))) = exp((^(k)))/ 1_n_c 1_n_c^⊤exp((^(k)))
such that s(, (^(k))) ≈^(k). Here are model parameters, the exp and division are applied element-wise, 1_n_c is an n_c-dimensional vector of all ones, and : ℝ^n_f→ℝ^n_p is a feature extractor.
To this end, we first consider the sample average approximation (SAA) <cit.> of an MLR problem formulated as
min_∈ℝ^n_c × n_p F() = - 1/N∑_k=1^N ^(k)^⊤log( s(, (^(k))) )
= 1/N∑_k=1^N[ (^(k)^⊤ 1_n_c) log( 1_n_c^⊤exp((^(k))) ) - ^(k)^⊤(^(k)) ]
= 1/N∑_k=1^N[ log( 1_n_c^⊤exp((^(k))) ) - ^(k)^⊤(^(k)) ],
where the log operation is applied element-wise, and we use the fact that ^(k)^⊤ 1_n_c=1 since ^(k)∈Δ_n_c.
The feature extractor is assumed to be fixed since the focus is on the log-sum-exp minimization problem.
We vectorize the variable = vec () so that the MLR problem becomes
min_∈ℝ^n_cn_p f() =1/N∑_k=1^N[ log( 1_n_c^⊤exp(^(k)) ) - ^(k)^⊤^(k)],
which is of the form of (<ref>) and where ^(k) = (^(k))^⊤⊗_n_c.
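The following Python sketch evaluates this sample-average objective for given propagated features; storing the features as columns of a matrix and using SciPy's logsumexp are implementation assumptions:

```python
# Minimal sketch of the multinomial logistic regression objective in
# log-sum-exp form: f(W) = (1/N) * sum_k [ logsumexp(W z_k) - c_k^T W z_k ].
import numpy as np
from scipy.special import logsumexp

def mlr_objective(W, Z, C):
    """W: (n_c, n_p) weights, Z: (n_p, N) features, C: (n_c, N) one-hot labels."""
    S = W @ Z                                   # (n_c, N) scores
    return np.mean(logsumexp(S, axis=0) - np.sum(C * S, axis=0))

rng = np.random.default_rng(0)
n_c, n_p, N = 10, 1000, 100
Z = rng.standard_normal((n_p, N))
C = np.eye(n_c)[:, rng.integers(0, n_c, N)]     # random one-hot targets
W = np.zeros((n_c, n_p))
print(mlr_objective(W, Z, C))                   # = log(n_c) at W = 0
```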
Experimental Results In the MLR experiments, the line search iterative solvers stop when the norm of gradient is below 10^-14 or after 3,000 work units. We stop the CG and Lanczos scheme when the norm of the relative residual drops below 10^-3 or after 20 iterations.
We first perform a small-scale experiment in which only N=100 training data points are used, and a random feature model with dimension m=1,000 is applied. Since under this setup the data can be fit perfectly to achieve a zero training error, the model predictions (<ref>) are close to standard basis vectors near an optimum. In this situation, the Hessian is close to a zero matrix, and the robustness of the solvers can be tested. The results are reported in <Ref> and <Ref>. In <Ref>, one of the results for the standard Newton-CG scheme is not shown, as it fails to converge near the end. This is because the Hessian vanishes and consequently, the second-order approximation is unbounded from below. The natural gradient descent method has the slowest convergence and has yet to converge at the end. Both the standard modified Newton-Krylov method and LSEMINK achieve the stopping criteria under the specified work units. In particular, LSEMINK has superior convergence where the objective function value is up to five orders of magnitude smaller than the second-best method during optimization. LSEMINK also has the fastest time-to-solution. This demonstrates the effectiveness of LSEMINK and the efficacy of its modified Hessian over the standard one. SeDuMi and, particularly, SDPT3 can achieve very accurate results, but their runtime is about 15 times longer than that of LSEMINK. MOSEK fails to obtain a solution.
We then experiment with n=50,000 training data and 10,000 validation data. For the MNIST dataset, we use an RFM to propagate the features to an m=1,000-dimensional space. For the CIFAR-10 dataset, features with dimension m=9,216 are extracted from the pool5 layer of a pre-trained AlexNet. Here different feature extractors are used for the two datasets because a better validation accuracy can be achieved. In <Ref>, the results for an MLR problem are illustrated. In <Ref>, we report the performance for an MLR problem with a Tikhonov regularization term α/2_2^2, where α = 10^-3.
Using our state-of-the-art laptop, the CVX solvers cannot complete the experiments within thirty minutes, while the line search methods finish in thirty seconds.
Hence, we focus on the latter methods in this test. The figures show that the L^2 natural gradient descent method is the slowest. The standard Newton-CG and standard modified Newton-Krylov have good convergence results on one dataset but not the other. In contrast, LSEMINK is very competitive on both datasets. Specifically, it has good initial convergence where the objective function value is up to an order of magnitude smaller than the second-best scheme in the first few iterations. Moreover, its
results are comparable with the other methods in terms of final training error, training accuracy, validation accuracy, and norm of gradient.
§.§ Experiment 2: Geometric Programming
We consider a log-sum-exp minimization problem which commonly arises in geometric programming <cit.> and is used to test optimization algorithms <cit.>. In particular, it is formulated as
min_ηlog( 1_m^⊤exp(( + )/η) ),
where ∈ℝ^n, ∈ℝ^m × n, and η controls the smoothness of the problem.
In particular, when η→ 0 the objective function converges to the point-wise maximum function max ( + ) and its Hessian vanishes.
We follow the experimental setups in <cit.>, which use m=100, n=20, and generate the entries of and randomly. We perform the experiments with small values of η to test the robustness of the methods. In particular, we test with η=10^-5, 10^-3, and 10^-1, respectively. We stop the line search iterative schemes after 10,000 work units. The CG and Lanczos schemes stop when the relative residual drops below 10^-3 or after 20 iterations.
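A Python sketch of this test objective and its gradient is given below (illustrative only; the exact random data generation in the cited setup may differ):

```python
# Minimal sketch of the geometric-programming test problem:
#   f(x) = eta * log( 1^T exp((A x + b) / eta) ),
# which approaches max(A x + b) as eta -> 0 and becomes nearly nonsmooth.
import numpy as np
from scipy.special import logsumexp

def gp_objective_and_grad(x, A, b, eta):
    z = (A @ x + b) / eta
    p = np.exp(z - logsumexp(z))        # softmax, computed stably
    return eta * logsumexp(z), A.T @ p  # gradient lies in row(A)

rng = np.random.default_rng(0)
m, n = 100, 20
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
for eta in (1e-1, 1e-3, 1e-5):
    f, g = gp_objective_and_grad(np.zeros(n), A, b, eta)
    print(eta, f, np.linalg.norm(g))
```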
The experimental results are shown in <Ref> and <Ref>. We see that the experiments are very challenging as the standard Newton-CG and all the CVX solvers cannot return a solution in some or all the experiments. In particular, the standard Newton-CG breaks in the first iteration in two of the experiments. This is because the quadratic approximation is unbounded from below. Both SeDuMi and SDPT3 fail in some of the experiments. MOSEK fails in all the experiments. When the CVX solvers succeed in returning a solution, they have significantly longer runtime (up to 60 times slower) compared to the line search methods. Similar to the previous experiments, the L^2 natural gradient descent method has the slowest convergence and has yet to converge after the specified work units. The standard modified Newton-Krylov and LSEMINK are robust in the experiments and can return accurate solutions for η=10^-3 and 10^-1. This indicates the effectiveness of Hessian modification in handling challenging optimization problems. Moreover, LSEMINK converges faster than the standard modified Newton-Krylov method in the early stage. This indicates the effectiveness of the proposed Hessian modification over the standard one. However, we see that when η=10^-3 and 10^-5, LSEMINK and all competing methods cannot return a solution with the desired norm of gradient. This is because for a small η, the objective function is close to being nonsmooth. In contrast, the convergence of gradient-based methods like LSEMINK requires the differentiability of the objective function.
§ CONCLUSION
We present LSEMINK, a modified Newton-Krylov algorithm tailored for optimizing the log-sum-exp function for a linear model. The novelty of our approach is incorporating a Hessian shift in the row space of the linear model. This does not change the minimizers, renders the quadratic approximation bounded from below, and makes the overall scheme provably convergent to a global minimum under standard assumptions. Since the update direction is computed using Krylov subspace methods which only require matrix-vector products with the linear model, LSEMINK is applicable to large-scale problems. Numerical experiments on image classification and geometric programming illustrate that LSEMINK has significantly faster initial convergence than standard Newton-Krylov methods, which is particularly attractive in applications like machine learning. They also show that LSEMINK considerably reduces the time-to-solution and is more scalable than DCP solvers and natural gradient descent. Also, LSEMINK is more robust to ill-conditioning arising from the nonsmoothness of the problem. We provide a MATLAB implementation at <https://github.com/KelvinKan/LSEMINK>.
§ ACKNOWLEDGEMENTS
This work was supported in part by NSF awards DMS 1751636, DMS 2038118, AFOSR grant FA9550-20-1-0372, and US DOE Office of Advanced Scientific Computing Research Field Work Proposal 20-023231. The authors would like to thank Samy Wu Fung for sharing the code for propagating the features of the CIFAR-10 dataset with AlexNet.
abbrv
|
http://arxiv.org/abs/2307.04119v1 | 20230709082201 | Categorical Realizability for Non-symmetric Closed Structures | [
"Haruka Tomita"
] | cs.LO | [
"cs.LO"
] |
Categorical Realizability for Non-symmetric Closed Structures
Haruka Tomita
In categorical realizability, it is common to construct categories of assemblies and categories of modest sets from applicative structures.
These categories have structures corresponding to the structures of applicative structures. In the literature, classes of applicative structures inducing categorical structures such as Cartesian closed categories and symmetric monoidal closed categories have been widely studied.
In this paper, we expand these correspondences between categories with structure and applicative structures by identifying the classes of applicative structures giving rise to closed multicategories, closed categories, monoidal bi-closed categories as well as (non-symmetric) monoidal closed categories. These applicative structures are planar in that they correspond to appropriate planar lambda calculi by combinatory completeness.
These new correspondences are tight: we show that, when a category of assemblies has one of the structures listed above, the underlying applicative structure is in the corresponding class.
In addition, we introduce planar linear combinatory algebras by adapting the linear combinatory algebras of Abramsky, Haghverdi and Scott to our planar setting, which give rise to categorical models of the linear exponential modality and the exchange modality on the non-symmetric multiplicative intuitionistic linear logic.
August 12, 2023
===================
§ INTRODUCTION
Realizability started with <cit.> to give interpretations for Heyting arithmetic, and subsequently has been developed in many directions.
The categorical realizability we call here is one such development, giving categorical models of various programming languages and logics.
Given a very simple algebraic structure called an applicative structure (or often called a combinatory algebra), we construct categories of assemblies and of modest sets, which are used as categorical models.
For an applicative structure , the category of assemblies is the category of “-computable universe" and its categorical structure depends on the computational structure of .
Therefore, giving certain conditions, we obtain with corresponding categorical structures.
The best known result is that the condition of being a partial combinatory algebra (PCA) implies that (and ) is a Cartesian closed category (CCC) <cit.>.
A PCA is an applicative structure containing two special elements and which express substitution and discarding.
(We often refer to such elements as the -combinator and the -combinator.)
PCAs can also be characterized by combinatory completeness, that is, the property that any computable function (i.e., any function expressed as an untyped lambda term) on a PCA can be represented by an element of the PCA itself.
Categorical realizability for linear structures is also well investigated.
Assuming is a -algebra, i.e., it has combinators , and , the categories and become symmetric monoidal closed categories (SMCCs) <cit.>.
, and are combinators expressing composition, exchanging and identity operations respectively, and -algebras correspond to the linear lambda calculus by the combinatory completeness.
These results for PCAs and -algebras provide a useful method for giving various models based on CCCs and SMCCs.
On the other hand, categorical realizability based on non-symmetric structures has been less investigated.
In our previous studies <cit.>, we proposed “planar realizability" giving rise to non-symmetric categorical structures, such as closed multicategories, closed categories, skew closed categories and monoidal bi-closed categories.
The aim of this paper is to summarize and develop these results.
First in section <ref>, we start with recalling basic notions of categorical realizability.
Results on PCAs and -algebras are presented in that section.
Also notions of applicative morphisms and linear combinatory algebras (LCAs) are recalled from <cit.>, that are used to obtain models of linear exponential modalities on linear calculus.
Basic knowledge of category theory and the lambda calculus is assumed and not reviewed here.
Next in section <ref>, we introduce several classes of applicative structures inducing non-symmetric categorical structures.
Realizing non-symmetric closed structures is a more subtle problem than the symmetric cases like CCCs and SMCCs.
Since the -combinator in -algebras induces the symmetry of the monoidal structure on the category of assemblies, one may think we can obtain non-symmetric categorical structures by excluding the -combinator.
However, simply excluding the -combinator leads to no interesting categorical structures like internal hom functors, since realizing closed structures needs some exchanging of realizers even if the closed structures are not symmetric.
We have to give applicative structures with appropriately weakened exchanging that realizes internal hom structures but does not realize symmetries.
To resolve this problem, in <cit.>, we introduced a unary operation () on an applicative structure, which allows restricted exchanging.
In section <ref> and <ref>, we recall these results, that -algebras induce (non-symmetric) closed multicategories and -algebras induce closed categories.
By the combinatory completeness, these classes of applicative structures correspond to the planar lambda calculus.
By the unary operation (), we obtain non-symmetric closed structures; however, this operation is not sufficient to obtain non-symmetric monoidal structures.
Assume that on a -algebra has tensor products.
When we take realizers of tensor products of in the same way that we take realizers of Cartesian/tensor products of assemblies on PCAs/-algebras, the realizer of the unitors of leads to a realizer of the symmetry.
That is, this attempt to get non-symmetric tensor products from -algebras fails in the sense that the resulting tensor products are symmetric.
Here what matters is that the way of realizing products of assemblies on PCAs/-algebras corresponds to the representation of tensor products
X ⊗ Y ≅∀α. (X Y α) α
in the second-order linear logic ( <cit.>), which is valid only if the tensor is symmetric.
Thus, categorical realizability for non-symmetric monoidal structures needs some modification on the way realizing tensor products.
In this paper, we give two answers for this problem.
One is the way preparing a new combinator which directly realizes pairings.
The class of applicative structures, -algebras, is newly introduced in this paper and give rise to non-symmetric monoidal closed categories.
We show results about -algebras in section <ref>.
The other way is taking realizers of tensor products matching the representation
X ⊗ Y ≅∀α. (α Y X) α
in the second-order linear logic, which is valid even in the non-symmetric case.
To give such realizers, the class of applicative structures, bi--algebras, was introduced in <cit.>.
Bi--algebras feature two kinds of applications corresponding to two kinds of implications and , and have the combinatory completeness for the lambda calculus with two kinds of applications (which we call the bi-planar lambda calculus in this paper).
In section <ref>, we recall these results about bi--algebras.
Classes of applicative structures appearing in this paper are summarized in Table <ref>.
Also combinators and operations are summarized in Table <ref>.
The classes of applicative structures in this paper form a hierarchy as summarized in Table <ref>.
In section <ref>, we show that these classes are different from each other.
To show the strictness of the inclusion, it is sufficient to give examples belonging to one side and not to the other side, and we give such examples in section <ref>.
While these proofs in section <ref> are mostly straightforward and not conceptually new, sometimes it is not easy to show that some applicative structure does not belong to some class of applicative structures.
As such an example, in section <ref>, we show that the untyped planar lambda calculus (with no constants) is not a bi--algebra.
In the next section <ref>, we give the computational lambda calculus <cit.> as a rather unexpected example of a -algebra and show the computational lambda calculus is not a bi--algebra.
To better clarify the relationship between applicative structures and categorical structures of categories of assemblies, in section <ref>, we show certain “inverses" of propositions shown in section <ref>.
That is, assuming has a certain categorical structure (such as being an SMCC), we show that belongs to the corresponding class (such as -algebras) under several conditions.
While the propositions for the cases of -algebras and -algebras were already presented in <cit.>, those for the cases of -algebras and bi--algebras are newly shown in this paper.
By integrating results of section <ref>, <ref> and <ref>, we can say that, for instance, the category of assemblies on the planar lambda calculus indeed has non-symmetric closed structure.
In section <ref>, we reformulate notions of LCAs for our -algebras.
Although linear exponential comonads are usually defined as comonads on symmetric monoidal categories, we can also define linear exponential comonads on non-symmetric monoidal categories <cit.>.
In <cit.>, we defined exponential relational planar linear combinatory algebras (exp-rPLCAs) as pairs of a bi--algebra and an applicative endomorphism on it, that give rise to linear exponential comonads on (non-symmetric) monoidal bi-closed categories.
The definition of exp-rPLCAs in <cit.> are the reformulation of the definition of (relational) LCAs to bi--algebras.
In this paper, we generalize exp-rPLCAs a bit by changing “bi--algebras" to “-algebras," and still call the generalized ones exp-rPLCAs.
New exp-rPLCAs give rise to linear exponential comonads on (non-symmetric) monoidal closed categories, and correspond to adjoint pairs of applicative morphisms between -algebras and PCAs.
There are also modalities on (non-symmetric) linear calculus other than the linear exponential modality.
The exchange modality, investigated in <cit.>, is a modality connecting a commutative logic and a non-commutative logic (the Lambek calculus).
Categorical models of the exchange modality are given as monoidal adjunctions between monoidal bi-closed categories and SMCCs, which are called Lambek adjoint models.
In <cit.>, we defined exchange relational planar linear combinatory algebras (exch-rPLCAs) that give rise to Lambek adjoint models.
In this paper, like exp-rPLCAs, we reformulate exch-rPLCAs for -algebras.
New exch-rPLCAs correspond to adjoint pairs between -algebras and -algebras, and give rise to monoidal adjunctions between (non-symmetric) monoidal closed categories and SMCCs, that are models of the exchange modality based on the non-symmetric multiplicative intuitionistic linear logic (that is, a fragment of the Lambek calculus without bi-closedness).
Finally, in sections <ref> and <ref>, we discuss related work, summarize the conclusions and describe future work.
§ BACKGROUND
§.§ Applicative structures and categories of assemblies
First we recall basic notions of categorical realizability.
Notations and definitions in this subsection are from <cit.>.
A partial applicative structure is a pair of a set and a partial binary operation (x ,y) ↦ x · y on .
When the binary operation is total, we say is a total applicative structure.
We often omit · and write x · y as x y simply.
We also omit unnecessary parentheses assuming that application joins from the left.
For instance, x y (z w) denotes (x · y) · (z · w).
In the sequel, we use two notations “↓” and “≃.”
We write x y ↓ for that x · y is defined.
“≃” denotes the Kleene equality, which means that if the one side of the equation is defined then the other side is also defined and both sides are equal.
Let be a partial applicative structure.
* An assembly on is a pair X = (|X|,_X), where |X| is a set and _X is a function sending x ∈ |X| to a non-empty subset x_X of .
We call elements of x_X realizers of x.
* For assemblies X and Y on , a map of assemblies f:X Y is a function f:|X| |Y| such that there exists an element r ∈ realizing f.
Here we say “r realizes f” or “r is a realizer of f” if r satisfies that
∀ x ∈ |X|, ∀ a ∈x_X, r a ↓ and r a ∈f(x)_Y.
If we assume two additional conditions on a partial applicative structure, we can construct two kinds of categories.
Let be a partial applicative structure satisfying that:
* has an element such that ∀ x ∈, x ↓ and x = x;
* for any r_1, r_2 ∈, there exists r ∈ such that ∀ x ∈, r x ≃ r_1 (r_2 x).
Then we construct categories as follows.
* The category , called the category of assemblies on , consists of assemblies on as its objects and maps of assemblies as its maps.
Identity maps and composition maps are the same as those of (the category of sets and functions).
* We call an assembly X a modest set on if X satisfies
∀ x,x' ∈ |X|, x ≠ x' ⇒x_X ∩x'_X = ∅.
The category , called the category of modest sets on , is the full subcategory of whose objects are modest sets on .
We need the above two conditions <ref> and <ref> to give realizers of the identities and composition maps.
Identities are realized by .
For maps f_1:Y Z realized by r_1 and f_2:X Y realized by r_2, we obtain r given by the condition <ref>, which realizes f_1 ∘ f_2.
Since all the classes of applicative structures introduced later satisfy these conditions, the conditions do not cause much trouble in this paper.
Intuitively, the category (and ) can be understood as the category of “-computable universe.”
For an assembly X=(|X|,_X) on , elements of x_X can be seen as “machine-level interpretations” of x ∈ |X|.
For a map f:X Y of , the realizer r of f can be seen as “machine implementation” of f, since r takes interpretations of x (that is, elements of x_X) as input and computes interpretations of f(x) (that is, elements of f(x)_Y).
§.§ PCAs and Cartesian closed categories
Since is the category of -computable universe, the structure of depends on the computational structure of .
When applicative structures belong to a specific class, specific categorical structures may be found on the categories of assemblies.
The best known such class is the class of PCAs, which induce Cartesian closed categories of assemblies.
Results in this subsection are from <cit.>.
A partial combinatory algebra (PCA) is a partial applicative structure which contains two special elements and such that:
* ∀ x,y ∈, x ↓, x y ↓ and x y = y;
* ∀ x,y,z ∈, x ↓, x y ↓ and x y z ≃ x z (y z).
When a PCA is a total applicative structure, we say is an -algebra.
The most fundamental example of PCAs is the untyped lambda calculus.
Suppose an infinite supply of variables x,y,z,…. Untyped lambda terms are terms constructed from the following six rules:
* (identity): x ⊢ x
* (application): from Γ⊢ M and Δ⊢ N, derive Γ , Δ⊢ MN
* (abstraction): from Γ , x ⊢ M, derive Γ⊢λ x.M
* (exchange): from Γ , x , y, Δ⊢ M, derive Γ , y, x, Δ⊢ M
* (contraction): from Γ , x , y ⊢ M, derive Γ , x ⊢ M[x/y]
* (weakening): from Γ⊢ M, derive Γ, x ⊢ M
Here, in the application rule, Γ and Δ are sequences of distinct variables and contain no common variables.
In the contraction rule, M[x/y] denotes the term obtained by substituting x for all free y in M.
In the weakening rule, x is a variable not contained in Γ.
Note that abstraction rules are only applied to the rightmost variables. In order to
apply the abstraction rule to a variable in a different position, we need to use the exchange rule several times and move the variable to the rightmost place.
We define the β-equivalence relation on lambda terms as the congruence generated by the relation (λ x.M)N ∼ M[N/x].
Untyped lambda terms modulo =_β form a PCA (actually an -algebra). The underlying set of the PCA consists of β-equivalence classes of untyped closed lambda terms (i.e., lambda terms with no free variables) and the application is defined as that of lambda terms. In this example, λ xyz.xz(yz) is the representative of and λ xy.x is the representative of .
The correspondence between PCAs and the lambda calculus is more than just an example. PCAs have an important property called the combinatory completeness, which gives interpretations of “computable functions” on by elements of itself.
First, we give the definition of polynomials over an applicative structure (not restricted to PCAs).
Let be a partial applicative structure.
A polynomial over is a syntactic expression generated by variables, elements of and the application of .
For two polynomials M and N over , M ≃ N means that
M[a_1/x_1,… ,a_n/x_n] ≃ N[a_1/x_1,… ,a_n/x_n]
holds in for any a_1,… ,a_n ∈, where { x_1,… ,x_n } contains all the variables of M and N.
Let be a PCA and M be a polynomial over .
For any variable x, there exists a polynomial M' such that the free variables of M' are the free variables of M excluding x and M' a ≃ M[a/x] holds for all a ∈.
We write such M' as x.M.
We define x.M by induction on the structure of M.
* x.x :=
* x.y := y (when x ≠ y)
* x.MN := ( x.M)( x.N)
For the special case of the above proposition, any closed lambda term is β-equivalent to some term constructed from λ xy.x and λ xyz.xz(yz) using applications.
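As an illustration (not part of the original text), the three clauses of the construction above can be run symbolically. In the following Python sketch, terms are nested tuples, the combinators are written as actual Python closures, and the identity is taken as s k k, so the defining property (λ*x.M) a = M[a/x] can be checked by evaluation:

```python
# Illustrative sketch of bracket abstraction lambda*x.M for a PCA:
#   lambda*x.x      = s k k        (behaves as the identity)
#   lambda*x.y      = k y          (y a variable different from x, or a constant)
#   lambda*x.(M N)  = s (lambda*x.M) (lambda*x.N)
# Terms are ('var', name), ('const', value) or ('app', M, N).
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

def abstract(x, M):
    """Return a term without free x such that (abstract(x, M)) a = M[a/x]."""
    tag = M[0]
    if tag == 'var' and M[1] == x:
        return ('app', ('app', ('const', S), ('const', K)), ('const', K))  # s k k
    if tag in ('var', 'const'):
        return ('app', ('const', K), M)
    return ('app', ('app', ('const', S), abstract(x, M[1])), abstract(x, M[2]))

def ev(M, env):
    """Evaluate a term in an environment mapping variable names to values."""
    tag = M[0]
    if tag == 'var':
        return env[M[1]]
    if tag == 'const':
        return M[1]
    return ev(M[1], env)(ev(M[2], env))

M = ('app', ('var', 'x'), ('const', 5))          # M = x 5
f = ev(abstract('x', M), {})                     # combinator with f a = a 5
print(f(lambda n: n + 1))                        # -> 6, i.e. M[(lambda n. n+1)/x]
```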
Using the combinatory completeness, we can give (and ) on a PCA the structure of Cartesian closed category (CCC).
When is a PCA, and are CCCs.
While this result is standard, we shall outline its proof for comparison with the parallel results on various classes of combinatory algebras to be developed in this paper.
First we prove the proposition for .
Let :=.
* By the combinatory completeness, has elements x.x and xyz.x(yz), which make satisfying the conditions <ref> and <ref> of Definition <ref>.
Thus is a category.
* For objects X and Y, the underlying set of the binary product X × Y is |X| × |Y|.
Realizers are defined as
(x,y)_X × Y := { t.tp q | p ∈x_X, q ∈y_Y }.
* For maps f:X X' realized by r_f and g:Y Y' realized by r_g, f × g is the function sending (x, y) to (f(x),g(y)).
A realizer for f × g does exist, namely u.u ( pqt.t(r_f p)(r_g q)).
* The underlying set of the terminal object 1 is the singleton {∗}.
Realizers are ∗_1 :=.
It is easy to see that this 1 satisfy the conditions of the terminal object.
* The projection π:X × Y X is the function sending (x,y) to x and has a realizer u.u( pq.p).
The projection π':X × Y Y is the function sending (x,y) to y and has a realizer u.u( pq.q).
It is easy to see that these π and π' satisfy the conditions of the projections of the Cartesian category.
* For objects X and Y, the underlying set of the exponential Y^X is _(X,Y).
Realizers are
f_Y^X := { r ∈ | r realizes f }.
* For maps f:X' X realized by r_f and g:Y Y' realized by r_g, the map g^f is the function sending a map h ∈_(X,Y) realized by r_h to g ∘ h ∘ f ∈_(X',Y') realized by v.r_g (r_h (r_f v)).
A realizer of g^f is uv.r_g (u (r_f v)).
* The adjunction Φ: (X × Y,Z) (X,Z^Y) is the function sending f:X × Y Z realized by r_f to the map Φ(f):x ↦ (y ↦ f(x,y)).
Φ(f) is realized by pq.r_f ( t.tpq).
For a map g:X Z^Y realized by r_g, Φ^-1(g):X × Y Z is the map sending (x,y) to g(x)(y).
Φ^-1(g) is realized by u.u r_g.
It is easy to see that this Φ satisfies the condition of the adjunction of the CCC.
Therefore, is a CCC.
Next we show that is a CCC.
Given modest sets X and Y on , we define the binary product X × Y in the same way as .
Here we can show that X × Y also is a modest set.
Suppose there is some a ∈ realizing different (x,y) and (x',y') of |X| × |Y|.
When we assume x ≠ x', though π (x,y) ≠π (x',y'), both sides have the same realizer ( u.u( pq.p))a.
It contradicts that X is a modest set.
The same contradiction is lead when y ≠ y'.
Therefore, different (x,y) and (x',y') do not have common realizers and X × Y is a modest set.
For modest sets X and Y on , we also define Y^X in the same way as .
We can show that Y^X also is a modest set.
Suppose there is some r realizing different f:X Y and g:X Y.
Take x ∈ |X| and a ∈x_X such that f(x) ≠ g(x).
Then r a is an element of both f(x)_Y and g(x)_Y.
However, it contradicts that Y is a modest set.
Therefore, Y^X is a modest set.
Hence, we can show that is a CCC by the same proof for .
In this proof, we make heavy use of the combinatory completeness of the PCA to give realizers for each assembly and map.
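For a concrete reading of some of these realizers (an analogy added for illustration, not a claim about the paper), the pairing realizer λ*t.tpq and the projection realizers can be written as Python closures:

```python
# Illustration: the realizers used for products, read as Python closures.
# pair p q packs two realizers; fst/snd recover them, mirroring
# <t.t p q>, <u.u(<p q.p>)> and <u.u(<p q.q>)> in the proof above.
pair = lambda p: lambda q: lambda t: t(p)(q)     # realizer of (x, y)
fst  = lambda u: u(lambda p: lambda q: p)        # realizer of the projection pi
snd  = lambda u: u(lambda p: lambda q: q)        # realizer of the projection pi'

rp, rq = "realizer of x", "realizer of y"
packed = pair(rp)(rq)
print(fst(packed), snd(packed))                  # -> realizer of x  realizer of y
```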
§.§ -algebras and symmetric monoidal closed categories
Given an applicative structure which has the different computational structure from PCAs, we obtain with a different categorical structure from CCCs.
In this subsection, we recall another well-known class of applicative structures called -algebras, which correspond to linear structures.
Results given in this subsection are from <cit.>.
A -algebra is a total applicative structure which contains three elements , and such that ∀ x, y, z ∈, x y z = x (y z), x y z = x z y and x = x.
Untyped linear lambda terms are untyped lambda terms constructed without using weakening and contraction rules (See Example <ref>).
That is, an untyped linear lambda term is an untyped lambda term whose each variable appears just once in the term.
Untyped closed linear lambda terms modulo form a -algebra.
Here λ xyz.x(yz), λ xyz.xzy and λ x.x are the representatives of , and respectively.
Let be a -algebra and M be a polynomial over .
For any variable x appearing exactly once in M, there exists a polynomial x.M such that the free variables of x.M are the free variables of M excluding x and ( x.M) a = M[a/x] for all a ∈.
We define x.M by induction on the structure of M.
* x.x :=
* x.MN := ( x.M) N (x ∈ FV(M))
M ( x.N) (x ∈ FV(N))
The combinatory completeness for a -algebra allows interpreting only linear lambda terms, not the whole of lambda terms.
Thus some realizers used in the proof of Proposition <ref> (such as u.u( pq.p)) may not exist in a -algebra.
For -algebras, the categories of assemblies carry categorical structure different from that of CCCs.
When is a -algebra, is a symmetric monoidal closed category (SMCC).
* For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|.
Realizers are defined as x ⊗ y_X ⊗ Y := { t.tp q | p ∈x_X, q ∈y_Y }.
* For maps f:X X' realized by r_f and g:Y Y' realized by r_g, the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y),
which is realized by u.u ( pqt.t(r_f p)(r_g q)).
* The underlying set of the unit object I is the singleton {∗}.
The realizer is ∗_I := {}.
* The right unitor ρ_X : X X ⊗ I is the function sending x to x ⊗∗, which is realized by p.( t.tp).
The inverse ρ^-1 is realized by u.u( pq.qp).
* Also we can take the left unitor λ_X : I ⊗ X X as the function (∗⊗ x)↦ x and the associator
α_XYZ : X ⊗ (Y ⊗ Z) (X ⊗ Y) ⊗ Z as x ⊗ (y ⊗ z) ↦ (x ⊗ y) ⊗ z.
* The symmetry σ_XY : X ⊗ Y Y ⊗ X is the function sending x ⊗ y to y ⊗ x, which is realized by u.u( pqt.tqp).
* For objects X and Y, the underlying set of the exponential[For an SMCC , the exponential is often denoted using the symbol satisfying
(X⊗ Y,Z)≅(X,Y Z).
However, here we use the reversed symbol satisfying (X⊗ Y,Z)≅(X,Z Y) to be consistent with the notation of monoidal bi-closed categories in Section <ref>.]
Y X is _(X,Y).
Realizers are f_Y X := { r ∈ | r realizes f }.
* For maps f:X' X realized by r_f and g:Y Y' realized by r_g, the map g f is the function sending h:X Y to g ∘ h ∘ f :X' Y'.
A realizer of g f is uv.r_g (u (r_f v)).
* The adjunction Φ sends a map f:X ⊗ Y Z to the map Φ(f):x ↦ (y ↦ f(x ⊗ y)).
It is easy to see that the above components satisfy the axioms of the SMCC.
The above proof is almost the same as the proof of on a PCA being a CCC.
However, when we prove that on a -algebra is an SMCC, we cannot use the same proof as for PCAs.
That is because for modest sets X and Y on a -algebra , X ⊗ Y given by the same way as is not generally a modest set.
The following proposition is proven with a modification to resolve the problem.
When is a -algebra, is an SMCC.
Let G: ↪ be the inclusion functor and F: be the left adjoint of G.
F is the functor sending an assembly X = (|X|,_X) to a modest set Z = (|X|/≈,_Z).
Here the relation “≈” is the transitive closure of the relation “∼” defined as x ∼ x' :⇔ x_X ∩x'_X ≠∅.
The realizers of z ∈ |Z| are defined as z_Z := ⋃_x ∈ zx_X.
F sends a map f of to the canonical map of , which is realized by realizers of f.
We define the tensor product ⊠ in as X ⊠ Y := F(GX ⊗ GY).
We can prove Proposition <ref> by the same proof as Proposition <ref>, replacing ⊗ with ⊠.
For more general results about constructing monoidal structures on reflexive full subcategories, see <cit.>.
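The reflector F can be illustrated on finite data. The following Python sketch (an illustration with an invented representation, not code from the paper) merges elements of an assembly along the transitive closure of the relation of sharing a realizer:

```python
# Illustrative sketch of F : Asm(A) -> Mod(A) on the level of data.
# An assembly is a dict mapping each element to its (finite) set of realizers;
# elements are merged along the transitive closure of "shares a realizer".
def quotient_to_modest(assembly):
    classes = []                                  # list of (elements, realizers)
    for x, rs in assembly.items():
        merged_elems, merged_rs = {x}, set(rs)
        rest = []
        for elems, reals in classes:
            if reals & merged_rs:                 # overlapping realizers: merge
                merged_elems |= elems
                merged_rs |= reals
            else:
                rest.append((elems, reals))
        classes = rest + [(merged_elems, merged_rs)]
    return classes

X = {'a': {1, 2}, 'b': {2, 3}, 'c': {7}}
print(quotient_to_modest(X))   # 'a' and 'b' are identified; 'c' stays separate
```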
While we define -algebras as a class of total applicative structures, we also can define “partial -algebras” naturally.
For a partial -algebra , we can see that:
* is not generally an SMCC;
* adding an extra element (which means “undefined”), naturally extends to a total -algebra _;
* is the full subcategory of _.
The same discussion is given in <cit.>.
§.§ Applicative morphisms
In this subsection, we recall the notion of applicative morphisms from <cit.>.
Let be a partial applicative structure satisfying:
* has an element such that ∀ x ∈, x ↓ and x = x;
* for any r_1, r_2 ∈, there exists r ∈ such that ∀ x ∈, r x ≃ r_1 (r_2 x).
Let be another partial applicative structure satisfying the same conditions.
An applicative morphism γ : is a total relation from to such that there exists a realizer r_γ∈ of γ satisfying that
∀ a, a' ∈, ∀ b ∈γ a, ∀ b' ∈γ a', r_γ b b' ∈γ (a a') whenever a a' ↓.
We say γ is functional when γ a is a singleton for each a ∈, and simply write γ a = b for γ a = { b }.
Our definition is slightly more general than the definition in <cit.> that makes sense only on PCAs.
We define applicative morphisms between applicative structures satisfying the conditions of Definition <ref>.
We assume these conditions to realize identity and composition morphisms.
By the condition <ref>, the identity applicative morphism id: can be realized by .
For applicative morphisms γ: and δ: 𝒞 realized by r_γ and r_δ, taking p ∈δ r_γ, the composition δ∘γ can be realized by r ∈ || such that ∀ b ∈ ||, r b ≃ r_δ (r_δ p b).
The condition <ref> gives such a realizer r.
In the sequel, for an applicative morphism γ, when we write an indexed element r_γ, it denotes a realizer of γ.
Also, for a ∈ and S,S' ⊆,
when we write a S, it denotes the set { as | s ∈ S } and we consider as↓ for all s ∈ S,
and when we write S S', it denotes the set
{ ss'| s ∈ S,s' ∈ S' } and we consider ss' ↓ for all s ∈ S and s' ∈ S'.
For instance, the condition that γ is an applicative morphism is denoted as
∃ r_γ∈, ∀ a,a' ∈, aa' ↓⇒ r_γ (γ a)(γ a') ⊆γ (aa').
From applicative morphisms, we can obtain functors between the categories of assemblies.
For an applicative morphism γ:, : is the functor sending an object (|X|, _X) to (|X|,γ_X) and sending a map to the same function.
For a map f in realized by r_f, f is realized by elements of r_γ (γ r_f).
It is obvious that satisfies (id)=id and (g ∘ f) = (g) ∘(f).
Next we recall the preorder relation ≼ between applicative morphisms.
For two applicative morphisms γ, δ:, γ≼δ iff there is r ∈ such that ∀ a ∈, r (γ a) ⊆δ a.
Using the conditions <ref> and <ref> of Definition <ref>, we can easily show that ≼ is a preorder.
By the preorder ≼, we can define adjunctions and comonads on applicative structures.
For two applicative morphisms γ: and δ:, γ is a right adjoint of δ iff δ∘γ≼ id_ and id_≼γ∘δ.
We write (δ⊣γ): for these settings.
An applicative morphism γ: is called comonadic when has two elements and such that ∀ a ∈, (γ a) ⊆{ a } and (γ a) ⊆γ (γ a).
For adjunctions of applicative morphisms, the following properties hold.
* An adjoint pair of applicative morphisms (δ⊣γ): gives rise to an adjoint pair (⊣) :.
* For an adjoint pair of applicative morphisms (δ⊣γ):, δ∘γ : is a comonadic applicative morphism.
* For a comonadic applicative morphism γ:, is a comonad on .
In Definition <ref>, an applicative morphism γ: gives rise to the functor :.
However, here we cannot generally obtain a functor : since x_X ∩x'_X = ∅ does not imply γ(x_X) ∩γ(x'_X) = ∅ and X may not be in .
However, for a comonadic applicative morphism γ:, can be restricted to the endofunctor on .
Indeed, for a modest set X on , if a ∈γ(x_X) ∩γ(x'_X) then
a is an element of x_X ∩x'_X and thus x = x' concludes.
Furthermore, this is a comonad on .
§.§ Linear combinatory algebras
In the previous subsection, we saw comonadic applicative morphisms give rise to comonads, and adjoint pairs of applicative morphisms give rise to adjoint pairs between categories of assemblies.
Using this construction, we can obtain linear exponential comonads and linear-non-linear models for the linear logic.
In this subsection, we recall notions of linear combinatory algebras (LCAs) from <cit.> and relational linear combinatory algebras (rLCAs) from <cit.>.
A linear combinatory algebra (LCA) consists of:
* a -algebra ;
* a functional comonadic applicative morphism (, , ) on ;
* an element ∈ such that ∀ x, y ∈, x ( y) = x;
* an element ∈ such that ∀ x, y ∈, x ( y) = x ( y)( y).
Just as we get comonads from comonadic applicative morphisms, from LCAs we get linear exponential comonads, which are categorical models of the linear exponential modality of linear logic.
Let be a symmetric monoidal category.
A linear exponential comonad consists of the following data.
* A symmetric monoidal comonad (!, δ, ϵ, m, m_I).
Here ! is an endofunctor on ,
δ_X :!X !!X and ϵ_X :!X X are monoidal natural transformations for the comultiplication and the counit.
The natural transformation m_X,Y:!X ⊗ !Y !(X ⊗ Y) and the map m_I:I !I make ! be a monoidal functor.
* Monoidal natural transformations e_X:!X I and d_X:!X !X ⊗ !X.
Here these components need satisfy the following conditions for each X.
* (!X,d_X,e_X) is a commutative comonoid in .
* e_X and d_X are coalgebra morphisms.
* δ_X is a comonoid morphism.
For an LCA (,), is a linear exponential comonad on the SMCC (or ).
LCAs can be generalized from functional applicative morphisms to non-functional ones, called rLCAs.
A relational linear combinatory algebra (rLCA) consists of:
* a -algebra ;
* a comonadic applicative morphism (, , ) on such that ≼ [ ,] and ≼ k_i.
Here [,] and k_i are applicative morphisms defined as [ ,] (x) := { t.ta a' | a,a' ∈ x } and k_i (x) := {}.
The next proposition shows the correspondence between LCAs, rLCAs and adjoint pairs between -algebras and PCAs.
* Let be a -algebra and be a PCA.
For an adjoint pair
(δ⊣γ):, (, δ∘γ) is an rLCA.
* Let (,) be an LCA.
The applicative structure _ = (, @) defined by x @ y := x ( y) is a PCA.
Furthermore, γ: _ defined as the identity function and δ :_ sending a ∈ to a form an adjoint pair (δ⊣γ):_.
From rLCAs, we also get linear exponential comonads.
Moreover, we get linear-non-linear models <cit.> on categories of assemblies or categories of modest sets.
A linear-non-linear model is a symmetric monoidal adjunction
(F ⊣ G): for an SMCC and a CCC .
For an rLCA (,), is a linear exponential comonad on the SMCC (or ).
Furthermore, the co-Kleisli adjunction between and _ (or and _) is symmetric monoidal.
Thus the adjunction forms a linear-non-linear model.
§ CONSTRUCTING NON-SYMMETRIC CATEGORICAL STRUCTURES
In section <ref>, we saw two known results that PCAs/-algebras induce CCCs/SMCCs as the categories of assemblies and the categories of modest sets.
It is natural to try to extend these results to other classes of applicative structures, and we introduce such new classes inducing certain “non-symmetric” categorical structures.
In this section we recall -algebras, -algebras and bi--algebras from <cit.>, and introduce a new class -algebras.
§.§ -algebras and closed multicategories
When we try to obtain some non-symmetric categorical structures on categories of assemblies, we will find a subtle problem.
In a -algebra , the -combinator expresses exchanging the order of arguments, and is the source of the symmetric structures of .
So one might guess that simply omitting would be sufficient for getting a non-symmetric categorical structure on .
However, this does not work well; and alone are too weak to give an interesting structure on .
For instance, if we want the internal hom functor (- -) on on a total applicative structure , we need a certain exchanging operation in even if the closed structure is not symmetric.
Take an object A of as |A| := and a_A := { a }.
For maps f,g:A A, to realize g f, we need a realizer r which satisfies ∀ a, a' ∈, r a a' = r_g (a (r_f a')).
This r acts as the exchanging to move the information of r_f from the left of a to the right of a.
(In a -algebra, such r exists as ( ( r_g))r_f.)
Therefore, when we want some non-symmetric categorical structures such as non-symmetric closed structures, we need to prepare some “more restricted exchanging” than the -combinator.
One way to resolve the problem is to supply not a combinator but the unary operation () for exchanging.
In this subsection, we introduce -algebras from <cit.>, which induce non-symmetric closed multicategories.
A total applicative structure is a -algebra iff it contains , and a for each a ∈, where a is an element of such that ∀ x ∈, a x = x a.
This () enable restricted exchanges than the -combinator.
Since in a -algebra, a satisfies the axiom of a, all -algebras are also -algebras.
The definition of -algebras may seem strange compared to the definitions of PCAs or -algebras.
However, the definition of -algebras is natural in that it corresponds well to the “planar" lambda calculus.
Untyped planar lambda terms are untyped lambda terms constructed without using weakening, contraction nor exchange rules (See Example <ref>).
That is, untyped planar lambda terms are untyped linear lambda terms such that for each subterm λ x.M, x is the rightmost free variable of M.
Untyped closed planar lambda terms modulo form a -algebra, which we call in this paper.
Here λ xyz.x(yz) and λ x.x are the representatives of and respectively. Given a representative M of a ∈ ||, λ x.xM is also a closed planar term and is the representative of a.
There are two different styles for the construction rules of planar lambda terms. In our definition, the abstraction rule is only allowed for the rightmost variable. Such a style is seen in <cit.>. On the other hand, there is also the definition that the abstraction rule is only allowed for the leftmost variable, as in <cit.>. Here we employ the former style because it preserves the planarity of terms under βη-conversions.
Let be a -algebra and M be a polynomial over .
For the rightmost variable x of M, if x appears exactly once in M, there exists a polynomial x.M such that the free variables of x.M are the free variables of M excluding x and ( x.M) a = M[a/x] for all a ∈.
We define x.M by induction on the structure of M.
* x.x :=
* x.MN := N ( x.M) (x ∈ FV(M))
M ( x.N) (x ∈ FV(N))
Note that for x.MN, x is the rightmost free variable in MN, and thus, if x is in FV(M), N has no free variables and N can be defined.
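As with the PCA case, this case analysis can be executed symbolically. The following Python sketch is illustrative only: terms are nested tuples, the combinators and the operation ⟨-⟩ are kept as symbolic tags, and x is assumed to be the rightmost free variable of M occurring exactly once.

```python
# Illustrative sketch of planar bracket abstraction:
#   lambda*x.x      = I
#   lambda*x.(M N)  = B <N> (lambda*x.M)   if x occurs in M (then N is closed)
#   lambda*x.(M N)  = B M (lambda*x.N)     if x occurs in N
# Terms: ('var', v), ('const', c), ('app', M, N); <N> is the tag ('ex', N).
def fv(M):
    if M[0] == 'var':
        return [M[1]]
    if M[0] == 'const':
        return []
    return fv(M[1]) + fv(M[2])

def planar_abstract(x, M):
    """Assumes x is the rightmost free variable of M and occurs exactly once."""
    if M == ('var', x):
        return ('const', 'I')
    F, A = M[1], M[2]
    if x in fv(F):                       # then A must be closed
        return ('app', ('app', ('const', 'B'), ('ex', A)), planar_abstract(x, F))
    return ('app', ('app', ('const', 'B'), F), planar_abstract(x, A))

M = ('app', ('var', 'x'), ('const', 'c'))        # M = x c
print(planar_abstract('x', M))                   # B <c> I, and (B <c> I) a = a c
```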
Then we show -algebras induce certain categorical structures on the categories of assemblies.
First we recall the definition of closed multicategories from <cit.>.
A multicategory consists of the following data:
* a collection Ob();
* for each n ≥ 0 and X_1 , X_2 , … , X_n , Y ∈ Ob(), a set (X_1 , … , X_n ; Y).
We often write f ∈ (X_1 , … , X_n ; Y) as f:X_1 , … , X_n Y;
* for each X ∈ Ob(), an element id_X ∈ (X ; X), called the identity map;
* for each n, m_1 , m_2 , … , m_n ∈ℕ and X^k_j , Y_k , Z (1 ≤ k ≤ n , 1 ≤ j ≤ m_k), a function
∘ : (Y_1 , … , Y_n ; Z) ×∏_k^n (X^k_1 , … , X^k_m_k ;Y_k) (X^1_1 , … ,X^1_m_1 ,X^2_1 , … , X^n_m_n ; Z)
called the composition. g ∘ (f_1 , … , f_n) denotes the composition of g ∈ (Y_1 , … , Y_n ; Z) and f_k ∈ (X^k_1 , … , X^k_m_k ; Y_k) (1 ≤ k ≤ n).
The compositions satisfy associativity and identity axioms.
A closed multicategory consists of the following data:
* a multicategory ;
* for each X_1 , X_2 , … , X_n , Y ∈ Ob(), an object (X_1 , X_2 , … , X_n ; Y), called the internal hom object;
* for each X_1 , … , X_n , Y ∈ Ob(), a map
ev_X_1 , … , X_n ; Y : (X_1 , … , X_n ; Y), X_1
, … , X_n → Y,
called the evaluation map such that ∀ Z_1 , Z_2 , … , Z_m ∈ Ob(), the function
ϕ_Z_1 , … , Z_m ; X_1 , … , X_n ; Y : ( Z_1 , … , Z_m ;
(X_1 , … , X_n ; Y) ) →(Z_1 , … , Z_m, X_1 , … , X_n ; Y)
sending f to ev_X_1 , … , X_n ; Y∘ (f, id_X_1 , … , id_X_n ) is invertible.
We write the inverse function Λ_Z_1 , … , Z_m ; X_1 , … , X_n ; Y.
Here our definition of closed multicategories is different from the original definition in <cit.> in that the order of the objects in the domains of maps is reversed.
This is for ease of reading, by matching the order of objects with the order of realizers.
When is a -algebra, and are closed multicategories.
Let :=.
Since have the -combinator and the -combinator, is a category.
First we give a bi-functor (- -):^op× as follows:
* For X,Y ∈, Y X is an assembly whose underlying set is _(X,Y) and f_Y X := { r |}.
* For two maps f:X' X and g:Y Y' in , (g f):(Y X) (Y' X') is the function sending h ∈_(X,Y) to g ∘ h ∘ f.
Given realizers r_f of f and r_g of g, (g f) is realized by
uv.r_g (u (r_f v)).
Thus, for any maps f and g in , (g f) certainly is a map of .
It is easy to see that (- -) preserves identities and compositions.
Next we give the structure of closed multicategory.
* For an object X ∈, (;X) := |X| and (;X) := X.
* For objects X_1, X_2, … , X_n, Y ∈ (n ≥ 1), we define the internal hom object
(X_1,… ,X_n ;Y) := (… ((Y X_n) X_n-1)… ) X_1
and (X_1,… ,X_n ;Y) is the underlying set of (X_1,… ,X_n ;Y).
We write f(x_1)(x_2)… (x_n) as f(x_1,… ,x_n) for f ∈(X_1,… ,X_n ;Y) and x_i ∈ |X_i|.
* Identity maps id_X ∈(X;X) (X ∈) are the same as identity maps of .
* Suppose maps g∈(Y_1,… ,Y_n;Z) and f_k ∈(X^k_1,… ,X^k_m_k;Y_k) (1 ≤ k ≤ n).
We define g ∘ (f_1,… ,f_n) as the function that receives
x^1_1,… , x^1_m_1 ,… , x^n_1 ,… , x^n_m_n
and returns g(f_1(x^1_1,… ,x^1_m_1) ,… , f_n(x^n_1,… ,x^n_m_n)).
Here when m_i = 0 for some 1 ≤ i ≤ n, we define
g ∘ (f_1,… ,f_n) by giving y_i ∈ |Y_i| pointed by f_i ∈(;Y_i) as the i-th argument of g.
Given realizers q ∈g_(Y_1,… ,Y_n;Z) and p_k ∈f_k_(X^k_1,… ,X^k_m_k;Y_k), by the combinatory completeness for -algebras, there is r ∈ such that
r a^1_1 … a^1_m_1… a^n_1 … a^n_m_n = q (p_1 a^1_1 … a^1_m_1)… (p_n a^n_1… a^n_m_n)
holds for any a^1_1,… ,a^n_m_n∈.
This r realizes g ∘ (f_1,… ,f_n) and thus g ∘ (f_1,… ,f_n) is in (X^1_1,… ,X^n_m_n;Z).
* The evaluation map ev_X_1 ,… , X_n ; Y : (X_1 ,… , X_n ; Y), X_1 ,… , X_n Y is given as the function that receives f,x_1,… ,x_n and returns f(x_1,… ,x_n), which is realized by .
Then, ϕ_Z_1,… ,Z_m;X_1,… ,X_n;Y is invertible as a function and for g ∈(Z_1,… ,Z_m,X_1,… ,X_n;Y), Λ (g) is indeed in (Z_1,… ,Z_m;(X_1,… ,X_n;Y)) since it is realized by realizers of g.
Therefore, is a closed multicategory.
For , we can use the same proof as for .
While we define -algebras as a class of total applicative structures, we also can define “partial -algebra” naturally.
For a partial -algebra , () is a total unary operation on such that ∀ a, x ∈, a x ≃ x a.
Unlike the case of partial -algebras as in Remark <ref>, the proof of Proposition <ref> is applicable to the case of partial -algebras.
§.§ -algebras and closed categories
In this subsection, we recall a class of applicative structures from <cit.>, which induce closed categories of assemblies and modest sets.
First we recall the definition of closed categories in <cit.>.
A closed category consists of the following data:
* a locally small category ;
* a functor (- -): ^op×, called the internal hom functor[While the internal hom object in the closed category is often written as (X,Y), [X,Y] or X Y, here we denote Y X to be consistent with other categorical structures in this paper.];
* an object I, called the unit object;
* a natural isomorphism i_X : (X I) X;
* an extranatural transformation j_X : I (X X);
* a transformation L_Y,Z^X : (Z Y) ((Z X) (Y X)) natural in Y and Z and extranatural in X,
such that the following axioms hold:
* ∀ X,Y ∈, L_Y,Y^X ∘ j_Y = j_(Y X);
* ∀ X,Y ∈, i_(Y X)∘ (id_(Y X) j_X) ∘ L_X,Y^X = id_(Y X);
* ∀ X,Y,Z,W ∈, the following diagram commutes:
@C=-25pt@R=30pt
(W Z) [dl]_L_Z,W^X [dr]^L_Z,W^Y
(W X) (Z X) [d]^-L_(Z X),(W X)^(Y X)
((W Y) (Z Y)) [dd]^-L_Y,W^X id
((W X) (Y X)) ((Z X) (Y X)) [drr]_-id L_Y,Z^X
((W X) (Y X)) (Z Y)
* ∀ X,Y ∈, L_X,Y^I ∘ (i_Y id_X) = id_(Y I) i_X;
* ∀ X,Y ∈, the function γ : (X,Y) (I , (Y X)) sending f:X Y to
(f id_X) ∘ j_X is invertible.
Closed categories are something like monoidal closed categories without tensor products. That is, categories with internal hom functors which are defined directly, not via tensor products and adjunctions.
The structures of closed categories are very similar to the structures of closed multicategories.
As shown in <cit.>, the category of closed categories is cat-equivalent to the category of closed multicategories with unit objects.
However, when we want to construct (non-symmetric) closed categories as categories of assemblies, it is not sufficient that the applicative structures are -algebras, since realizers for i^-1_X : X (X I) may not exist.
Thus, we add another condition to a -algebra to realize i^-1_X and obtain the following definition.
A -algebra is a -algebra which contains an element such that ∀ a ∈, a = a.
In -algebras, the role we expect to is to eliminate the “harmless" second argument, which does not necessarily eliminate .
Even without specifying , we can define the same class as -algebras.
For instance, for a -algebra , suppose there is ^×∈ such that ∀ a ∈, ^× a = a.
Then this is a -algebra since xy.^× x (y ) satisfies the axiom of .
Conversely, for a -algebra, we can take ^× := xy. x (y ) and thus -algebras and ^×()-algebras are the same classes.
in Example <ref> is a -algebra.
Since the planar lambda calculus is strongly normalizing, for any closed planar term M, there are some u and N such that M λ u.N. Then
(λ xyz.x(yz)) M (λ v.v) λ z.M((λ v.v)z)
λ z.(λ u.N)z
λ z.N[z/u]
=_α M
and thus λ xyz.x(yz) represents .
Since (which nicely corresponds to -algebras) is also a -algebra, one might suspect that -algebras and -algebras are the same class.
However, these two classes are different ones.
Later in Section <ref>, we will discuss an example that separates classes of -algebras and -algebras (Proposition <ref>).
The next example based on an ordered group is from <cit.>.
(However, here we reverse the direction of the implication symbol of the original example in <cit.>.)
Take an ordered group (G,·,e,≤).
Let T be a set of elements constructed grammatically as follows:
t ::= g | t t' (g ∈ G).
That is, T is a set of binary trees whose leaves are labeled by elements of G.
We further define a function | | :T G by induction: |g| := g and |t_2 t_1| := |t_2| · |t_1|^-1.
Let be the powerset of { t ∈ T | e ≤ |t| }.
Then we can get a -algebra by :
* For M,N ∈, MN := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }.
* := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T }.
Here joins from the left.
* := { t_1 t_1 | t_1 ∈ T }
* := { t_1 (t_2 t_2) t_1 | t_1,t_2 ∈ T }.
* For M ∈, M := { t_2 (t_2 t_1) | t_1 ∈ M, t_2 ∈ T }.
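This construction is easy to experiment with on a computer. The following Haskell fragment is our own illustrative sketch and not part of the formal development: it fixes G to be the additive group of integers, writes Node t_2 t_1 for the binary node (so that its value is |t_2| · |t_1|^-1, i.e. val t2 - val t1 in additive notation), and approximates the generally infinite sets of trees by finite lists, so it only models the valuation and the application, not the combinators themselves.

-- A minimal sketch (assumption: G is the additive group of integers,
-- and sets of trees are approximated by finite lists).
data Tree = Leaf Int            -- a leaf labeled by a group element
          | Node Tree Tree      -- the binary node of the example
          deriving (Eq, Show)

-- The valuation | | : T -> G, with |Node t2 t1| = |t2| * |t1|^(-1),
-- written here as subtraction.
val :: Tree -> Int
val (Leaf g)     = g
val (Node t2 t1) = val t2 - val t1

-- A tree may occur in a realizer when e <= |t|, here 0 <= val t.
admissible :: Tree -> Bool
admissible t = val t >= 0

-- The application M N := { t2 | exists t1 in N such that (Node t2 t1) is in M },
-- restricted to the finite fragments we can represent.
appT :: [Tree] -> [Tree] -> [Tree]
appT m n = [ t2 | Node t2 t1 <- m, t1' <- n, t1 == t1' ]

For example, appT [Node (Leaf 3) (Leaf 1)] [Leaf 1] evaluates to [Leaf 3].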
This example is based on Comod(G) introduced in <cit.>, which is a category of sets and relations equipped with G valued functions.
For any (not necessarily ordered) group G, Comod(G) is a pivotal category.
is a set of maps from the unit object to a reflexive object in (ordered) Comod(G).
The structure of depends on G.
For instance,
{ (t_3 t_2 t_1) (t_3 t_1 t_2) | t_1, t_2, t_3 ∈ T }
acts as the -combinator whenever G is Abelian.
The above later appears several times as examples of applicative structures of other classes (Example <ref>, <ref>, <ref>).
When is a -algebra, and are closed categories.
Let :=.
We give the same bi-functor (- -):^op× as in the proof of Proposition <ref>.
* We define the unit object I as ({∗}, _I), where ∗_I := {}.
* j_X is the function sending ∗ to id_X, which is realized by .
* i_X is the function sending (f:∗↦ x) to x, which is realized by . The inverse i_X^-1 is realized by .
* L_Y,Z^X is the function sending g to the function (f ↦ g ∘ f), which is realized by .
* γ is invertible. Indeed, γ^-1 is the function sending g:I (Y X) to the map g(∗):X Y.
It is easy to verify that j, i and L have naturality and satisfy the axioms of the closed category.
For , we can use the same proof for .
While we define -algebras as a class of total applicative structures, we also can define “partial -algebra” naturally.
For a partial -algebra , satisfies that ∀ a ∈, a ↓ and a = a.
Proposition <ref> also holds in the case of partial -algebras.
§.§ -algebras and monoidal closed categories
In the previous two subsections, we obtain closed multicategories and closed categories as categories of assemblies.
Next we further attempt to obtain a richer categorical structure, the (non-symmetric) tensor products, by categorical realizability.
First, let us consider whether we can realize products by a -algebra in the same way as PCAs and -algebras.
Even when we use a -algebra, we can take the object X ⊗ Y in the same way as PCAs and -algebras (See the proofs of Proposition <ref> and <ref>).
That is, for a -algebra , we take an assembly X ⊗ Y that the underlying set is |X| × |Y| and realizers are
x ⊗ y_X ⊗ Y := { t.tp q | p ∈x_X, q ∈y_Y }.
We also take the unit object in in the same way as -algebras: |I|:= {∗} and ∗_I := {}.
Then is this a monoidal category?
Now let us assume it is.
Take an assembly
A:= (,_A), where a_A := { a }.
Then since is a monoidal category, the unitor
A I ⊗ A has a realizer r, which satisfies that r a = t.t a.
Taking an element C := xyz.r x ( ( ( w.r y ( w))))z, this C satisfies the axiom of the -combinator and makes a -algebra.
In summary, when we attempt to make a non-symmetric monoidal category using a -algebra , it follows that is actually a -algebra and becomes an SMCC.
Therefore, we need some major modification on the definition of realizers of tensor products in to make a non-symmetric monoidal category.
One way to solve this problem is supposing a combinator expressing the “pairing" operation.
And we define realizers for tensor products as
x ⊗ y_X ⊗ Y := { pq | p ∈x_X, q ∈y_Y }.
Since pq itself cannot separate the data of p and q from pq, we need another combinator to decompose pq.
A -algebra is a -algebra which contains and such that ∀ x, y, z ∈, x ( y z) = x y z.
A fundamental example of -algebras is given as the untyped planar lambda calculus with tensor products.
Add the following term construction rules to the planar lambda calculus (Example <ref>).
Γ⊢ M
Δ⊢ N
(pair construction)
Γ , Δ⊢ M ⊗ N
Γ⊢ M
Δ, x, y ⊢ N
(pair deconstruction)
Δ, Γ⊢x ⊗ yMN
We define a relation ∼ on planar terms as the congruence of the following relations.
* (λ x.M)N ∼ M[N/x]
* M ∼λ x.Mx
* (x_1 ⊗ x_2M_1 ⊗ M_2N) ∼ N[M_1 /x_1][M_2 /x_2]
* M ∼ (x ⊗ yMx ⊗ y)
Let the equational relation be the reflexive, symmetric and transitive closure of ∼.
Closed terms modulo form a -algebra, which we call in this paper.
Here λ xyz.x(yz), λ tu.(x ⊗ yutxy) and λ xy. (x ⊗ y) are the representatives of , and respectively.
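The side conditions of the term construction rules above (each variable is used exactly once, in the order prescribed by the contexts) can be checked mechanically. The following Haskell sketch is our own illustration, with hypothetical constructor names and under the assumption that bound variables are pairwise distinct; it computes the ordered list of free variables a term demands, returning Nothing when the planarity discipline is violated.

import Data.List (isSuffixOf)

-- Hypothetical syntax for planar lambda terms with tensor products.
data Term
  = Var String
  | Lam String Term                     -- \x.M, x the rightmost free variable of M
  | App Term Term                       -- M N
  | Tensor Term Term                    -- M (tensor) N
  | LetTensor String String Term Term   -- "let x (tensor) y = M in N"
  deriving Show

-- Free variables in the left-to-right order imposed by the contexts,
-- or Nothing if planarity is violated (bound variables assumed distinct).
fv :: Term -> Maybe [String]
fv (Var x)      = Just [x]
fv (App m n)    = (++) <$> fv m <*> fv n
fv (Tensor m n) = (++) <$> fv m <*> fv n
fv (Lam x m)    = do
  vs <- fv m
  if not (null vs) && last vs == x then Just (init vs) else Nothing
fv (LetTensor x y m n) = do
  gamma   <- fv m                       -- Gamma |- M
  deltaXY <- fv n                       -- Delta, x, y |- N
  if [x, y] `isSuffixOf` deltaXY
    then Just (take (length deltaXY - 2) deltaXY ++ gamma)   -- Delta, Gamma
    else Nothing

-- A closed planar term has no remaining free variables.
isClosedPlanar :: Term -> Bool
isClosedPlanar t = fv t == Just []

For instance, isClosedPlanar (Lam "x" (Lam "y" (Tensor (Var "x") (Var "y")))) returns True, while the exchanging term Lam "x" (Lam "y" (App (Var "y") (Var "x"))) is rejected, reflecting the absence of the exchange rule.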
While the planar lambda calculus of Example <ref> (which does not have tensor products) does not need the η-equality to be a -algebra, the planar lambda calculus with tensor products of Example <ref> needs the βη-equality to use λ xyz.x(yz) as .
Indeed, (λ xyz.x(yz)) ((λ u.u) ⊗ (λ v.v)) (λ w.w) is βη-equal to (λ u.u) ⊗ (λ v.v) but not β-equal to it.
When constructing linear lambda terms with tensor products, we often suppose a constant ⋆ for the unit ( <cit.>). For the above example, we can add the following rules to the term construction rules.
(star introduction)
⊢⋆
⊢ M
Γ⊢ N
(star elimination)
Γ⊢⋆MN
However, for our aim that constructing monoidal categories by categorical realizability, this ⋆ is not needed since we can use as the realizer of the unit instead of ⋆.
-algebras correspond to the lambda calculus with tensor products, which has components other than applications, unlike the ordinary/linear/planar lambda calculus.
Thus, we cannot state the combinatory completeness property for -algebras in the same way we have seen in previous sections.
Here we only show the special case of the combinatory completeness property for -algebras.
Any closed term M in is βη-equivalent to some term M that is constructed from := λ xyz.x(yz), := λ x.x, := λ tu.(x ⊗ yutxy) and := λ xy. x ⊗ y using the application and the unary operation ():M ↦λ x.xM.
We inductively define the function .
* x := x
* MN := M N
* M ⊗ N := M N
* x ⊗ yMN := λ xy.N M
* λ xy.M := λ x. λ y.M
* λ x.x :=
* λ x.MN := N λ x.M (x ∈ FV(M))
M λ x.N (x ∈ FV(N))
* λ x.M ⊗ N := N (λ x.M ) (x ∈ FV(M))
( M ) λ x.N (x ∈ FV(N))
* λ x.(y ⊗ zMN) := (λ yz.N ) λ x.M (x ∈ FV(M))
M (λ xyz.N ) (x ∈ FV(N))
It is easy to see that M M for any closed term M.
Next we give an example of -algebra similar to Example <ref>.
Take an ordered group (G,·,e,≤).
Let T' be a set whose elements are constructed grammatically as follows:
t ::= g | t t' | t ⊗ t' (g ∈ G).
That is, T' is a set of binary trees whose leaves are labeled by elements of G, and whose nodes are two colored by and ⊗.
We further define a function | | :T' G by induction: |g| := g, |t_2 t_1| := |t_2| · |t_1|^-1 and |t_1 ⊗ t_2| := |t_1| · |t_2|.
Let |'| be the powerset of { t ∈ T' | e ≤ |t| }.
Then we can get a -algebra ' by |'|:
* For M,N ∈ |'|, MN := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }.
* := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T' }.
* := { t_1 t_1 | t_1 ∈ T' }.
* := { t_1 (t_2 t_2) t_1 | t_1,t_2 ∈ T' }.
* := { t_3 (t_1 ⊗ t_2) (t_3 t_2 t_1) | t_1,t_2,t_3 ∈ T' }.
* := { (t_1 ⊗ t_2) t_2 t_1 | t_1,t_2 ∈ T' }.
* For M ∈ |'|, M := { t_2 (t_2 t_1) | t_1 ∈ M, t_2 ∈ T' }.
In the above example, we prepare ⊗ in the construction of T' to express and .
However, in fact, in Example <ref> is already a -algebra even without ⊗.
In of Example <ref>, we have -combinator and -combinator as
* := { t_1 (e t_2) t_2 t_1 | t_1,t_2 ∈ T };
* := { t_3 (t_1 (e t_2)) (t_3 t_2 t_1) | t_1,t_2,t_3 ∈ T }.
This is a less standard example in that it uses t_1 (e t_2) in the role of t_1 ⊗ t_2.
For another example, just as we can construct an LCA (and the -algebra based on it) from a “reflexive object” (See <cit.> and <cit.>), we can get -algebras by appropriate settings.
Let (,⊗, I) be a monoidal closed category and
Φ:(- ⊗ X, -) (-,- X) be the adjunction.
Suppose an object V that has:
* an isomorphism r:(V V) V and s := r^-1;
* a retraction t: (V ⊗ V) ◃ V:u, that is, maps t:V ⊗ V V and u:V V ⊗ V such that u ∘ t =id_V ⊗ V.
Then the set of maps (I,V) is a -algebra.
* For maps M,N:I V, the application is defined as
I unitor I ⊗ I (s ∘ M) ⊗ N (V V) ⊗ V ev V.
* Take a map f:(V ⊗ V) ⊗ V V as
(V ⊗ V) ⊗ V associator V ⊗ (V ⊗ V) s ⊗ (ev ∘ (s ⊗ id)) (V V) ⊗ V ev V.
The -combinator is given as r ∘Φ (r ∘Φ (r ∘Φ(f)) ∘λ_V), where λ_V:I ⊗ V V is the unitor.
* The -combinator is r ∘Φ(λ_V).
* The -combinator given above satisfies the axiom of the -combinator.
Here we use
r ∘ s =id_V, and thus we need to assume r is an isomorphism (not merely a retraction).
* Take a map g:V ⊗ V V as
V ⊗ V s ⊗ u (V V) ⊗ (V ⊗ V) ev ∘ ( associator) V ⊗ V ev ∘ (s ⊗ id) V.
The -combinator is r ∘Φ (r ∘Φ(g) ∘λ_V).
* The -combinator is r ∘Φ (r ∘Φ(t) ∘λ_V).
* Given arbitrary M:I V, M is r ∘Φ(ev ∘ (s ⊗ M) ∘ρ_V ∘λ_V). Here ρ_V :V V ⊗ I is the unitor.
We will use the above -algebra later in the last of Section <ref>.
Next we show that -algebras induce monoidal closed categories.
When is a -algebra, is a monoidal closed category.
Since is also a -algebra, we can use the combinatory completeness for the planar lambda calculus.
* For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|. Realizers are defined as
x ⊗ y_X ⊗ Y := { p q | p ∈x_X, q ∈y_Y }.
* For f: X X' and g:Y Y', the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y).
A realizer for f ⊗ g is ( pq. (r_f p)(r_g q)).
* The underlying set of the unit object I is a singleton {∗}. The realizer is ∗_I := {}.
* The left unitor λ_X: I ⊗ X X sends ∗⊗ x to x, whose realizer is .
A realizer of λ_X^-1 is .
* The right unitor ρ_X: X X ⊗ I sends x to x ⊗∗, whose realizer is p. p.
A realizer of ρ_X^-1 is .
* The associator α_XYZ:(X ⊗ Y) ⊗ Z X ⊗ (Y ⊗ Z) sends (x ⊗ y) ⊗ z to x ⊗ (y ⊗ z).
A realizer of α_XYZ is ( ( pqr. p ( qr))).
A realizer of α_XYZ^-1 is ( pu. (M p) u), where M := pqr. ( pq) r.
* For objects X and Y, the underlying set of Y X is _(X,Y). Realizers are defined as
f_Y X := { r |}.
* For f: X' X and g:Y Y', g f is the function sending a map h : X Y to g ∘ h ∘ f : X' Y'.
A realizer for g f is uv. r_g (u (r_f v)).
* The evaluation map ev :(Y X) ⊗ X Y sends f ⊗ x to f(x), which is realized by .
* For any map f:Z ⊗ X Y, there exists a unique map g:Z (Y X) which satisfies
ev ∘ (g ⊗ id_X) = f. This g is given as the function sending z to the function x ↦ f(z ⊗ x), which is realized by rp. r_f ( rp).
Similar to the case of -algebras (Propositions <ref> and <ref>), we cannot use the same proof as in Proposition <ref> for the case of .
We prove that on a -algebra is a monoidal closed category by the same modification used in the proof of Proposition <ref>.
That is, we take the inclusion functor G:↪ and the left adjoint F:, and define the tensor product ⊠ in as X ⊠ Y := F(GX ⊗ GY).
When is a -algebra, is a monoidal closed category.
For functors given by applicative morphisms between -algebras, the next properties hold.
Let _1 and _2 be -algebras and γ :_1 _2 is an applicative morphism. Then :_1_2 is a lax monoidal functor.
A realizer for I_2 (I_1) is in the set u.u (γ(_1)).
A realizer for ( X) ⊗_2 ( Y) (X ⊗_1 Y) is in _2 ( pq.r_γ (r_γ (γ_1) p) q).
For -algebras _1 and _2 and an adjoint pair
(δ⊣γ) : _1 _2, the adjunction (⊣):_1_2 is monoidal.
We show that the left adjoint is strong monoidal.
Since is lax monoidal by the previous proposition, it is sufficient to show that there are realizers for maps I_2 I_1 and (X ⊗_2 Y) X ⊗_1 Y.
A realizer for the former is
x. (r_δ (δ ( y.y (γ_1))) x).
A realizer for the latter is
z. (r_δ (δ ( ( uv.r_γ (r_γ (γ) ( u) ) ( v)))) z).
Here ∈ |_1| is an element such that ∀ x ∈ |_1|, (δ (γ x)) =x and ∈ |_2| is an element such that ∀ y ∈ |_2|, y =γ (δ y), that are obtained by the assumption that γ and δ form an adjoint pair.
§.§ Bi--algebras and monoidal bi-closed categories
Let us consider once again why non-symmetric tensor products in categories of assemblies cannot be constructed from -algebras,
from the viewpoint of the “polymorphic encoding.”
In the second-order linear logic, a tensor product X ⊗ Y can be interpreted as
∀α . (X Y α) α. (This interpretation is seen in <cit.>, for instance.)
This formula (X Y α) α corresponds to the type inhabited by λ t.txy in the typed linear lambda calculus.
This correspondence is connected to the fact that (in a PCA or a -algebra) a realizer of x⊗ y ∈ |X ⊗ Y| is t.tp q for p ∈x_X and q ∈y_Y.
What matters here is that the interpretation X ⊗ Y ≅∀α . (X Y α) α holds only when the tensor product is symmetric.
Whereas, for the non-symmetric cases, X ⊗ Y is expressed as ∀α. (α Y X) α or ∀α. α (Y X α).
Here we need to distinguish two sorts of implications and .
In an applicative structure like a -algebra, we cannot distinguish them since we only have one sort of application.
Conversely, providing some structure in an applicative structure that allows to distinguish these two implications, we may be able to construct non-symmetric tensor products in .
From this viewpoint, we introduced bi--algebras in <cit.>.
In this subsection, we recall bi--algebras from <cit.>.
First we recall a variant of the lambda calculus, which is an example of an applicative structure with two sorts of applications.
Bi-planar lambda terms are constructed by the following rules:
(identity)
x ⊢ x
Γ, x ⊢ M
(right abstraction)
Γ⊢xM
x, Γ⊢ M
(left abstraction)
Γ⊢xM
Γ⊢ M
Δ⊢ N
(right application)
Γ, Δ⊢ M N
Δ⊢ N
Γ⊢ M
(left application)
Δ, Γ⊢ N M
Note that there are no weakening, contraction, or exchange rules here.
For the sake of clarity, we will classify right and left by red and blue color.
That is, we write each of them as M N, xM, N M and xM.
We define a relation _β on bi-planar lambda terms as the congruence of the following relations:
* (right β-reduction) xM N _β M[N/x]
* (left β-reduction) N xM_β M[N/x]
The bi-planar lambda calculus consists of bi-planar lambda terms and the reflexive, symmetric and transitive closure of _β as the equational relation .
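The bi-planar typing discipline can be transcribed in the same spirit as the planar one. The sketch below is our own illustration, with hypothetical constructor names and bound variables assumed pairwise distinct: right abstraction must discharge the rightmost variable of the context, left abstraction the leftmost one, and the two applications concatenate contexts in the orders given by the rules above.

-- Hypothetical syntax for bi-planar lambda terms.
data BTerm
  = BVar String
  | RLam String BTerm    -- right abstraction: discharges the rightmost variable
  | LLam String BTerm    -- left abstraction: discharges the leftmost variable
  | RApp BTerm BTerm     -- right application: Gamma |- M, Delta |- N  =>  Gamma, Delta
  | LApp BTerm BTerm     -- left application:  Delta |- N, Gamma |- M  =>  Delta, Gamma
  deriving Show

-- The ordered context a term requires, or Nothing if the rules are violated.
context :: BTerm -> Maybe [String]
context (BVar x)   = Just [x]
context (RApp m n) = (++) <$> context m <*> context n
context (LApp n m) = (++) <$> context n <*> context m
context (RLam x m) = do
  vs <- context m
  if not (null vs) && last vs == x then Just (init vs) else Nothing
context (LLam x m) = do
  vs <- context m
  if not (null vs) && head vs == x then Just (tail vs) else Nothing

-- Closed bi-planar terms are exactly those with empty context.
isClosedBiPlanar :: BTerm -> Bool
isClosedBiPlanar t = context t == Just []

For example, RLam "x" (RLam "y" (RApp (BVar "x") (BVar "y"))) is accepted, while RLam "x" (RLam "y" (RApp (BVar "y") (BVar "x"))) is rejected, again reflecting the absence of exchange.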
Basic properties about the β-reduction _β, such as the confluence and the strongly normalizing property, can be shown in the same way as the proof for the linear lambda calculus.
The bi-planar lambda calculus is not essentially a new concept, since it often appears as the calculus corresponding to the Lambek calculus under the Curry-Howard correspondence ( <cit.>).
However, note that unlike the calculus corresponding to the Lambek calculus, the bi-planar lambda calculus is based on untyped setting.
The reason why we use a less-standard notation is to shorten the length of terms and to make them easier to read.
Then we define a class of applicative structures which we call bi--algebras.
A total applicative structure =(,) is a bi--algebra iff there is an additional total binary operation on and contains several special elements:
* ∈ such that ∀ x,y,z ∈, (( x) y) z = x (y z).
* ∈ such that ∀ x,y,z ∈, z (y (x )) = (z y) x.
* ∈ such that ∀ x,y,z ∈, x (( y) z) = (x y) z.
* ∈ such that ∀ x,y,z ∈, (z (y )) x = z (y x).
* ∈ such that ∀ x ∈, x = x.
* ∈ such that ∀ x ∈, x = x.
* For each a ∈, a∈ such that ∀ x ∈, (a) x = x a.
* For each a ∈, a∈ such that ∀ x ∈, x (a) = a x.
We call and as right application and left application respectively.
We often write
= (,,) for a bi--algebra =(,) with the left application .
In the sequel, we use as a left-associative operation and often omit unnecessary parentheses, while we do not omit parentheses for .
For instance, (u v w) ((x y) z) denotes ((u v) w) ((x y) z).
The definition of bi--algebras is intended having a good correspondence with the bi-planar lambda calculus.
Untyped closed bi-planar lambda terms modulo form a bi--algebra, which we call in this paper.
We give a few examples of representatives: xyzx (y z) represents ; yxzx (y z) represents ; xM x represents M.
Let = (,,) be a bi--algebra.
A polynomial over is defined as a syntactic expression generated by variables, elements of and the applications and .
For a polynomial M over and the rightmost variable x of M, if x appears exactly once in M, there exists a polynomial M' such that the free variables of M' are the free variables of M excluding x and M' a = M[a/x] for all a ∈. We write such M' as xM.
Also, for a polynomial N over and the leftmost variable y of N, if y appears exactly once in N, there exists a polynomial N' such that the free variables of N' are the free variables of N excluding y and a N' = N[a/y] for all a ∈. We write such N' as yN.
We define xM by induction on the structure of M.
* xx :=.
* xM N := ( N)xM (x ∈ FV(M))
M xN (x ∈ FV(N))
Note that in case x ∈ FV(M), N has no variables since x is the rightmost free variable in M N.
* xN M :=
N (xM) (x ∈ FV(M))
(M) xN (x ∈ FV(N))
Note that in case x ∈ FV(N), M has no variables since x is the rightmost free variable in N M.
The case of the left abstraction yN is given in the same way, with all the left and right constructs reversed.
Next we give another example of bi--algebra which is introduced in <cit.> and similar to Example <ref>.
Take an ordered group (G,·,e,≤).
Let T” be a set whose elements are constructed grammatically as follows:
t ::= g | t t' | t t' (g ∈ G).
That is, T” is a set of binary trees whose leaves are labeled by elements of G, and whose nodes are two colored by and .
We further define a function | | :T” G by induction: |g| := g, |t_2 t_1| := |t_2| · |t_1|^-1 and |t_1 t_2| := |t_1|^-1· |t_2|.
Let |”| be the powerset of { t ∈ T”| e ≤ |t| }.
Then we can get a bi--algebra ” by |”|:
* For M,N ∈ |”|, M N := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }.
* For M,N ∈ |”|, N M := { t_2 |∃ t_1 ∈ N, (t_1 t_2) ∈ M }.
* := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T”}, dual for .
* := { ((t_1 t_2) t_3) (t_1 (t_2 t_3)) | t_1,t_2,t_3 ∈ T”}, dual for .
* := { t_1 t_1 | t_1 ∈ T”}, dual for .
* For M ∈ |”|, M := { t_2 t_1 | (t_1 t_2) ∈ M }, dual for M.
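Continuing the integer-valued sketch given after the earlier one-colored example (again our own illustration, with G taken to be the additive group of integers and finite lists in place of the generally infinite sets), the two node colors and the two applications can be transcribed as follows.

-- Trees with two node colors.
data Tree2 = Leaf2 Int
           | NodeR Tree2 Tree2    -- the first ("right") node color
           | NodeL Tree2 Tree2    -- the second ("left") node color
           deriving (Eq, Show)

-- |Leaf2 g| = g, |NodeR t2 t1| = |t2| * |t1|^(-1), |NodeL t1 t2| = |t1|^(-1) * |t2|,
-- written additively below (the group is commutative here).
val2 :: Tree2 -> Int
val2 (Leaf2 g)     = g
val2 (NodeR t2 t1) = val2 t2 - val2 t1
val2 (NodeL t1 t2) = val2 t2 - val2 t1

-- Right application: M N := { t2 | exists t1 in N with (NodeR t2 t1) in M }.
rApp :: [Tree2] -> [Tree2] -> [Tree2]
rApp m n = [ t2 | NodeR t2 t1 <- m, t1' <- n, t1 == t1' ]

-- Left application: N M := { t2 | exists t1 in N with (NodeL t1 t2) in M }.
lApp :: [Tree2] -> [Tree2] -> [Tree2]
lApp n m = [ t2 | NodeL t1 t2 <- m, t1' <- n, t1 == t1' ]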
In the above example, we prepare in the construction of T” to express the left application.
However, in fact, in Example <ref> is a bi--algebra even without preparing .
Let T be the same set in Example <ref>.
For t, t' ∈ T, we define t t' ∈ T as (e t) (e t').
Then is a bi--algebra, whose components are taken in the same way as Example <ref>.
Next we give some basic properties of bi--algebras.
* Any bi--algebra is also a -algebra.
* Any -algebra is also a bi--algebra whose left and right applications coincide.
* When =(,) is a bi--algebra, the left application is unique up to isomorphism. That is, when both (,,_1) and (,,_2) are bi--algebras, _1 = (,_1) and _2 = (,_2) are isomorphic as applicative structures, where x _i y := y _i x.
* Let = (,,) be a bi--algebra and take an applicative structure
' := (,') by x ' y := y x.
Then is a -algebra iff ' is a -algebra.
Moreover, in such a case, and ' are isomorphic as applicative structures.
* , , , , and a are given as , , xyx (y ), xyx y,
xytt x y and xx a respectively.
* For a -algebra (, ), (, , ) is a bi--algebra when we take y x := x y. Here = :=, = :=, = := and a = a := a.
* By the combinatory completeness of _2, we have L := yxy _2 x such that L y x = y _2 x = x _2 y.
By the combinatory completeness of _1, we have an element
r := xyL y x, which satisfies r _1 x _1 y = L y x = x _2 y.
This r realizes the applicative morphism i_1 : _1 _2 given as the identity function on .
Similarly we have the inverse applicative morphism i_2 : _2 _1 given as the identity function.
i_1 and i_2 are the isomorphisms between _1 and _2.
* Suppose that is a -algebra, that is, there is some element ∈ such that
x y z = x z y.
Take an element := xyz M z y x, where M := yzxy (z x).
, and make ' a -algebra.
Similarly, when we suppose ' is a -algebra, is also a -algebra.
Furthermore, when we suppose (and also ') is a -algebra, we have an element
r := yxy x, which realizes the applicative morphism i : ' given as the identity function.
Similarly we have the inverse applicative morphism i' : ' given as the identity function, and thus ≅'.
By (<ref>) and (<ref>) of the above proposition, the class of bi--algebras is the class of applicative structures in between -algebras and -algebras.
We named the “-combinator" of -algebras for the reason that it is represented as xyx y in a bi--algebra, which gives the “left" application of two arguments.
Although xyx y always acts as a -combinator in a bi--algebra, it is not the only way to take a -combinator.
Indeed, in Example <ref>, has a -combinator as
xyx y = ()
= { t_2 ((e t_1) (e t_2)) t_1 | t_1,t_2 ∈ T }
which is different from the -combinator taken in Example <ref>.
Since a bi--algebra is also a -algebra, we know that (and ) is a monoidal closed category.
Moreover, we can show that the categories of assemblies on bi--algebras are not just monoidal closed categories, but monoidal bi-closed categories, having richer categorical structures.
A monoidal bi-closed category is a monoidal category with two sorts of adjunction (X ⊗ Y,Z) ≅(X,Z Y) and (X ⊗ Y,Z) ≅(Y,X Z).
When =(,) is a bi--algebra, is a monoidal bi-closed category.
Let be the left application of .
* A realizer for identities is .
* A realizer for the composition of f:X Y and g:Y Z is r_g r_f.
* For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|. Realizers are defined as
x ⊗ y := {tt p q| p ∈x_X, q ∈y_Y }.
* For f: X X' and g:Y Y', the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y).
A realizer for f ⊗ g is upqtt (r_f p)
(r_g q) u.
* The underlying set of the unit object I is a singleton {∗}. The realizer is ∗_I := {}.
* The left unitor λ_X: I ⊗ X X sends ∗⊗ x to x, whose realizer is p p.
A realizer of λ_X^-1 is ptt p.
* The right unitor ρ_X: X X ⊗ I sends x to x ⊗∗, whose realizer is ptt p.
A realizer of ρ_X^-1 is upvp (v ) u.
* The associator α_XYZ:(X ⊗ Y) ⊗ Z X ⊗ (Y ⊗ Z) sends (x ⊗ y) ⊗ z to x ⊗ (y ⊗ z).
α_XYZ is realized by uvM v u,
where
M := pqrtt p t't' q r.
A realizer of α_XYZ^-1 is upvqrN v u, where
N := t(t t't' p q) r.
* For objects X and Y, the underlying set of Y X is _(X,Y). Realizers are
f_Y X := { r |}.
* For f: X' X and g:Y Y', g f is the function sending a map h : X Y to the map g ∘ h ∘ f : X' Y'.
A realizer for g f is uvr_g (u (r_f v)).
* The evaluation map ev :(Y X) ⊗ X Y sends f ⊗ x to f(x), which is realized by u u.
* For any map f:Z ⊗ X Y, there exists a unique map g:Z (Y X) which satisfies
ev ∘ (g ⊗ id_X) = f. This g is given as the function sending z to the function x ↦ f(z ⊗ x), which is realized by qpr_f tt q p.
* For objects X and Y, the underlying set of X Y is _(X,Y). Realizers are
f_X Y := { r |}.
This set is not empty since (r_f) is in the set for a realizer r_f of f.
* For f: X' X and g:Y Y', f g is the function sending a map h : X Y to the map g ∘ h ∘ f : X' Y'. A realizer for f g is uvr_g ((r_f v) u).
* The evaluation map ev' : X ⊗ (X Y) Y sends x ⊗ f to f(x), which is realized by upvp v u.
* For any map f:X ⊗ Z Y, there exists a unique map g:Z (X Y) which satisfies
ev' ∘ (id_X ⊗ g) = f. This g is given as the function sending z to the function x ↦ f(x ⊗ z), which is realized by qpr_f tt p q.
For the category of modest sets, we use the same discussion as Proposition <ref>.
That is, for the functor F that is left adjoint of the inclusion functor G:, we define tensor products ⊠ in as X ⊠ Y := F(GX ⊗ GY).
When =(,) is a bi--algebra, is a monoidal bi-closed category.
In Proposition <ref>, is the category of assemblies on the applicative structure (, ).
Even if we employ the left application to construct the category of assemblies, we can obtain a category with the same structures as , as the next proposition says.
Let =(,,) be a bi--algebra.
When we take an applicative structure '=(,') by x ' y := y x, and are isomorphic as categories.
Moreover, is monoidally isomorphic to with the reversed tensor products.
That is, there is an isomorphism R: such that R(I) ≅ I', R^-1(I') ≅ I,
R(X ⊗ Y) ≅ RY ⊗' RX and R^-1(X' ⊗' Y') ≅ R^-1Y' ⊗ R^-1X' hold.
For a map f:X Y in , the map is also a map in since the realizer exists as r_f.
Therefore, we can take a functor R: which sends objects to the same objects and maps to the same maps.
Similarly we can get R^-1 which sends objects to the same objects and maps to the same maps.
' is a bi--algebra by taking the left application x ' y := y x.
We define the monoidal structure (⊗',I') on in the same way as Proposition <ref>.
Here the realizers for tensor products are x ⊗' y_X ⊗' Y = {tq (p t)| p ∈x_X, q ∈y_Y }.
A realizer for R(I) I' is uu and a realizer for the inverse is u u.
A realizer for R(X ⊗ Y) RY ⊗' RX is upqtp (q t) u and a realizer for the inverse is uu qptt p q.
Similar for the realizers related to R^-1.
We can define “partial bi--algebras” naturally.
Similar to partial -algebras discussed in Remark <ref>, for a partial bi--algebra :
* is not generally a monoidal bi-closed category;
* adding an extra element , naturally extends to a total bi--algebra _;
* is the full subcategory of _.
Here the added element does not need to come in two copies (one for and one for ); a single one suffices.
§ SEPARATION OF CLASSES OF APPLICATIVE STRUCTURES
As we have already mentioned, the classes of applicative structures in this paper form a hierarchy summarized in the following table (Table <ref>).
However, we have not yet shown the strictness of the hierarchy.
To show the strictness of each inclusion, it is sufficient to provide an applicative structure separating the classes, that is, one belonging to one of the classes but not to the other.
In this section we give several such applicative structures, as summarized in Table <ref>.
§.§ Proofs of separations
First we show that the planar lambda calculus with a constant separates -algebras and -algebras.
Suppose a constant symbol c and add the following constant rule to the construction rules of planar lambda terms (See Example <ref> and <ref>).
(constant)
⊢ c
We assume no additional reduction rules about the constant.
That is, for instance, c (λ x.x) c has no redex.
Closed planar terms (which may contain c) modulo form a -algebra, which we call .
Even with the constant c added, the planar lambda calculus still has the properties of confluence and strong normalization.
is a -algebra but not a -algebra.
Hence,
-algebras ⊊ -algebras.
Assume that is a -algebra.
That is, assume there exist terms I and in such that I M M and M I M for any term M in .
We take I and as β-normal terms w.l.o.g.
If M N in , then M and N contain the same number of occurrences of c.
Thus, since c I c, I and cannot contain c.
* When c is β-normal, c I is also β-normal and obviously not equal to c.
This contradicts the confluence of the planar lambda calculus (with the constant c).
* When = λ u.J for some J and u, c I (J[c/u]) I.
* When J = λ v.J' for some J' and v, c I J'[c/u][I/v].
Suppose v receives just n arguments N_1,… ,N_n (n ≥ 0) in J'.
J' = C[v N_1 … N_n] for some context C[-] which contains u to the left of the hole [-].
For the β-normal form N of I N_1 … N_n, c I (C[N])[c/u].
(C[N])[c/u] is β-normal and obviously not equal to c.
This contradicts the confluence.
* Otherwise, J[c/u] I is β-normal and not equal to c.
This contradicts the confluence.
Next we show that the planar lambda calculus additionally employing the η-equality separates -algebras and -algebras.
Suppose three constant symbols c_1, c_2 and c_3 and add the following constant rules (i=1,2,3) to the construction rules of planar lambda terms.
(constant)
⊢ c_i
We assume no additional reduction rules about the constants.
Closed planar terms (that may contain constants) modulo form a -algebra, which we call .
Note that the equivalence relation of is the βη-equality, while that of (Example <ref>) is the β-equality.
We have λ xyz.x(yz) as a representation of in .
Indeed, for any term M, (λ xyz.x(yz)) M (λ w.w) λ z.Mz =_η M.
is a -algebra but not a -algebra.
Hence,
-algebras ⊊ -algebras.
Assume that there are some terms L and P in satisfying that for any terms M_1, M_2 and M_3, L M_1 (P M_2 M_3) M_1 M_2 M_3.
Taking M_1 = M_2 = M_3 := λ x.x, we see that L and P cannot contain constants.
Taking M_i := c_i, we have L c_1 (P c_2 c_3) c_1 c_2 c_3.
Since L is a closed planar term with no constants, the βη-normal form of L is of the form λ xy_1 … y_m.x N_1 … N_n (m,n ≥ 0).
Therefore, L c_1 (P c_2 c_3) (λ y_1… y_m.c_1 N_1 … N_n)(P c_2 c_3).
However, this term cannot be βη-equal to c_1 c_2 c_3, since c_1 cannot receive c_2 and c_3 as separate arguments no matter what form P takes.
Next we show that the freely constructed -algebra separates -algebras and bi--algebras.
We take as the freely constructed -algebra with two constants c_1 and c_2.
That is, elements of are constructed from , , , , , c_1 and c_2 using the application and the unary operation ().
The equality in is obtained by the axioms of -algebras and we do not assume any axioms on the constants.
is a -algebra but not a bi--algebra.
Hence,
bi--algebras ⊊ -algebras.
Assume that is a bi--algebra and write the right and left applications as and . Here this is the same application as that of as a -algebra, that is, MN and M N denote the same element.
By the combinatory completeness, there is an element M := xyx y in .
Since M = holds, this M cannot contain c_1 nor c_2.
For this M, M c_1 c_2 = c_2 c_1.
As we can see from the axioms of , , , , and ( `- ),
it is impossible for M in any form to exchange the order of two arguments c_1 and c_2 in M c_1 c_2.
Then it is also impossible for c_2 in any form to reduce M c_1 c_2 to c_2 c_1.
Finally we show the bi-planar lambda calculus (Example <ref>) separates bi--algebras and -algebras.
is a bi--algebra but not a -algebra.
Hence,
-algebras ⊊ bi--algebras.
Assume that there is some closed bi-planar lambda term C in such that for any closed bi-planar term M, N and L, C M N L M L N.
Let C' be the β-normal form of C xx.
C' M N N M holds for any M and N.
Take M := xxyy and N := xxyyzz.
Note that for any β-normal term P and a free variable w of P, P[M/w] and P[N/w] are β-normal.
* When C' M is β-normal, both C' M N and N M are β-normal.
However, obviously C' M N N M and this contradicts the confluence of the bi-planar lambda calculus.
* When C' = uC” for some C” and u, C' M N C”[M/u] N.
* When C” = vC”' for some C”' and v, C' M N C”'[M/u][N/v].
Since v is the rightmost free variable of C”', N is to the right of M in C”'[M/u][N/v].
Hence C”'[M/u][N/v] N M and this contradicts the confluence.
* Otherwise, C”[M/u] N is β-normal.
C”[M/u] N N M and this contradicts the confluence.
§.§ The planar lambda calculus is not a bi--algebra
The proofs of separations in the previous subsection are straightforward.
However, it is sometimes difficult to show that an applicative structure does not belong to a certain class of applicative structures.
In this subsection, as an example, we will show that of Example <ref> (the planar lambda calculus with no constant) is not a bi--algebra.
Compared with the propositions where constants exist (Propositions <ref> and <ref>), the proof is trickier.
For any term M of , there is a term N of such that NM λ x.x.
Since planar lambda terms always have β-normal forms uniquely, we can assume M is β-normal w.l.o.g.
We show this lemma by the induction on the number of bound variables of M.
When BV(M) is a singleton, M is λ x.x and N:=λ x.x satisfies NM λ x.x.
Assuming that the lemma holds whenever the number of bound variables of M is at most k, we will show that the lemma holds for M containing k+1 bound variables.
Since M is planar and β-normal, M = λ x y_1 … y_m .x P_1 … P_n for
some β-normal planar terms P_1, … , P_n. Here y_1 ,… , y_m are all the free variables of P_1 ,… , P_n.
Let Q_j be the term replacing all the y_i in P_j with λ z.z.
Each Q_j is a closed planar term and has at most k bound variables.
Hence, from the induction hypothesis, there exists some closed planar term R_j such that R_j Q_j λ x.x.
Take N' := λ w_1 … w_n.(R_1 w_1)… (R_n w_n) and N:= λ u.uN'(λ z_1.z_1)… (λ z_m.z_m).
Then N' and N are closed planar terms and
NM MN'(λ z_1.z_1)… (λ z_m.z_m)
= (λ x y_1 … y_m .x P_1 … P_n)N'(λ z_1.z_1)… (λ z_m.z_m)
N'Q_1 … Q_n
= (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n
(R_1 Q_1)… (R_n Q_n)
(λ x.x)… (λ x.x)
λ x.x.
is not a -algebra.
Assume that there is a term T in such that TMN NM for any M and N in .
(Note that a total applicative structure containing and is a -algebra iff it has such that x y = y x.
Indeed, ( ( ( )))
satisfies the axiom of the -combinator.)
Take a term λ x y_1 … y_m.x P_1 … P_n as the β-normal form of T.
If n=0, T=λ x.x and this immediately leads to a contradiction.
Thus n ≥ 1.
Since T MN NM for any M and N,
TM TMT TMTT TMTTT ….
Let Q_j (j=1,… ,n) be the terms replacing all the y_i in P_j with T.
Each Q_j is a closed planar term.
Let U := λ x.x Q_1 … Q_n.
UM = (λ x.x Q_1 … Q_n)M
M Q_1 … Q_n
= (M P_1 … P_n)[T/y_1]… [T/y_m]
(λ x y_1 … y_m.x P_1 … P_n) M T … T
= TMT… T
TM.
Thus UMN (TM)N NM holds for any M and N.
From Lemma <ref>, there exist closed terms R_j (j=1,… ,n) such that R_j Q_j λ z.z.
Take M_0 := λ w_1 … w_n.(R_1 w_1)… (R_n w_n).
Then for any closed planar term N,
NM_0 UM_0 N
= (λ x.x Q_1 … Q_n)M_0 N
M_0 Q_1 … Q_n N
= (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n N
(R_1 Q_1) … (R_n Q_n) N
(λ z.z)… (λ z.z) N
N.
Taking N_0 := λ x.x in N_0 M_0 N_0, we get M_0 = λ x.x.
Therefore, N (λ x.x) N holds for any closed planar term N.
However, N:= λ y.y(λ z.z) is a counterexample to this equation, which leads to a contradiction.
is not a bi--algebra.
Assume that is a bi--algebra.
That is, taking as the application canonically obtained by the application of planar lambda terms, assume that there is some binary operation such that (||,,) becomes a bi--algebra.
This is the binary operation not on planar lambda terms, but on β-equivalence classes of planar lambda terms.
However, in the sequel, we use a lambda term M and the equivalence class containing M interchangeably.
For instance, for planar lambda terms M_1 and M_2, M_1 M_2 denotes a representative of M_1M_2, where M_i is the β-equivalence class containing M_i.
By the combinatory completeness for bi--algebras, there is a closed planar term
L representing xyx y.
Take a term λ x y_1 … y_m.x P_1 … P_n as the β-normal form of L.
For a term T representing xyx (y ), dividing into the cases n=0 and n ≥ 1, we will show that T makes a -algebra, which contradicts Lemma <ref>.
If n=0, L=λ x.x and M N (L M) N M N holds for any M and N in .
Given arbitrary term N_0 in , take M := and N:= N_0 in M N M N.
Then we get
N_0
N_0.
For arbitrary M_0 and N_0 in ,
T M_0 N_0 = (xyx (y )) M_0 N_0
M_0 (N_0 )
M_0 N_0
N_0 M_0
holds.
Hence, T makes a -algebra and contradicts Lemma <ref>.
Next is the case of n ≥ 1.
Since L M N M N for any M and N,
L M L M (L) L M (L) (L) L M (L) (L) (L) ….
Let Q_j be the term replacing all the y_i in P_j with L.
Each Q_j is a closed planar term.
Let
V:= λ x.x Q_1 … Q_n.
VM = (λ x.x Q_1 … Q_n)M
M Q_1 … Q_n
= (M P_1 … P_n)[L/y_1]… [L/y_m]
(λ x y_1 … y_m.x P_1 … P_n)M(L)… (L)
= LM(L)… (L)
LM.
Thus VMN LMN M N holds for any M and N.
From Lemma <ref>, there exist closed terms R_j (j=1,… ,n) such that R_j Q_j λ z.z.
Take M_1 := λ w_1 … w_n.(R_1 w_1)… (R_n w_n).
Then for any closed planar term N,
M_1 N LM_1 N
= (λ x y_1 … y_m.x P_1 … P_n)M_1 N
M_1 Q_1 … Q_n N
= (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n N
(R_1 Q_1) … (R_n Q_n) N
(λ z.z)… (λ z.z) N
N.
Taking N := in M_1 N N, we get M_1 =.
Therefore, N_1 N_1 holds for any closed planar term N_1.
Given arbitrary N_2 in , with N_1:= N_2, we get
N_2 N_2
N_2 .
For arbitrary M_2 and N_2 in ,
T M_2 N_2 = xyx (y ) M_2 N_2
M_2 (N_2 )
M_2 N_2
N_2 M_2
holds. Hence, T makes a -algebra and contradicts Lemma <ref>.
We have already seen in Proposition <ref> that (the planar lambda calculus with constants) is not a -algebra.
However, whether is a -algebra is still open.
§.§ The computational lambda calculus
Next we consider the computational lambda calculus as an applicative structure that gives rise to non-symmetric structures.
The computational lambda calculus is a variant of the lambda calculus whose evaluation rules are sound for programs with computational effects <cit.>. The following axiomatization is from <cit.>.
Suppose infinite supply of variables x,y,z,….
Values, terms and evaluation contexts are defined as follows:
* (values) V ::= x | λ x.M
* (terms) M ::= V | MM'
* (evaluation contexts) E[] ::= [] | EM | VE
(Terms are the same as those of the ordinary lambda calculus in Example <ref>.)
An equivalence relation =_c on terms is defined as the congruence of the following equations:
* (β_V) (λ x.M)V =_c M[V/x]
* (η_V) λ x.Vx =_c V
* (β_Ω) (λ x.E[x])M =_c E[M]
Here E[M] denotes the term obtained by substituting M for [] in E[].
The (untyped) computational lambda calculus is the lambda calculus formed by terms and =_c.
In <cit.>, we showed that the computational lambda calculus is a -algebra but not a -algebra.
We can get a -algebra , whose underlying set is equivalence classes of lambda terms modulo =_c. (Note that terms of are not restricted to closed terms.)
Here λ xyz.x(yz), λ x.x, λ xy.yx and λ x.xM are representatives of , , and M respectively.
Although the computational lambda calculus has all the terms of the lambda calculus, is neither a PCA nor a -algebra.
This is reasonable considering that programs with effects cannot be discarded, duplicated, or exchanged in general, and thus cannot have the //-combinator.
Moreover, we can prove the next proposition.
is not a bi--algebra.
To prove this proposition, we use the CPS-translation <cit.>.
The CPS-translation sends terms of the computational lambda calculus to terms of the ordinary lambda calculus and is defined inductively as follows (a direct transcription in code is sketched after the clauses).
* x := λ k.kx
* λ x.M := λ k.k(λ x.M)
* MN := λ k.M (λ f. N (λ x.fxk))
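The three clauses can be transcribed almost verbatim. The following Haskell sketch is our own illustration; it reuses the fixed continuation variable names "k", "f" and "x", so it is only faithful on source terms that avoid those names (in general a fresh-name supply would be needed).

-- Plain lambda terms, serving for both the source and target calculi.
data Lam = LamVar String | LamAbs String Lam | LamApp Lam Lam
  deriving Show

-- The CPS-translation, clause by clause as above
-- (assumption: source terms do not use the reserved names "k", "f", "x").
cps :: Lam -> Lam
cps (LamVar x)   = LamAbs "k" (LamApp (LamVar "k") (LamVar x))
cps (LamAbs x m) = LamAbs "k" (LamApp (LamVar "k") (LamAbs x (cps m)))
cps (LamApp m n) =
  LamAbs "k"
    (LamApp (cps m)
       (LamAbs "f"
          (LamApp (cps n)
             (LamAbs "x"
                (LamApp (LamApp (LamVar "f") (LamVar "x")) (LamVar "k"))))))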
For any term M and N, M =_c N holds in the computational lambda calculus iff MN holds in the ordinary lambda calculus.
We will derive a contradiction by assuming that is a bi--algebra.
If is a bi--algebra, we have a term L representing xyx y and a term M representing xM x for each term M.
For any terms M_1 and M_2, L M_1 (M_2) =_c M_2 M_1 holds, and thus L M_1 (M_2)M_2 M_1 holds.
Now we take a fresh variable v and let M_2 := vv.
Additionally we take a fresh variable (fresh for L, M_2 and M_2) u and let M_1 := uu.
Then
L M_1 (M_2) = λ k. (λ k'.L (λ f'.M_1(λ x'.f'x'k'))) (λ f.M_2(λ x.fxk))
λ k. L (λ f'.M_1 (λ x' .f'x'(λ f.M_2(λ x.fxk))))
λ k. L (λ f'.uu (λ x' .f'x'(λ f.M_2(λ x.fxk)))),
M_2 M_1 λ k.vv(λ f.uu(λ x.fxk)).
In M_2 M_1, vv receives the argument of the form (… uu … ).
However, since u and v are fresh, no matter what L is, in L M_1 (M_2), vv cannot receive arguments containing uu.
Hence these terms L M_1 (M_2) and M_2 M_1 cannot be βη-equal.
This contradicts the soundness of the CPS-translation.
Semantically, the untyped ordinary/linear/planar lambda calculus is modeled by a reflexive object of a CCC/SMCC/closed multicategory.
And it is related to the categorical structures of assemblies on each lambda calculus.
On the other hand, the untyped computational lambda calculus is modeled by a reflexive object of a Kleisli category.
Since the categorical structure of a Kleisli category is not monoidal in general but premonoidal (See <cit.>), it is expected that the category of assemblies on the untyped computational lambda calculus is not a monoidal category.
Thus the computational lambda calculus is expected not to be a -algebra inducing a monoidal closed category; however, we have not proven this conjecture yet.
Here, we give an intuitive explanation for the conjecture.
Assume that and exist in the computational lambda calculus.
Take three non-values M_1, M_2 and M_3.
Suppose these terms are reduced to values: v_L; v_P; M_i v_i.
In M_1 M_2 M_3, the evaluation proceeds as follows:
M_1 is reduced to v_1 ⇝ M_2 is reduced to v_2 ⇝ v_1 v_2 is reduced ⇝ M_3 is reduced to v_3 ⇝ …
On the other hand, in M_1 (M_2 M_3), the evaluation proceeds as follows:
is reduced to v_L ⇝ M_1 is reduced to v_1 ⇝ v_L v_1 is reduced ⇝ is reduced to v_P ⇝ M_2 is reduced to v_2 ⇝ v_P v_2 is reduced ⇝ M_3 is reduced to v_3 ⇝ …
These two computations seem not to coincide, since the order of the evaluations of v_1 v_2 and M_3 is reversed.
§ NECESSARY CONDITIONS FOR INDUCING CLOSED STRUCTURES
We have seen that applicative structures of certain classes induce the corresponding categorical structures, in Proposition <ref> (CCCs), Proposition <ref> (SMCCs), Proposition <ref> (closed multicategories), Proposition <ref> (closed categories), Proposition <ref> (monoidal closed categories) and Proposition <ref> (monoidal bi-closed categories).
In this section, we show the certain “inverses” of these propositions hold.
Suppose is a total applicative structure and := happens to be a CCC.
is an -algebra if the following conditions hold.
* |Y^X| = _ (X,Y) and f_Y^X = { r |}.
* For f:X' X and g:Y Y', g^f : Y^X Y'^X' is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to strictly preserves finite products.
* The adjunction Φ: _ (X × Y, Z) _ (X, Z^Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Applying Φ to the first projection (a,a') ↦ a: A × A A,
we get a map k:A A^A, which sends a to (a' ↦ a).
(Here we use the conditions <ref>, <ref> and <ref> to clarify what the function k actually is.)
When we take as a realizer of k, this satisfies ∀ a,a' ∈, a a' = a.
Let ϕ : A A^A be the function sending a to the function x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
Applying Φ twice to the map from ((A^A)^A × A^A) × A to A defined as
((A^A)^A × A^A) × A
id × diagonal ((A^A)^A × A^A) × (A × A)
symmetry ((A^A)^A × A) × (A^A × A) ev × ev A^A × A ev A,
we get a map s:(A^A)^A (A^A)^(A^A) which sends a function g:A A^A to the function
(f:A A) ↦ (a ↦ g(a) (f(a))).
The map
A ϕ A^A ϕ^id (A^A)^A s (A^A)^(A^A)id^ϕ (A^A)^A
is the function
a ↦ (a' ↦ (a”↦ a a” (a' a”))).
(Here we use the conditions <ref> to clarify what the functions ϕ^id and id^ϕ actually are.)
Thus, when we take as a realizer of this map, satisfies x y z = x z (y z) for any x,y,z ∈.
To rephrase the proposition: to obtain a CCC by categorical realizability, being an -algebra is a necessary condition on the total applicative structure (under several conditions).
We will show the similar propositions for the other classes.
Combining the propositions in this section and the separations in the previous section, we can say that, for instance, the category of assemblies on an applicative structure that is a bi--algebra but not a -algebra (,) is indeed non-symmetric monoidal (as long as we try to take the symmetry in the canonical way).
When we try to prove the proposition replacing “total applicative structure” with “partial applicative structure” in Proposition <ref>, we cannot use the same proof.
This is because ϕ : A A^A is not always defined.
Indeed, when a a' is not defined in , ϕ (a) is not defined at a'.
It is still unclear whether we can prove a proposition similar to Proposition <ref> when is a partial applicative structure.
Suppose is a total applicative structure and := happens to be an SMCC.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to is a strict symmetric monoidal functor.
* The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Let ϕ : A (A A) be the function sending a to the function x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
Applying Φ twice to the map
((A A) ⊗ (A A)) ⊗ A
(A A) ⊗ ((A A) ⊗ A)
(A A) ⊗ A
A,
we get a map l: (A A) ((A A) (A A)), which sends g:A A to the function
(f:A A) ↦ g ∘ f.
The map
A ϕ (A A) l ((A A) (A A)) id ϕ ((A A) A)
is the function a ↦ (a' ↦ (a”↦ a (a' a”))).
Thus, when we take as a realizer of this map, satisfies x y z = x (y z) for any x,y,z ∈.
Applying Φ to the map
A ⊗ (A A) symmetry (A A) ⊗ A ev A,
we get a map c:A (A (A A)), which sends a to (f ↦ f(a)).
The map
A c (A (A A)) id ϕ (A A)
is the function a ↦ (a' ↦ a' a).
Thus, when we take as a realizer of this map, satisfies x y = y x for any x,y ∈.
Let := ( ( ( ))). Then xyz=xzy holds for any
x,y,z ∈.
Suppose is a total applicative structure and := happens to be a closed multicategory.
is a -algebra if the following conditions hold.
* (;X) = |X| and (;X) = X.
* (X;Y) = _ (X,Y) and (X;Y) =(_ (X,Y), ).
Here
f= { r |}.
* (X_1,… ,X_n;Y) = (X_1; (X_2,… ,X_n;Y)) and (X_1,… ,X_n;Y) is the underlying set of (X_1,… ,X_n;Y).
* For g:Y_1,… ,Y_n Z and f_l:X^l_1,… ,X^l_k_l Y_l, g ∘ (f_1,… ,f_n) is the function sending x^1_1,… ,x^1_k_1,… ,x^n_k_n to g(f_1 (x^1_1,… ,x^1_k_1),… ,f_n (x^n_1,… ,x^n_k_n)).
When k_l = 0 for some 1 ≤ l ≤ n, g ∘ (f_1,… ,f_n) is the function given y_l ∈ |Y_l| pointed by f_l as the l-th argument of g.
* ev_X_1,… ,X_n;Y sends f, x_1,… ,x_n to f(x_1,… ,x_n).
* Λ_Z_1,… ,Z_m;X_1,… ,X_n;Y sends a function (z_1,… ,z_m,x_1,… ,x_n ↦ f(z_1,… ,z_m,x_1,… ,x_n)) to the function (z_1,… ,z_m ↦ f(z_1,… ,z_m,-,… ,-)).
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Let ϕ : A (A;A) be the function sending a to the map x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
Take a map
b:A,A,A id,ϕ,id A, (A;A),A ϕ,ev (A;A),A ev A,
which sends (x,y,z) to x (y z) for any x,y,z ∈.
When we take as a realizer of Λ_A;A;(A;A) (Λ_A,A;A;A (b)),
x y z = (Λ_A;A;(A;A) (Λ_A,A;A;A (b)))(x)(y)(z)
= b(x,y,z)
= x (y z).
Given arbitrary a ∈, take a map f_a:A A as
A id,a A,A ϕ,id (A;A),A ev A,
which sends x ∈ to x a.
When we take a as a realizer of f_a, a x = x a for any x ∈.
Suppose is a total applicative structure and := happens to be a closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* i_X is the function sending a function (f:∗↦ x) to x.
* L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f.
In the condition <ref>, we assume that the unit object is a singleton {∗}.
The assumption can be derived from the condition <ref>.
Take an object X := ({ x_1,x_2 } , _X) by x_i_X :=.
From the condition <ref>,
|X I| is _ (I,X).
Since _ (I,X) = _ (|I|,{ x_1,x_2 }), |X I| = _ (|I|,{ x_1,x_2 }).
Also since X I ≅ X, |X I| ≅ |X| = { x_1,x_2 }.
_ (|I|,{ x_1,x_2 }) ≅{ x_1,x_2 } holds iff |I| is the singleton.
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Let ϕ : A (A A) be the function sending a to the function x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
The map
A ϕ (A A) L ((A A) (A A)) id ϕ ((A A) A)
is the function a ↦ (a' ↦ (a”↦ a (a' a”))).
Thus, when we take as a realizer of this map, satisfies x y z = x (y z) for any x,y,z ∈.
Since I ≅ (I I) and ∈id_I_I I, we can assume ∈∗_I w.l.o.g.
When we take as a realizer of i_A^-1:A (A I), satisfies a x = a for any a ∈ and x ∈∗_I, especially, a =a holds.
Given arbitrary a ∈, let g_a:I A be the function ∗↦ a.
g_a is realized by a.
The map
A ϕ (A A) id g_a (A I) i_A A
is the function a' ↦ a' a.
Thus, when we take a as a realizer of this map, a satisfies a x = x a for any x ∈.
Suppose is a total applicative structure and := happens to be a monoidal closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to is a strict monoidal functor.
* The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Applying Φ twice to the map
((Y X) ⊗ (X Z)) ⊗ Z
(Y X) ⊗ ((X Z) ⊗ Z)
(Y X) ⊗ X Y,
we get a map L^X_Y,Z: (Y X) ((Y Z) (X Z)).
This L is the natural transformation L of the closed category .
Applying Φ to the unitor ρ_X :X ⊗ I X, we get a map i^-1_X : X (X I).
The inverse map is the natural isomorphism i of the closed category .
We can easily check that and satisfies all the conditions of Proposition <ref> for these L and i.
Hence, is a -algebra.
Take an object A := (,_A), where a_A := { a }.
Let ϕ: A (A A) be the function sending a to the function x ↦ ax.
Here ϕ (a) is realized by a and ϕ is realized by .
Let l: A (A (A ⊗ A)) be the map obtained by applying Φ to
A ⊗ (A ⊗ A) (A ⊗ A) ⊗ A ((A A) ⊗ A) ⊗ A
A ⊗ A (A A) ⊗ A A,
and let be a realizer of l.
l is the function sending x to the function (y,z) ↦ xyz.
Also let be a realizer of p := Φ(id_A ⊗ A) : A ((A ⊗ A) A).
p is the function sending y to the function z ↦ (y,z).
Then for any x,y,z ∈, x ( y z) ∈l(x)(p(y)(z))_A and thus
x ( y z) = l(x)(p(y)(z))
= l(x)(y,z)
= xyz.
The proof of the next proposition, for monoidal bi-closed categories and bi--algebras, is a little more complicated than the proofs of previous propositions.
When we obtain a monoidal bi-closed category by a bi--algebra,
we take realizers of elements of the object X Y in as
f_X Y := { r ∈ || |}
(See the proof of Proposition <ref>).
However, in the next proposition we do not assume anything about the left application of, and thus we also cannot assume anything about realizers forX Y.
This makes the proof of existence forandcumbersome.
Suppose = (, ) is a total applicative structure and := happens to be a monoidal bi-closed category.
is a bi--algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to is a strict monoidal functor.
* The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
* |X Y| = _ (X,Y).
* f g : (X Y) (X' Y') is the function sending h:X Y to g ∘ h ∘ f.
* The adjunction Φ': _ (X ⊗ Y, Z) _ (Y, X Z) is the function sending a function f to the function y ↦ (x ↦ f(x,y)).
The conditions of this proposition include all the conditions of Proposition <ref>.
Hence, is a -algebra and have the combinatory completeness for the planar lambda calculus.
We take as the -combinator and as the -combinator of .
Take an object A := (,_A), where a_A := { a }.
Applying Φ to the evaluation map
ev_1:A ⊗ (A A) A, we get a map l:A (A (A A)), which sends a to (f ↦ f(a)).
Let _1 be a realizer of l and x y := _1 x y.
We will show that (,,) is a bi--algebra.
Let ϕ : A (A A) be the function sending a to the function (x ↦ a x).
Here ϕ(a) is realized by a and ϕ is realized by .
Given arbitrary a ∈, let a := x._1 x a.
For any x ∈,
a x = _1 x a
= x a.
Given arbitrary a ∈,
take a as an element of ϕ(a)_A A.
Then for any x ∈,
x a = _1 x a
= l(x)(ϕ(a))
= ϕ (a) (x)
= a x.
Furthermore, we can take as ().
Next we obtain .
Applying Φ' to
A ⊗ A ϕ(_1) ⊗ id A ⊗ A
ϕ⊗ id (A A) ⊗ A
ev A,
we get a map ϕ':A (A A), which sends a to (a' ↦ a' a).
Applying Φ' three times to
A ⊗ ((A A) ⊗ (A A))
associator (A ⊗ (A A)) ⊗ (A A)
ev ⊗ id A ⊗ (A A)
ev A,
we get a map p: I (A A) ((A A) (A A)).
Define a map b_1 as
I p (A A) ((A A) (A A)) ϕ' (ϕ' id) A (A (A A)),
which sends ∗ to x ↦ (y ↦ (z ↦ (z y) x)).
Take M_1 ∈b_1 (∗)_A (A (A A)).
Let _2 be a realizer of Φ (ev_2), where ev_2 :A ⊗ (A (A A)) (A A) is the evaluation map.
_2 realizes a map q:A (A A) that sends a to ϕ (_2 a).
Let _3 be a realizer of Φ (ev_3), where ev_3: A ⊗ (A (A (A A))) (A (A A)) is the evaluation map.
Take r:A A as a map sending x to _3 x M_1, whose realizer is x._3 x M_1.
Applying Φ' to
A ⊗ A q ⊗ r (A A) ⊗ A ev A,
we get a map b_2 : I (A (A A)), which sends ∗ to (x ↦ (y ↦_2 y (_3 x M_1))).
Take M_2 ∈b_2 (∗)_A (A A).
Let b_3:A A be a map sending x to _2 x M_2, whose realizer is x._2 x M_2.
When we take ∈b_3_A A, for any x ∈,
x = _1 x
= b_3 (x)
= _2 x M_2.
For any y ∈,
y (x ) = y (_2 x M_2)
= _1 y (_2 x M_2)
= b_2 (∗) (x) (y)
= _2 y (_3 x M_1).
For any z ∈,
z (y (x )) = z (_2 y (_3 x M_1))
= _1 z (_2 y (_3 x M_1))
= b_1(∗)(x)(y)(z)
= (z y) x.
Next we obtain .
Applying Φ' and Φ to
A ⊗ ((A (A A)) ⊗ A) associator (A ⊗ (A (A A))) ⊗ A ev ⊗ id (A A) ⊗ A ev A,
we get a map d:(A (A A)) ((A A) A), which sends a map (a ↦ (a' ↦ f(a,a'))) to the map (a' ↦ (a ↦ f(a,a'))).
When we take as a realizer of
A ϕ'
(A A) id ϕ
(A (A A)) d ((A A) A),
x ( y z)
= d(ϕ∘ (ϕ'(y)))(z)(x)
= (ϕ∘ (ϕ'(y)))(x)(z)
= (x y) z
for any x,y,z ∈.
Finally we obtain .
Applying Φ and Φ' to
(A ⊗ ((A A) A)) ⊗ A associator A ⊗ (((A A) A) ⊗ A) id ⊗ ev A ⊗ (A A) ev A,
we get a map d_1:((A A) A) (A (A A)), sending a map (a' ↦ (a ↦ f(a',a))) to the map (a ↦ (a' ↦ f(a',a))).
Take N_1 ∈d_1 ∘ (ϕ' id) ∘ϕ_A (A (A A)).
Let _4 be a realizer of Φ (ev_4), where ev_4: A ⊗ (A (A A)) (A A) is the evaluation map.
_4 realizes a map s:A (A A) sending x to ϕ (_4 x).
Let _5 be a realizer of a map obtained by applying Φ to
ev_5: A ⊗ (A (A (A A))) (A (A A))
and t:A A be a map sending a to _5 a N_1, whose realizer is x._5 x N_1.
Applying Φ' to
A ⊗ A s ⊗ t (A A) ⊗ A ev A,
we get a map d_2:A (A A) sending y to
(x ↦ (_4 x (_5 y N_1))).
Take a realizer N_2 ∈d_2_A (A A).
Let d_3:A A be a map sending x to _2 x N_2, whose realizer is x. _2 x N_2.
When we take ∈d_3_A A, for any y ∈,
y = _1 y
= d_3 (y)
= _2 y N_2.
For any x ∈,
x (y ) = x (_2 y N_2)
= _1 x (_2 y N_2)
= d_2 (y)(x)
= _4 x (_5 y N_1).
For any z ∈,
(x (y )) z = _4 x (_5 y N_1) z
= (d_1 ∘ (ϕ' id) ∘ϕ) (y)(x)(z)
= ((ϕ' ∘ (ϕ(y)))(z)(x)
= x (y z).
In this section we showed propositions for the necessary conditions to obtain certain structures on categories of assemblies.
Next, we consider whether similar propositions hold for categories of modest sets.
The next propositions can be proven in the same way as Propositions <ref>, <ref> and <ref>.
Suppose is a total applicative structure and := happens to be a CCC.
is an -algebra if the following conditions hold.
* |Y^X| = _ (X,Y) and f_Y^X = { r |}.
* For f:X' X and g:Y Y', g^f : Y^X Y'^X' is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to strictly preserves finite products.
* The adjunction Φ: _ (X × Y, Z) _ (X, Z^Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Suppose is a total applicative structure and := happens to be a closed multicategory.
is a -algebra if the following conditions hold.
* (;X) = |X| and (;X) = X.
* (X;Y) = _ (X,Y) and (X;Y) =(_ (X,Y), ), where
f= { r |}.
* (X_1,… ,X_n;Y) = (X_1; (X_2,… ,X_n;Y)) and (X_1,… ,X_n;Y) is the underlying set of (X_1,… ,X_n;Y).
* For g:Y_1,… ,Y_n Z and f_l:X^l_1,… ,X^l_k_l Y_l, g ∘ (f_1,… ,f_n) is the function sending x^1_1,… ,x^1_k_1,… ,x^n_k_n to g(f_1 (x^1_1,… ,x^1_k_1),… ,f_n (x^n_1,… ,x^n_k_n)).
When k_l = 0 for some 1 ≤ l ≤ n, g ∘ (f_1,… ,f_n) is the function obtained by supplying the element y_l ∈ |Y_l| picked out by f_l as the l-th argument of g.
* ev_X_1,… ,X_n;Y sends f, x_1,… ,x_n to f(x_1,… ,x_n).
* Λ_Z_1,… ,Z_m;X_1,… ,X_n;Y sends a function (z_1,… ,z_m,x_1,… ,x_n ↦ f(z_1,… ,z_m,x_1,… ,x_n)) to the function (z_1,… ,z_m ↦ f(z_1,… ,z_m,-,… ,-)).
Suppose is a total applicative structure and := happens to be a closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The underlying set of the unit object I is the singleton {∗}.
* i_X is the function sending a function (f:∗↦ x) to x.
* L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f.
Here note that Proposition <ref> has one more condition than Proposition <ref>, namely that the underlying set of the unit object is a singleton.
This is because the assembly X we used in Remark <ref> is not a modest set.
On the other hand, for the cases of SMCCs, monoidal closed categories and monoidal bi-closed categories, we cannot state propositions for modest sets similar to Proposition <ref>, <ref> and <ref>.
Since we define tensor products in categories of modest sets in the different way from those of categories of assemblies (as seen in the proof of Proposition <ref>), the condition “the forgetful functor fromtois strict monoidal" is not appropriate for the case of modest sets.
For the case of SMCCs, we can avoid this problem by presenting a more general proposition, namely for symmetric closed categories instead of SMCCs.
A symmetric closed category is a closed category with a natural isomorphism
S_X,Y,Z : (Z Y) X ≅ (Z X) Y
satisfying appropriate axioms ( <cit.>).
Suppose is a total applicative structure and := (or ) happens to be a symmetric closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f.
* S_X,Y,Z is the function sending f: x ↦ (y ↦ f(x)(y)) to S(f) : y ↦ (x ↦ f(y)(x)).
This proposition also shows that we cannot obtain(or) that is a symmetric closed category but not an SMCC, in the canonical way.
For the cases of monoidal closed categories and monoidal bi-closed categories, it is still not clear that there are any appropriate conditions to state propositions for modest sets similar to Proposition <ref> and <ref>.
§ PLANAR LINEAR COMBINATORY ALGEBRAS
In Section <ref>, we recalled LCAs and rLCAs, that relate-algebras and PCAs, and that induce categorical models of linear exponential modalities.
In this section, we apply the similar construction to-algebras.
We reformulate rLCAs for-algebras and PCAs, and call them exp-rPLCAs.
From an exp-rPLCA, we get a categorical model of!-modality on the non-symmetric multiplicative intuitionistic linear logic (MILL).
Also we reformulate rLCAs for-algebras and-algebras, and call them exch-rPLCAs.
From an exch-rPLCA, we obtain a model for an exchange modality relating the non-symmetric MILL and the symmetric MILL.
In <cit.>, we already introduced the same construction called “rPLCAs," based on bi--algebras.
What we define as rPLCAs in this section are generalizations of those in <cit.>, based on -algebras.
§.§ Exponential planar linear combinatory algebras
Linear exponential comonads on non-symmetric monoidal categories are investigated in <cit.>, which model!-modalities on non-symmetric MILL.
A linear exponential comonad on a monoidal category consists of the following data.
* A monoidal comonad (!, δ, ϵ, m, m_I). Here ! is an endofunctor on ,
δ_X : !X !!X and ϵ : !X X are monoidal natural transformations for the comultiplication and the counit. A natural transformation m_X,Y : !X ⊗ !Y !(X ⊗ Y) and a map m_I : I !I make ! a monoidal functor.
* Monoidal natural transformations e_X : !X I and d_X : !X !X ⊗ !X.
* A monoidal natural transformation σ_X,Y : !X ⊗ !Y !Y ⊗ !X defined as
!X ⊗ !Y δ_X ⊗δ_Y !!X ⊗ !!Y m_!X,!Y !(!X ⊗ !Y) d_!X ⊗ !Y !(!X ⊗ !Y) ⊗ !(!X ⊗ !Y)
!(e_X ⊗ id) ⊗ !(id ⊗ e_Y) !(I ⊗ !Y) ⊗ !(!X ⊗ I) !( unitor) ⊗ !( unitor) !!Y ⊗ !!X ϵ_!Y⊗ϵ_!X !Y ⊗ !X.
Here these components need to satisfy the following conditions.
* The following diagram commutes:
@C=30pt
!X ⊗ !X ⊗ !Y ⊗ !Y ⊗ !Z ⊗!Z [r]^id ⊗σ⊗ id[d]_id ⊗σ⊗ id
!X ⊗ !Y ⊗ !X ⊗ !Y ⊗ !Z ⊗ !Z [d]^m ⊗ m⊗ id
!X ⊗ !X ⊗ !Y ⊗ !Z ⊗ !Y ⊗ !Z [d]_id ⊗ m ⊗ m
!(X ⊗ Y) ⊗ !(X ⊗ Y) ⊗ !Z ⊗ !Z [d]^id ⊗σ⊗ id
!X ⊗ !X ⊗ !(Y ⊗ Z) ⊗ !(Y ⊗ Z) [d]_id ⊗σ⊗ id
!(X ⊗ Y) ⊗ !Z ⊗ !(X ⊗ Y) ⊗ !Z [d]^m ⊗ m
!X ⊗ !(Y ⊗ Z) ⊗ !X ⊗ !(Y ⊗ Z) [r]_m ⊗ m
!(X ⊗ Y ⊗ Z) ⊗ !(X ⊗ Y ⊗ Z)
* m_!Y ,!X∘σ_!X , !Y = !σ_X,Y∘ m_!X ,!Y.
* σ_X,Y^-1 = σ_Y,X.
* The following diagram commutes:
@C=50pt
!X ⊗ !Y ⊗ !Z [r]^δ_X ⊗δ_Y ⊗ id[d]_id ⊗σ_Y,Z
!!X ⊗ !!Y ⊗ !Z [r]^m_!X,!Y⊗ id
!(!X ⊗ !Y) ⊗ !Z [d]^σ_!X ⊗ !Y, Z
!X ⊗ !Z ⊗ !Y [rrd]_σ_X,Z⊗ id
!Z ⊗ !(!X ⊗ !Y) [d]^id ⊗ϵ_!X ⊗ !Y
!Z ⊗ !X ⊗ !Y
* The following diagram commutes:
@C=50pt
!X ⊗ !Y [r]^d_X ⊗ d_Y[d]_m_X,Y
!X ⊗ !X ⊗ !Y ⊗ !Y [r]^id ⊗σ⊗ id
!X ⊗ !Y ⊗ !X ⊗ !Y [d]^m ⊗ m
!(X ⊗ Y) [rr]_d_X ⊗ Y
!(X ⊗ Y) ⊗ !(X ⊗ Y)
* The following diagram commutes:
@C=50pt
I [rd]^m_I ⊗ m_I[d]_m_I
!I [r]_d_I !I ⊗ !I
* (!X, e_X,d_X) is a comonoid in .
* e_X and d_X are coalgebra morphisms.
* δ_X is a comonoid morphism.
Next we introduce categorical realizability inducing linear exponential comonads on non-symmetric monoidal categories.
The results are reformulations of part of the contents of <cit.> and <cit.> to the case of -algebras.
An exponential relational planar linear combinatory algebra (exp-rPLCA) consists of a -algebra and a comonadic applicative morphism (,,) on which satisfies the followings.
* There is ∈ || such that x ( y) ⊆{ x } for any x , y ∈ ||.
* There is ∈ || such that x ( y) ⊆ x ( y) ( y) for any x , y ∈ ||.
While the above definition employs a different style from the rLCAs of Definition <ref>, we can also define exp-rPLCAs in the same style.
For a -algebra and a comonadic applicative morphism (,,) on , the followings are equivalent.
* (,) is an exp-rPLCA.
* Take two total relations [,]: and k_i : as [,](x) := { a a' | a,a' ∈ x } and k_i (x) := {}.
Then they are applicative morphisms and ≼ [,] and ≼ k_i hold.
(1)⇒(2):
Realizers of [,] and k_i exist as pq. (r_ ( p)( q)) and .
Realizers for ≼ [,] and ≼ k_i are and .
(2)⇒(1):
Take a realizer r_1 of ≼ [,] and a realizer r_2 of ≼ k_i.
Then and exist as xy. x (r_2 y) and xy. x (r_1 y).
From an exp-rPLCA, we get a linear exponential comonad.
For an exp-rPLCA (, ), is a linear exponential comonad on .
* It is easy to see that the comultiplication δ and the counit ϵ are monoidal natural transformations.
From Proposition <ref>, the comonad is a lax monoidal functor and thus we have m_X,Y : X ⊗ Y (X ⊗ Y) and m_I : I I.
Therefore, we have as a monoidal comonad.
* e_X : X I is the function sending x to ∗. A realizer for e_X is .
* d_X : X X ⊗ X is the function sending x to x ⊗ x.
A realizer for d_X is ( pq. pq).
* It is easy to see that the (, e_X, d_X) satisfies conditions for linear exponential comonads.
Next we try to obtain linear-non-linear models for the non-symmetric MILL, that is, monoidal adjunctions between (non-symmetric) monoidal closed categories and CCCs.
Although we now have a linear exponential comonad on , at this point we cannot yet conclude that we obtain a linear-non-linear model, since we have not shown that the co-Kleisli adjunction between and is a monoidal adjunction.
To show this, we use the next proposition shown in <cit.>.
Let be a monoidal closed category and ! be a linear exponential comonad on . When has finite products, the co-Kleisli category _! is a CCC and the co-Kleisli adjunction is monoidal.
For an exp-rPLCA (, ), has Cartesian products, and thus the co-Kleisli adjunction between and a CCC is monoidal.
* The terminal object is ({∗}, ), where ∗ := ||.
* The underlying set of X × Y is |X| × |Y|. Realizers are defined as
(x,y) := { ( uv) a | }.
The set of realizers is not empty since for m ∈x_X and m' ∈y_Y,
( ( ( m)) ( ( m'))) () ∈(x,y).
* For maps f:X X' and g :Y Y' in , f × g is the function sending (x,y) to (f(x),g(y)).
A realizer of f × g is
uv. ( (r_ M u) (r_ N v)), where M ∈ ( r_f) and
N ∈ ( r_g).
* A realizer for the projection π :X × Y X is
( ( uv. ( uv))).
A realizer for the projection π' :X × Y Y is
( ( uv. ( uv))).
* For any object Z and any maps f:Z X and g:Z Y, there exists a unique map
h:Z X × Y such that π∘ h = f and π' ∘ h =g.
h is the function sending z to (f(z),g(z)), whose realizer is in
( ( r_f) ( r_g)).
For an exp-rPLCA(,), we can restrict :to the comonad on, as we saw in Remark <ref>.
By the same proof as the above, we also can get a linear-non-linear model using.
For an exp-rPLCA (, ), is a linear exponential comonad on .
Moreover, the co-Kleisli adjunction between the monoidal closed category and the CCC _ is monoidal.
We have seen that the co-Kleisli adjunctions obtained from an exp-rPLCA (, ) are linear-non-linear models by showing that and have Cartesian products.
We can further show that these categories have richer structure, as the next proposition says.
For an exp-rPLCA (, ), and are finitely complete and finitely cocomplete.
First we show the proposition for .
* The terminal object and binary products are those in the proof of Proposition <ref>.
* Given maps f,g :X Y, let Z be an assembly defined as |Z| := { x ∈ |X| | f(x) = g(x) } and x_Z := x_X.
Take a map e :Z X as the inclusion function, realized by .
Then it is easy to see that this e is the equalizer of f and g.
* The initial object is the empty set.
* Given maps f,g :X Y, take a set |W| := Y/∼, where ∼ is the smallest equivalence relation satisfying ∀ x ∈ |X|, f(x) ∼ g(x).
Take an assembly W = (|W|,_W) by w_W := ⋃_y ∈ wy_Y.
Take a map e':Y W by the projection, realized by .
Then it is easy to see that this e' is the coequalizer of f and g.
* The underlying set of X+Y is { (0,x) | x ∈ |X| }∪{ (1,y) | y ∈ |Y| }.
Realizers are defined as
(0,x) := { mp | p ∈x_X }
and
(1,y) := { nq | q ∈y_Y },
where
m := uv. ( u)( v) and n := uv. u ( v).
The coprojections in_X:X X+Y and
in_Y :Y X+Y are given as x ↦ (0,x) and y ↦ (1,y), and realized by m and n respectively.
Given maps f:X Z and g:Y Z realized by r_f and r_g, we have a unique map h:X+Y Z such that h ∘ in_X = f and h ∘ in_Y = g.
h is the function sending (0,x) to f(x) and (1,y) to g(y), which is realized by ( uv.u( r_f) ( r_g) v).
Therefore, is finitely complete and finitely cocomplete.
Since is the reflexive full subcategory of , is also finitely complete and finitely cocomplete.
As an adjoint pair between a -algebra and a PCA gives rise to an rLCA, an adjoint pair between a -algebra and a PCA gives rise to an exp-rPLCA and a monoidal adjunction.
Let (δ⊣γ): be an adjoint pair for a -algebra and a PCA .
* (, δ∘γ) forms an exp-rPLCA.
* (⊣): is a monoidal adjunction between the monoidal category and the Cartesian monoidal category .
* From Proposition <ref> (<ref>), δ∘γ is a comonadic applicative morphism.
Let and be elements for the counit and the comultiplication.
Then we can take ∈ as an element of xy. x ( (r_δ (δ M) y)), where M ∈ z.(γ).
Also we can take ∈ as an element of xy. x ( (r_δ (δ N) ( y))), where N ∈ z.r_γ (r_γ (γ) z) z.
* We show that the left adjoint is strong monoidal.
Let ∈ || and ∈ || be elements such that ∀ a ∈ ||, (δ (γ a)) =a and ∀ b ∈ ||, b ∈γ (δ b).
The map I 1 is realized by (δ).
The inverse 1 I is realized by a. (r_δ (δ ( b.(γ))) a).
The natural transformation ( X) ⊗ ( Y) (X × Y) is realized by
( aa'.r_δ (r_δ (δ ( bb't.tbb')) a) a').
The inverse map (X × Y) ( X) ⊗ ( Y) is realized by
u. (r_δ (δ M) u), where
M ∈ ( bb'.r_γ (r_γ (γ) ( b) ) ( b')).
Next we consider the functional case of exp-rPLCAs, like LCAs are the functional case of rLCAs.
An exponential planar linear combinatory algebra (exp-PLCA) is an exp-rPLCA (, ) that is functional.
Not only are exp-PLCAs special cases of exp-rPLCAs, but they can also induce adjoint pairs between -algebras and PCAs.
Let (, ) be an exp-PLCA.
* We have a PCA _ = (, @) with x @ y := x ( y).
* Let γ : _ be the identity function and δ: _ be the function x ↦ x. Then γ and δ are applicative morphisms and δ⊣γ.
* We have the -combinator in _ as xy. ( xy).
We have the -combinator as xyz. (M x)(r_ (r_ ()( y))( z)), where
M:= xyz. x ( () ( y)) ( ( uv. r_ u ( v)) ( z)).
* Realizers of γ and δ are xy.x( y) and xy.r_ x( y). A realizer for δ∘γ≼ id_ is and for id__≼γ∘δ is .
Next we give a (functional) adjoint pair between a -algebra and a PCA.
This example is a reformulation of the linear lambda calculus with!( <cit.>) to a planar variant.
Suppose an infinite supply of variables x,y,z,….
Terms are defined grammatically as follows.
M ::= x | MM' | λ x.M | M ⊗ M' | x ⊗ x'MM' | !M | λ !x.M
Here x in λ x.M must be the rightmost free variable of M, appear exactly once in M and not occur within the scope of any !.
Also we assume that for x ⊗ x'MM', x' and x are the rightmost and the next rightmost free variables of N, appear exactly once in N and do not occur within the scope of any !.
Take an equational relation on terms as the congruence of the following equational axioms.
* (λ x.M)N = M[N/x].
* M = λ x.Mx.
* (λ !x.M)(!N) = M[N/x].
* x ⊗ x'M ⊗ M'N = N[M/x][M'/x'].
* M= x ⊗ yMx ⊗ y.
Let Λ be the set of equivalence classes of closed terms.
Then we get a -algebra , whose underlying set is Λ and the application is that of lambda terms.
Also we get a PCA = (Λ,@), where M@N := M (!N).
Here the -combinator and the -combinator of exist as λ !x.λ !y.x and λ !x.λ !y.λ !z. x(!z) (!(y(!z))).
Take an applicative morphism γ: as the identity function whose realizer is λ !x. λ !y.xy.
Take δ : as a function M ↦ !M whose realizer is λ !x.λ !y. !(x(!y)). Then we have an adjoint pair δ⊣γ.
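As a small executable illustration of the PCA structure just described (this snippet and its encoding are my own addition, not part of the construction: terms are replaced by Python closures and the ! modality by an ad hoc Bang wrapper), one can check that the application M @ N := M(!N) together with the combinators λ !x.λ !y.x and λ !x.λ !y.λ !z. x(!z)(!(y(!z))) satisfies the usual PCA equations:

class Bang:                       # stand-in for the ! modality, wrapping a value
    def __init__(self, val):
        self.val = val

def at(M, N):                     # PCA application  M @ N := M(!N)
    return M(Bang(N))

K = lambda bx: (lambda by: bx.val)                       # lambda!x.lambda!y.x
S = lambda bx: (lambda by: (lambda bz:                   # lambda!x.lambda!y.lambda!z. x(!z)(!(y(!z)))
        bx.val(Bang(bz.val))(Bang(by.val(Bang(bz.val))))))

# sample arguments: x expects two !-arguments, y expects one, z is a plain value
x = lambda bu: (lambda bw: ('x', bu.val, bw.val))
y = lambda bu: ('y', bu.val)
z = 'z'

assert at(at(K, 'a'), 'b') == 'a'                        # K @ a @ b = a
assert at(at(at(S, x), y), z) == at(at(x, z), at(y, z))  # S @ x @ y @ z = (x@z)@(y@z)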
Just as we can construct an LCA from a "reflexive object" in a "weak linear category" (see <cit.> and <cit.>), we can obtain exp-PLCAs in an appropriate setting.
A weak planar linear category (WPLC) consists of:
* a monoidal closed category (,⊗,I) (not symmetric in general);
* a monoidal functor (!,m,m_I) on ;
* a monoidal pointwise natural transformation ! id_;
* a monoidal pointwise natural transformation ! !!;
* a monoidal pointwise natural transformation ! ! ⊗ !;
* a monoidal pointwise natural transformation ! K_I, where K_I is the constant I functor.
Here a pointwise natural transformationγ:F G is a family of maps γ_C :F(C) G(C)
(C ∈ Ob()) satisfying that G(f) ∘γ_I = γ_C ∘ F(f) for any f:I C.
A WPLC does not require all of the conditions for linear exponential comonads (Definition <ref>).
For instance, a WPLC does not require that ! be a comonad, nor the (ordinary) naturality of each transformation.
Let (,!) be a WPLC.
We say V is a reflexive object when there are:
* a retraction p: !V ◃ V :q;
* an isomorphism r:(V V) V and s := r^-1;
* a retraction t: (V ⊗ V) ◃ V:u.
As we saw in Example <ref>, for a reflexive objectVof a WPLC, := (I,V)forms a-algebra.
Furthermore, by givingas an endofunction sendingM:I Vtop ∘ (!M) ∘ m_I,(,)becomes an exp-PLCA.
The proof is the same as for WLCs and LCAs in <cit.>.
§.§ Exchange planar linear combinatory algebras
Exchange modalities on the Lambek calculus and their categorical models are introduced in <cit.>.
While the word “Lambek calculus" may indicate various logics, type systems or grammars ( <cit.>), here we call the Lambek calculus as a variant of non-symmetric MILL with left and right implications.
The Lambek calculus is modeled by monoidal bi-closed categories.
While the order of arguments cannot be exchanged in the Lambek calculus, the Lambek calculus can be extended to a sequent calculus that allows swapping arguments with modalities.
This sequent calculus is called the commutative/non-commutative (CNC) logic, that is composed of two (commutative and non-commutative) logics, and the exchange modality connects these two parts.
Categorical models of the CNC logic are given as monoidal adjunctions between monoidal bi-closed categories and SMCCs, that are called Lambek adjoint models.
In this subsection, we introduce the similar construction to the previous subsection, inducing Lambek adjoint models.
An exchange relational planar linear combinatory algebra (exch-rPLCA) consists of a -algebra and a comonadic applicative morphism (ξ,,) on with ∈ satisfying x (ξ y) (ξ z) ⊆ x (ξ z) (ξ y) for any x,y,z ∈.
When ξ is functional, we call (,ξ) an exchange planar linear combinatory algebra (exch-PLCA).
For an exch-rPLCA (, ξ), the co-Kleisli category is an SMCC and the co-Kleisli adjunction between and is monoidal.
* We define tensor products in as X Y := (|X| × |Y|,), where
x y := { pq |}.
* For maps f:X X' and g:Y Y' in , f g is the function sending x y to f(x) g(y).
A realizer of f g is z. M ( z), where
M ∈ pq. (r_ξ (ξ r_f) ( p)) (r_ξ (ξ r_g) ( q)).
* We define the unit object J of as ({∗}, _J), where ∗_J := {}.
* A realizer for the left unitor λ_X:J X X is u. ( p. p ) ( u).
A realizer for the inverse λ_X^-1 is in (ξ).
* A realizer for the right unitor ρ_X:X X J is in p. p (ξ).
A realizer for the inverse ρ_X^-1 is u. () ( u).
* A realizer for the associator α_XYZ :(X Y) Z X (Y Z) is u. ( v. M ( v)) ( u), where M ∈ pqr. p (r_ξ (r_ξ (ξ) ( q) ( r))).
A realizer for α_X^-1 is u. ( vw. (M' v) ( w)) ( u), where M' ∈ pq. (r_ξ (r_ξ (ξ) ( p) ( q))).
* The symmetry σ_XY:X Y Y X is the function sending x y to y x.
A realizer for σ_XY and σ_XY^-1 is u. () ( u).
* For objects X and Y, the exponential in is Y X = (_( X, Y), ), where f := { r ∈|}.
* For maps f:X' X and g:Y Y' in , g f is the function sending a map h:X Y in to g ∘ ( h) ∘ d_X ∘ ( f) ∘ d_X', where d_X : X X is the comultiplication of .
A realizer for g f is uv. r_g (r_ξ u ( (r_ξ (ξ r_f) ( v)))).
* The evaluation map ev_XY : (Y X) X Y is the function sending f x to f(x), that is realized by u. ( u).
* For any map f:Z X Y in , there exists a unique map g:Z Y X in , which sends z to x ↦ f(z x).
g is realized by uv.r_f (r_ξ (r_ξ (ξ) ( u)) ( v)).
* Finally we show that the co-Kleisli functor : is strong monoidal.
We can take natural isomorphisms J I and (X Y) X ⊗ Y in as the identity functions.
Realizers for J I and (X Y) X ⊗ Y are .
A realizer for J I is in u.u(ξ).
A realizer for X ⊗ Y (X Y) is in uv. r_ξ (r_ξ (ξ) ( u)) ( v).
The next proposition for categories of modest sets can also be shown in the same way as the above proposition.
Here, since X Y in the above proof is not generally a modest set, we take the tensor product ⊠ in _ in the same way as in Proposition <ref>.
That is, we take X ⊠ Y = (|Z|,_Z) with |Z| := (|X| × |Y|)/≈, where ≈ and _Z are defined as in the proof of Proposition <ref>.
For an exch-rPLCA (, ξ), the co-Kleisli category _ is an SMCC and the co-Kleisli adjunction between is monoidal.
Suppose is a bi--algebra and (, ξ) is an exch-rPLCA.
Then we have a Lambek adjoint model as the co-Kleisli adjunction between the monoidal bi-closed category and the SMCC (or between and _).
Similar to exp-rPLCAs, adjoint pairs between-algebras and-algebras correspond to exch-rPLCAs.
Let (δ⊣γ): be an adjoint pair for a -algebra and a -algebra .
* (,δ∘γ) forms an exch-rPLCA.
* (⊣): is a monoidal adjunction between the monoidal category and an SMCC . If is a bi--algebra, the adjunction is a Lambek adjoint model.
* From Proposition <ref> (<ref>), δ∘γ is a comonadic applicative morphism.
We can take
in as xyz. x ( (M ( y)( z))), where M ∈ y.r_δ (r_δ (δ N) y) and
N ∈ yz.r_γ(r_γ (γ) z)y.
* It follows from Proposition <ref>.
Similar to exp-PLCAs, exch-PLCAs induce adjoint pairs between-algebras and-algebras.
Let (, ξ) be an exch-PLCA.
* We have a -algebra _ξ = (,@) with x @ y := x(ξ y).
* Let γ:_ξ be the identity function and δ:_ξ be the function x ↦ξ x. Then γ and δ are applicative morphisms and δ⊣γ.
* We have the -combinator in _ξ as x. ( x).
* Same as the proof of Proposition <ref> (<ref>).
For an example of an exch-PLCA, we have a calculus similar to that of Example <ref>.
Suppose an infinite supply of variables x,y,z,….
Terms are defined grammatically as follows.
M ::= x | MM' | λ x.M | M ⊗ M' | x ⊗ x'MM' | ξ M | λ^ξ x.M
Here x in λ x.M must be the rightmost free variable of M, appear exactly once in M and not occur within the scope of any ξ.
x in λ^ξ x.M needs to appear exactly once in M.
Also we assume that for x ⊗ x'MM', x' and x are the rightmost and the next rightmost free variables of N, appear exactly once in N and do not occur within the scope of any ξ.
The rest is the same as Example <ref>.
Finally we give an example of an exch-PLCA based on the set T of Example <ref>.
This example is similar to the one introduced in <cit.>.
Let T and | | be the same set and function defined in Example <ref>.
First we give a -algebra _e from T.
Take |_e| as the powerset of { t ∈ T | |t| =e },
and a binary operation ⊚ on |_e| as M⊚ N := { t_2 |∃ t_1 ∈ N ,(t_2 t_1) ∈ M }.
Then _e = (|_e|,⊚) is a -algebra, where
* = { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T };
* = { (t_3 t_2 t_1) (t_3 t_1 t_2) | |t_1| = |t_2| = |t_3| =e };
* = { t_1 t_1 | t_1 ∈ T }.
Take γ: || |_e| as the function sending M to { t t | t ∈ M} and δ : |_e| || as the inclusion function.
Then these function forms an (functional) adjoint pair (δ⊣γ): _e.
Here corresponding realizers are
* { ((t_2 t_2) (t_1 t_1)) ((t_2 t_1) (t_2 t_1)) | t_1,t_2 ∈ T } realizing γ;
* { t_1 t_1 | t_1 ∈ T } realizing δ;
* { (t t) t | |t| = e } realizing id ≼γ∘δ;
* { t (t t) | |t| ≥ e } realizing δ∘γ≼ id.
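To make the operation ⊚ concrete, here is a small Python sketch (an added illustration with an ad hoc encoding: types are nested tuples, the base type is 'e', and the length constraint from Example <ref> is ignored). It checks that a finite truncation of the identity-combinator set { t_1 t_1 | t_1 ∈ T } acts as a left identity, and that applying to a nearly empty right argument destroys information, in line with the remark below.

# Types are nested tuples: base type 'e', and ('->', t2, t1) for an implication
# with target t2 and source t1.  The encoding and the finite truncations are
# illustrative assumptions only.
def arrow(t2, t1):
    return ('->', t2, t1)

def app(M, N):
    # M ⊚ N = { t2 | there is t1 in N with (t2 <- t1) in M }
    return {t[1] for t in M if isinstance(t, tuple) and t[0] == '->' and t[2] in N}

M = {'e', arrow('e', 'e')}
I_M = {arrow(t, t) for t in M}        # finite truncation of the identity combinator

assert app(I_M, M) == M               # the identity-combinator set acts as a left identity
assert app(I_M, set()) == set()       # a nearly empty right argument loses all information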
The above construction also can be applied to obtain exch-PLCAs on ' of Example <ref> and on ” of Example <ref>.
While we obtained exch-PLCAs from T, the same construction cannot be applied to obtain exp-PLCAs.
If we try to get some PCA of subsets of T, employing M⊚ N := { t_2 |∃ t_1 ∈ N ,(t_2 t_1) ∈ M } as the binary operation, ⊚ M ⊚ N = M hardly ever holds, since the left-hand side often loses information about M when N is nearly empty.
As we saw in Proposition <ref>, exp-rPLCAs can be defined in the style using not the combinators and , but the applicative morphisms [,] and k_i.
It is still unclear whether we can define exch-rPLCAs in the latter style, without using the combinator .
If we can characterize exch-rPLCAs in the latter style, we might construct exch-PLCAs from reflexive objects in the same way as for exp-PLCAs and WPLCs (Definition <ref>).
§ RELATED WORK
This paper is an extended version of the earlier papers by the author <cit.>.
One result of <cit.> not covered in this paper is the class of "()^∘-algebras" of applicative structures. ()^∘-algebras are more general than -algebras, and give rise to skew closed categories of assemblies (or modest sets).
Skew closed categories, introduced in <cit.>, are categories with similar closed structures to closed categories, though some conditions needed in closed categories are not assumed.
(For instance, the natural transformationi_X : (X I) Xin a skew closed category is not necessarily invertible.)
Although skew closed categories and closed multicategories are generalizations of closed categories in different directions, from Proposition <ref>, we can say that we cannot (canonically) obtain(or) that is a closed multicategory but not a skew closed category.
Details of these results are given in Appendix <ref>.
Skew monoidal categories introduced in <cit.>
are categories with the same components as monoidal categories but natural transformations (left and right unitors and associators) do not need to be invertible.
The relationship between skew monoidal categories and skew closed categories is similar to that between monoidal categories and closed categories.
Recalling the proof of Proposition <ref>, we find that we useonly to realizeρ_X^-1:X ⊗ I X.
The invertibility ofρ_Xis not assumed in skew monoidal categories.
Thus, when we haveas a “()-algebra,” we can show thatis a skew monoidal category.
In <cit.>, the “extensionality” of combinatory algebras is investigated.
The extensionality defined in that paper is a more general condition than the standard one seen in <cit.>.
By the extensionality in <cit.>, we can deal with polynomials and combinatory completeness for combinatory algebras that cannot be stated in the same way as Definition <ref> and Proposition <ref>, such as the braided case.
In our study, we do not need the discussion of extensionality to state the combinatory completeness appearing in this paper; however, assuming extensionality on an applicative structure may induce additional structure on and .
For instance, for an “extensional”-algebra, since the-combinator always satisfies the axiom of,andbecome closed categories.
There are many other possible ways to define classes of applicative structures than by the existence of certain combinators, and extensionality is one such way.
The definition of bi--algebras may look like “dual combinators” introduced in <cit.>.
Similar to bi--algebras, in the binary operations of dual combinators, elements can act on elements from both the left and the right side.
However, a dual combinatory logic has only one sort of application, whereas a bi--algebra has two sorts of applications.
Also, the reductions of dual combinatory logic are not confluent, while the bi-planar lambda calculus is confluent.
In this paper, we have referred to several logics and their categorical models without recalling detailed definitions.
See <cit.> for linear logic.
For the MILL and the categorical models that we deal with in this paper, see <cit.>.
As for the Lambek calculus, the term "Lambek calculus" has various meanings as a logic; in this paper we use it to mean a variant of non-symmetric MILL with left and right implications.
Our treatment of the Lambek calculus and its categorical semantics is from <cit.>.
The basics of the Lambek calculus are in <cit.>.
In <cit.>, the relationships between the planar lambda calculus and planar graphs are investigated.
In that paper, the bijection between rooted trivalent planar graphs and closed planar lambda terms is given, and it is shown that such graphs can be generated by combining a few kinds of “imploid moves.”
The theory corresponds to the combinatory completeness of-algebras and the planar lambda calculus.
Similarly, we can give the bijection between rooted trivalent planar graphs and closed bi-planar terms, but here the rooted trivalent planar graphs need to have two-colored ("left" and "right") vertices.
§ CONCLUSION
In section <ref> and <ref>, we introduced several classes of applicative structures and showed that they induce closed structures on categories of assemblies and categories of modest sets, as in Table <ref>. (The results for-algebras are newly presented in this paper.)
In section <ref>, we showed that these classes are different ones by giving several examples.
In section <ref>, we presented propositions that categorical structures ofinduce structures of, under some conditions.
(The propositions for-algebras and bi--algebras are newly shown in this paper.)
By combining the above results, we can say, for instance, that we have with a truly non-symmetric bi-closed structure, by using that is a bi--algebra but not a -algebra.
In section <ref>, we introduced exp-rPLCAs and exch-rPLCAs that give rise to categorical models for the linear exponential modality and the exchange modality on the non-symmetric MILL.
As an adjoint pair between a-algebra and a PCA induces an rLCA, an adjoint pair between-algebras and a PCA/-algebra induces an exp-rPLCA/exch-rPLCA.
Finally we give three issues for future work.
First, there are several unsolved problems we mentioned in this paper.
Those that we consider important are:
* to show that the computational lambda calculus is not a -algebra (refer Section <ref>);
* to clarify conditions needed to show that is a PCA when (or ) is a CCC (refer Remark <ref>);
* to clarify conditions needed to show that is a -algebra/bi--algebra when is a monoidal closed category/monoidal bi-closed category (refer the end of Section <ref>).
Second, most examples given in this paper are the standard ones like the term models.
We would like to find more interesting examples of applicative structures and adjoint pairs, that should be useful for investigating non-commutative logics and their models in a systematic way.
Third, for various categorical structures not given in this paper, we want to clarify what we need to construct them via categorical realizability.
For instance, we have said (in section <ref>) that we cannot give(nor) that is a symmetric closed category but not an SMCC, in canonical ways.
Also we cannot give(nor) that is a closed multicategory but not a skew closed category.
As an example not yet mentioned, we cannot make a braided monoidal category but not an SMCC.
Although there is a class of applicative structures, ^±-algebras, corresponding nicely to the structure of braided monoidal categories and the braided lambda calculus (investigated in <cit.>), the construction of cannot reflect the difference between the two sorts of braids (realized by ^+ and ^-) and turns braids into the symmetry.
To give the categorical structures listed above, we need to change the construction of(and), rather than trying to give conditions on applicative structures.
For instance, to make a braided monoidal category (not an SMCC), we may need to change the fact that the construction of is based on , which is not only braided but also symmetric.
§ ACKNOWLEDGMENT
I would like to thank Masahito Hasegawa for a lot of helpful advice, discussions and comments.
This work was supported by JST SPRING, Grant Number JPMJSP2110.
alphaurl
§ -ALGEBRAS, -ALGEBRAS AND SKEW CLOSED CATEGORIES
Though the classes of applicative structures appearing in this paper are subclasses of -algebras, this does not mean that realizability constructions for closed structures all require -algebras.
Indeed, in <cit.>, we introduced-algebras, which is a more general class than-algebras and gives rise to skew closed categories.
First we recall the definition of skew closed categories from <cit.>.
A (left) skew closed category consists of the following data:
* a locally small category ;
* a functor (- -):^op×, called the internal hom functor;
* an object I, called the unit object;
* an natural transformation i_X : (X I) X;
* an extranatural transformation j_X : I (X X);
* a transformation L_Y,Z^X : (Z Y) ((Z X) (Y X)) natural in Y and Z and extranatural in X,
such that the following axioms hold:
* ∀ X,Y ∈, L_Y,Y^X ∘ j_Y = j_(Y X);
* ∀ X,Y ∈, i_(Y X)∘ (id_(Y X) j_X) ∘ L_X,Y^X = id_(Y X);
* ∀ X,Y,Z,W ∈, the following diagram commutes:
@C=-25pt@R=30pt (W Z) [dl]_L_Z,W^X[dr]^L_Z,W^Y
(W X) (Z X) [d]^-L_(Z X),(W X)^(Y X)
((W Y) (Z Y)) [dd]^-L_Y,W^X id
((W X) (Y X)) ((Z X) (Y X)) [drr]_-id L_Y,Z^X
((W X) (Y X)) (Z Y)
* ∀ X,Y ∈, (i_Y id_(X I)) ∘ L_X,Y^I = id_Y i_X;
* i_I ∘ j_I = id_I.
A skew closed category is called left normal when the function γ : (X,Y)(I,Y X) sending f:X Y to
(f id_X) ∘ j_X is invertible for any X,Y ∈.
There is a categorical structure called skew monoidal categories introduced in <cit.>, which have the same components as monoidal categories but the invertibility of unitors and associators are not assumed.
Skew closed categories are the categorical structures determined from skew monoidal categories, like closed categories are determined from monoidal categories.
Obviously, closed categories are also left normal skew closed categories.
We investigated categorical realizability for skew closed categories in <cit.> and next we recall some of the results.
A total applicative structure is a -algebra iff it contains , , and a for each a ∈ is an element of such that ∀ x,y ∈, (a) xy = x (ay).
Since(a) xy=x(ay), any-algebra is also a-algebra.
By the similar way to the proof of Proposition <ref>, we can show the class of-algebras is different from the class of-algebras by using a freely constructed-algebra (with constants).
When is a -algebra, and are left normal skew closed categories.
The proof is almost the same as Proposition <ref>.
Here, for maps f and g, we give a realizer of (g f) as (r_f) ( r_g).
It is still not clear whether needs to be a -algebra to make (or ) a skew closed category, as in the propositions of Section <ref>.
(In a setting similar to Proposition <ref> and Proposition <ref>, though we can show the existence of , and (), we cannot show there is .)
Since-algebras are-algebras, the next holds.
When is a -algebra, and are skew closed categories.
From Proposition <ref>, we can say that we cannot (canonically) obtain(nor) that is a closed multicategory but not a skew closed category.
Although closed multicategories are a generalized closed categorical structure in a different direction from skew closed categories, skew closed categories are more general than closed multicategories as the categorical structures appearing in categories of assemblies.
Moreover, when constructing applicative structures from reflexive objects, skew closed categories can even give -algebras, just as closed multicategories do.
Suppose a skew closed category and an object V with a retraction
r : (V V) ◃ V :s.
Then (I,V) forms a -algebra.
* For M,N:I V, the application is defined as
I M V s V V id_V N V I i_V V.
* The -combinator is
I j_V V (V V) (V V) L_V,V^V s ((V V) (V V)) V
(r s) id_V V V V r id_V V V r V.
* The -combinator is r ∘ j_V.
* Given arbitrary M:I V, M is
I j_V V V s id_V (V V) V (id_V M) id_V (V I) V i_V id_V V V r V.
Suppose a closed multicategory and an object V with a retraction
r : (V;V) ◃ V :s.
Then (;V) forms a -algebra.
* For M,N ∈(;V), the application is defined as
M,N V,V s,id_V(V;V),V ev V.
* Take a map f:V,V,V V as
V,V,V id_V,s,id_V V,(V;V),V s,ev(V;V),V ev V.
The -combinator is given as r ∘Λ_;V;V (r ∘Λ_V;V;V(r ∘Λ_V,V;V;V(f))). Here Λ is the function in Definition <ref>.
* The -combinator is r ∘Λ_;V;V(id_V).
* Given arbitrary M ∈(;V), M is r ∘Λ_;V;V(ev ∘ (s,M)).
When we assume the retraction r : (V V) V of Example <ref> is an isomorphism, the -combinator further satisfies the axiom of and (I,V) forms a -algebra.
Similarly, when we assume the retraction r : (V;V) V of Example <ref> is an isomorphism, the -combinator satisfies the axiom of and (;V) forms a -algebra.
http://arxiv.org/abs/2307.04502v1 | 20230710114651 | Modular Completely Dirichlet forms as Squares of Derivations | [
"Melchior Wirth"
] | math.OA | [
"math.OA",
"math-ph",
"math.FA",
"math.MP",
"quant-ph"
] |
We prove that certain closable derivations on the GNS Hilbert space associated with a non-tracial weight on a von Neumann algebra give rise to GNS-symmetric semigroups of contractive completely positive maps on the von Neumann algebra.
§ INTRODUCTION
The interplay between derivations and symmetric semigroups of unital (or contractive) completely positive maps has proven fruitful for applications in quantum information theory <cit.>, operator algebras <cit.> and beyond. Using the framework of completely Dirichlet forms, this connection is particularly well-understood in the case of tracially symmetric semigroups after the seminal work of Cipriani and Sauvageot <cit.>.
In many situations however one encounters non-tracial reference states or weights: In quantum statistical mechanics, the reference state is typically a Gibbs state, which is not a trace at finite temperature; in quantum probability in the study of Lévy processes on compact quantum groups, the natural reference state is the Haar state, which is only a trace for the class of compact quantum groups of Kac type; and in the structure theory of von Neumann algebras, one is faced with non-tracial states when the von Neumann algebra has a non-trivial type III summand.
In the non-tracial setting, the connection between derivations and symmetric semigroups of completely positive maps is much less understood. Recently, it was shown by the author that every GNS-symmetric semigroup of unital completely positive maps gives rise to a canonical derivation via its associated Dirichlet form <cit.>. This result was (partially) extended to KMS-symmetric semigroups by Vernooij and the author <cit.>.
There has also been work in the opposite direction – starting with a derivation to construct a completely Dirichlet form <cit.>. However, these results all rely on additional structural assumptions on the derivation, usually some form of (approximate) innerness. This means that natural examples like derivations arising from cocycles on non-unimodular groups or Voiculescu's derivation in non-tracial free probability could not be treated in this framework.
In this article, we prove in a general context that closable derivations give rise to GNS-symmetric semigroups of completely bounded maps. More precisely, our main result is the following.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule overÅandδÅ→$̋ a closable symmetric derivation. Let ℰ be the closure of the quadratic form _0 given by (_0)=(δ) and _0(a)=δ(a)_^̋2. Then the strongly continuous semigroup associated with is the GNS implementation of a GNS-symmetric semigroup of contractive completely positive maps on the left von Neumann algebra generated by Å.
Here a normal Tomita bimodule is a bimodule over a Tomita algebra that additionally carries a complex one-parameter group (_z) and an involution satisfying some compatibility conditions, and a symmetric derivation δÅ→$̋ is a map that intertwines the complex one-parameter groups and involutions onÅand$̋ and satisfies the product rule
δ(ab)=aδ(b)+δ(a)b.
These objects were introduced in <cit.> and appear to be the natural non-tracial analogs of the Hilbert bimodule and derivation occurring in the context of completely Dirichlet forms on tracial von Neumann algebras.
Combined with the results from <cit.>, we thus obtain a comprehensive picture of GNS-symmetric quantum Markov semigroups analogous to the result of Cipriani and Sauvageot for tracially symmetric semigroups. Among other potential applications, we hope that this result opens the gate for applications to non-tracial free probability and deformation/rigidity theory of type III von Neumann algebras similar to recent work in this direction in the tracial case.
One main difficulty when trying to prove that closable derivations generate completely Dirichlet forms (or semigroups of completely positive maps) is that the property defining derivations, the product rule, is an algebraic property, while Dirichlet forms are defined in terms of order properties, and the domain of a derivation is not necessarily closed under order operations. As such, the problem of properly dealing with domains is crucial. Note that it is unavoidable to allow for unbounded derivations as everywhere defined derivations yield norm continuous semigroups of completely positive maps, which is too restrictive for many applications.
In the tracial case, this difficulty can be overcome since order operations such as taking the positive part can be expressed in terms of functional calculus and as such can be approximated by polynomials. In the non-tracial case, the order operations can still be expressed in terms of functional analysis in the setup of Haagerup L^p spaces, but the product rule is formulated in terms of Hilbert algebra multiplication, which is different from the product of two operators in Haagerup L^2 (which is only in L^2 if it is zero). Therefore it is not clear how to connect the two.
Instead of trying to follow the proof in the tracial setting, our proof strategy instead relies on Haagerup reduction method, which allows to embed a von Neumann algebra as an expected subalgebra of a bigger von Neumann algebra that can be approximated by finite von Neumann algebras. As it turns out, this reduction method is well-suited to reduce the problem at hand to the known case of tracial von Neumann algebras. One key challenge are again domain issues: For the Haagerup construction one has to extend the derivation to a domain on a crossed product that is sufficiently big, but such that the extension still satisfies the product rule. The essential new technical ingredient to overcome this kind of domain problems lies in the introduction of a new locally convex topology on the domain of a derivation that allows to extend derivations to derivations on a completion.
As a final note, considering the results from <cit.>, it is a natural question whether the results from the present article can be extended to cover KMS-symmetric semigroups. For one, our methods crucially use commutation with the modular group, which fails for KMS-symmetric maps if they are not GNS-symmetric. But more severely, it seems like there are additional algebraic obstructions, already in finite dimensions: It is shown in <cit.> that if is a completely Dirichlet form on L^2(M_n(),ϕ), then there exist self-adjoint matrices v_j∈ M_n() such that
(ρ^1/4xρ^1/4)=∑_j tr(ρ^1/4[v_j,x]ρ^1/4^2),
where ρ is the density matrix inducing the state ϕ on M_n(). However, without further assumptions on the operators v_j, the quadratic form on the right side of the previous equation is not necessarily a completely Dirichlet form.
§.§ Outline of the article
In Section <ref> we recall some basics regarding modular theory, completely Dirichlet forms on standard forms of von Neumann algebras and Tomita bimodules and derivations. In Section <ref> we introduce a topology on the domain of a derivation, the δ-topology, and show that derivations can be extended to derivations on the completion in the δ-topology. In Section <ref> we give a closability criterion for derivations in our setting. In Section <ref> we discuss how derivations can be extended to crossed products and discuss how completely Dirichlet forms behave with respect to change of the reference weight. Then we state and prove the main result of this article, Theorem <ref>, showing that the quadratic form associated with a closable derivation is a modular completely Dirichlet form. Finally, in Section <ref> we discuss several classes of examples, including inner derivations, derivations arising in non-tracial free probability and derivation induced by cocycles on (possibly non-unimodular) locally compact groups.
§.§ Acknowledgments
The author was funded by the Austrian Science Fund (FWF) under the Esprit Programme [ESP 156]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
§ BASICS
In this section we briefly recap some material concerning modular theory and in particular Hilbert and Tomita algebras, completely Dirichlet forms, Tomita bimodules and derivations that is used in the later sections.
§.§ Modular theory
As our approach is formulated in the language of Hilbert and Tomita algebras, we summarize the relevant definitions here. Our treatment mostly follows <cit.>.
An algebra Å with involution ^♯ (resp. ^♭) and inner product ⟨ · ,· ⟩ is called left (resp. right) Hilbert algebra if
* for every a∈Å the map π_l(a)Å→Å, b↦ ab (resp. b↦ ba) is bounded,
* ⟨ ab,c⟩=⟨ b,a^♯ c⟩ (resp. ⟨ ab,c⟩=⟨ b,ca^♭⟩) for all a,b,c∈Å,
* the involution ^♯ (resp. ^♭) is closable,
* the linear span of all products ab with a,b∈Å is dense in Å.
Let M be a von Neumann algebra and ϕ a normal semi-finite faithful weight on M. We write _ϕ for the definition ideal {x∈ M|ϕ(x^∗ x)<∞} and (π_ϕ,L^2(M,ϕ),Λ_ϕ) for the associated semi-cyclic representation.
The prototypical example of a left Hilbert algebra is Å=Λ_ϕ(_ϕ∩_ϕ^∗) with the product Λ_ϕ(x)Λ_ϕ(y)=Λ_ϕ(xy), the involution Λ_ϕ(x)^♯=Λ_ϕ(x^∗) and the inner product inherited from L_2(M,ϕ), that is, ⟨Λ_ϕ(x),Λ_ϕ(y)⟩=ϕ(x^∗ y). In this case, π_l(Å)^''=π_ϕ(M). We write Å_ϕ for this left Hilbert algebra.
Conversely, every left Hilbert algebra Å gives rise to a von Neumann algebra π_l(Å)^'' acting on the completion of Å and a weight
ϕπ_l(Å)^''_+→ [0,∞], ϕ(x)=ξ^2 if x^1/2=π_l(ξ),
∞ otherwise.
If Å is a full left Hilbert algebra <cit.>, then ϕ is a normal semi-finite faithful weight on π_l(Å)^'', and Å is canonically isomorphic to Å_ϕ.
Let be the completion of the left Hilbert algebra Å. Since the involution ^♯ on Å is closable, its closure S on exists and has a polar decomposition S=JΔ^1/2. The operator Δ is a non-singular positive self-adjoint operator, called the modular operator, and J is an anti-unitary involution, called the modular conjugation. If Å is the left Hilbert algebra associated with a weight ϕ, we write Δ_ϕ and J_ϕ for the associated modular operator and modular conjugation. We write Λ_ϕ^'_ϕ^∗→ L_2(M,ϕ) for the map x↦ J_ϕΛ_ϕ(x^∗).
If Å is full, the modular conjugation J gives rise to the positive self-dual cone P={π_l(a)Ja| a∈Å} and π_l(Å)^'' is in standard form <cit.>.
The modular operator Δ gives rise to a point weak^∗ continuous group of automorphisms x↦Δ^itxΔ^-it on π_l(Å)^''. If ϕ is a normal semi-finite faithful weight on M, the group σ^ϕ given by σ^ϕ_t(x)=π_ϕ^-1(Δ_ϕ^itπ_ϕ(x)Δ_ϕ^-it) is called the modular group associated with ϕ.
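In finite dimensions these objects can be written down explicitly. The following numpy sketch is an added illustration (assumptions: M = M_2(ℂ), ϕ = tr(ρ ·) for an invertible density matrix ρ, GNS space M_2(ℂ) with the Hilbert–Schmidt inner product and Λ_ϕ(x) = xρ^{1/2}, in which case Δ(a) = ρ a ρ^{-1} and J(a) = a^∗); it verifies the polar decomposition S = JΔ^{1/2} of the closure S of Λ_ϕ(x) ↦ Λ_ϕ(x^∗), and that Δ^{it} implements the modular group σ^ϕ_t:

import numpy as np

lam = np.array([0.7, 0.3])                       # spectrum of the density matrix rho
r = lambda s: np.diag(lam ** s)                  # rho^s, also for purely imaginary s

Lam = lambda x: x @ r(0.5)                       # Lambda_phi(x) = x rho^{1/2}
S = lambda a: r(-0.5) @ a.conj().T @ r(0.5)      # closure of Lambda_phi(x) -> Lambda_phi(x*)
Delta = lambda a, s: r(s) @ a @ r(-s)            # powers Delta^s of the modular operator
J = lambda a: a.conj().T                         # modular conjugation

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

assert np.allclose(S(Lam(x)), Lam(x.conj().T))            # S Lambda(x) = Lambda(x*)
assert np.allclose(S(Lam(x)), J(Delta(Lam(x), 0.5)))      # S = J Delta^{1/2}

t = 0.37
sigma_t = r(1j * t) @ x @ r(-1j * t)                      # modular automorphism sigma_t(x)
assert np.allclose(Delta(Lam(x), 1j * t), Lam(sigma_t))   # Delta^{it} Lambda(x) = Lambda(sigma_t(x))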
If (α_t)_t∈ is a point weak^∗ continuous group of ∗-automorphisms on M, then an element x∈ M is called entire analytic if the map t↦α_t(x) has an extension z↦α_z(x) to the complex plane such that z↦ω(α_z(x)) is analytic for every ω∈ M_∗. The entire analytic elements form a weak^∗ dense ∗-subalgebra of M.
A Tomita algebra is a left Hilbert algebra Å endowed with a complex one-parameter group (U_z)_z∈ of algebra automorphism such that
* z↦⟨ a,U_z b⟩ is analytic for all a,b∈Å,
* (U_z a)^♯=U_z̅(a^♯) for all a∈Å, z∈,
* ⟨ U_z a,b⟩=⟨ a,U_-z̅b⟩ for all a,b∈Å, z∈,
* ⟨ a^♯,b^♯⟩=⟨ U_-ib,a⟩ for all a,b∈Å.
Note that every Tomita algebra becomes a right Hilbert algebra when endowed with the involution
Å→Å,a↦ a^♭=U_-i(a^♯).
For a full left Hilbert algebra Å let
Å_0={ξ∈⋂_n∈D(Δ^n) | Δ^nξ∈Å for all n∈}.
For every ξ∈Å_0 the map t↦Δ^itξ has an entire analytic extension z↦ U_zξ with U_zξ∈Å_0 for all z∈. This makes Å_0 into a Tomita algebra such that π_l(Å_0)^''=π_l(Å)^''.
In particular,
(Å_ϕ)_0={Λ_ϕ(x)| x∈_ϕ∩_ϕ^∗, x entire analytic for σ^ϕ}.
§.§ Completely Dirichlet forms
Completely Dirichlet forms in the non-tracial setting were introduced by Goldstein and Lindsay <cit.> in the language of GNS Hilbert spaces of states (or weights) and by Cipriani <cit.> in the language of standard forms with a fixed cyclic vector. Our approach is somewhat different from both of these formulations in that we use left Hilbert algebras, but in view of the previous subsection it is equivalent to the formulation by Goldstein–Lindsay (and to that of Cipriani in case the left Hilbert algebra has a unit).
Let Å be a full left Hilbert algebra with completion . Let C be the closure of {Δ^1/4a| a∈Å, 0≤π_l(a)≤ 1} and let P_C be the metric projection onto C. We say that a closed densely defined quadratic form on is a Dirichlet form with respect to Å if ∘ J= and (P_C(a))≤(a) for all a∈ with Ja=a.
The Dirichlet form is called completely Dirichlet form if for every n∈ the quadratic form
^(n)⊗ M_n()→ [0,∞], ^(n)([ξ_ij])=∑_i,j=1^n (ξ_ij)
is a Dirichlet form with respect to Å⊙ M_n(). Here M_n() carries the normalized Hilbert–Schmidt inner product and the multiplication and involution on Å⊙ M_n() are given by [a_ij][b_ij]=[∑_k a_ikb_kj], [a_ij]^♯=[a_ji^♯].
A (completely) Dirichlet form with respect to Å is called modular (or GNS-symmetric) if ∘ U_t= for all t∈.
Completely Dirichlet forms are of particular interest for their connection to semigroups of contractive completely positive maps on von Neumann algebras. Let us briefly sketch this correspondence. Proofs can be found in <cit.> for the wider class of KMS-symmetric semigroups. The result for GNS-symmetric semigroups follows from the fact that GNS symmetry is equivalent to KMS symmetry and commutation with the modular group (see <cit.> for example).
Let M be a von Neumann algebra. A quantum dynamical semigroup is a semigroup of normal contractive completely positive operators on M that is continuous in the point weak^∗ topology. If ϕ is a normal semi-finite faithful weight on M, a quantum dynamical semigroup (P_t) is called GNS-symmetric with respect to ϕ if ϕ∘ P_t≤ϕ for all t≥ 0 and
ϕ(P_t(x)^∗ y)=ϕ(x^∗ P_t(y))
for all x,y∈_ϕ and t≥ 0.
Every GNS-symmetric quantum dynamical semigroup gives rise to a strongly continuous semigroup (T_t) on L^2(M,ϕ), its GNS implementation, acting by T_tΛ_ϕ(x)=Λ_ϕ(P_t(x)) for x∈_ϕ, and the associated quadratic form is a modular completely Dirichlet form with respect to Å_ϕ. Vice versa, the strongly continuous semigroup associated with a modular completely Dirichlet form is the GNS implementation of a GNS-symmetric quantum dynamical semigroup.
We call a completely Dirichlet form a quantum Dirichlet form if the associated quantum dynamical semigroup consists of unital maps. A criterion in terms of the form itself is given in <cit.>.
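The following numerical sketch is an added toy example of this correspondence (ad hoc choices, not from the text: M = M_3(ℂ), ϕ = tr(ρ ·), generator L(x) = [v,[v,x]] for a self-adjoint matrix v commuting with ρ, so that P_t = e^{-tL} is of the simplest "square of a derivation" type); it checks unitality and the GNS-symmetry condition ϕ(P_t(x)^∗ y)=ϕ(x^∗ P_t(y)) numerically:

import numpy as np
from scipy.linalg import expm

lam = np.array([0.5, 0.3, 0.2]); rho = np.diag(lam)       # faithful state phi = tr(rho .)
v = np.diag([1.0, -0.5, 2.0])                             # self-adjoint, commutes with rho
phi = lambda x: np.trace(rho @ x)

L = lambda x: v @ (v @ x - x @ v) - (v @ x - x @ v) @ v   # L(x) = [v, [v, x]]

# matrix of L on M_3(C) ~ C^9, column-major vectorization: vec(A X B) = (B^T kron A) vec(X)
I3 = np.eye(3)
Lmat = np.kron(I3, v @ v) - 2 * np.kron(v.T, v) + np.kron((v @ v).T, I3)
P = lambda t, x: (expm(-t * Lmat) @ x.reshape(-1, order='F')).reshape(3, 3, order='F')

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
y = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
t = 0.8

assert np.allclose((Lmat @ x.reshape(-1, order='F')).reshape(3, 3, order='F'), L(x))
assert np.allclose(P(t, np.eye(3)), np.eye(3))                               # unital
assert np.isclose(phi(P(t, x).conj().T @ y), phi(x.conj().T @ P(t, y)))      # GNS symmetry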
§.§ Tomita bimodules and derivations
Tomita bimodules were introduced in <cit.> as codomains of the derivations associated with modular completely Dirichlet forms.
Let Å be a Tomita algebra. A Tomita bimodule over Å is an inner product space $̋ endowed with non-degenerate commuting left and right actions ofÅ, an anti-isometric involution→̋$̋ and a complex one-parameter group (_z) of isometries such that
* ‖ aξ b‖≤‖π_l(a)‖ ‖π_r(b)‖ ‖ξ‖ for a,b∈Å, ξ∈$̋,
* ⟨ aξ b,η⟩=⟨ξ, a^♯η b^♭⟩ for a,b∈Å, ξ,η∈$̋,
* _z(aξ b)=(U_z a)(_z ξ)(U_z b) for a,b∈Å, ξ∈$̋, z∈,
* (aξ b)=(Jb)(ξ)(Ja) for a,b∈Å, ξ∈$̋,
* _z =_z̅ for z∈.
Let ̋̅ be the completion of $̋. The first two bullet points imply thatπ_l(a)↦(ξ↦aξ)extends to a non-degenerate∗-homorphism fromπ_l(Å)toB(̋̅). If this map can be extended to a normal∗-homomorphism fromπ_l(Å)^''toB(̋̅), then we say that$̋ is a normal Tomita bimodule. Requiring normality for the right action instead leads to the same notion of normal Tomita bimodule.
If Å is a Tomita algebra and $̋ a bimodule over Å, we call a linear map δ: Å→$̋ a derivation if it satisfies the product rule
δ(ab)=aδ(b)+δ(a)b
for a,b∈Å. If $̋ is a Tomita bimodule over Å, we say that a derivation δ: Å→$̋ is symmetric if δ∘ J=∘δ and δ∘ U_z=_z∘δ for all z∈.
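As a quick added sanity check (a toy computation, not part of the text): for the commutator map δ(a) = [v,a] on M_2(ℂ) — inner derivations being among the example classes mentioned in the outline — the product rule can be verified numerically:

import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal((2, 2)); v = (v + v.T) / 2        # self-adjoint
delta = lambda a: v @ a - a @ v                           # inner derivation delta(a) = [v, a]

a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

assert np.allclose(delta(a @ b), a @ delta(b) + delta(a) @ b)   # product rule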
If Å is a full left Hilbert algebra and a modular quantum Dirichlet form with respect to Å, it is shown in <cit.> that
Å_={a∈Å_0| U_z a∈() for all z∈}
is a Tomita subalgebra of Å_0 and a core for . Moreover, by <cit.> there exists a Tomita bimodule $̋ overÅand a symmetric derivationδÅ_→$̋ such that
(a,b)=⟨δ(a),δ(b)⟩_
for a,b∈Å_.
Under the minimality condition =̋lin{δ(a)b| a,b∈Å_}, the pair (,̋δ) is uniquely determined by up to isometric isomorphism preserving the Tomita bimodule structure and intertwining the derivations <cit.>. By a slight abuse of notation, any such pair (,̋δ) is called the first-order differential structure associated with . If $̋ is a normal Tomita bimodule, the quantum Dirichlet formis called Γ-regular. A characterization in terms of the carré du champ is given in <cit.>.
§ Δ-TOPOLOGY AND COMPLETENESS
In this section we introduce a locally convex topology on the domain of a closable symmetric derivation, called theδ-topology. This topology is strong enough to ensure that the derivation extends to a derivation on the completion, which is a key technical ingredient in the proof of the main theorem later.
For the definition of theδ-topology recall that the Mackey topologyτ(M,M_∗)on a von Neumann algebraMis the finest linear topology𝒯onMsuch that the topological dual of(M,𝒯)isM_∗. Equivalently, it is the finest locally convex topology onMthat coincides with the strong^∗topology on norm bounded sets <cit.>. It has the advantage over the other usual locally convex topologies onMof being complete, which is convenient for several of the following arguments.
LetÅbe a Tomita algebra with completion, let$̋ be a normal Tomita bimodule over Å and δÅ→$̋ a symmetric derivation. We define theδ-topology𝒯_δonÅas the coarsest locally convex topology that makes the maps
Å→⊕̋̅, a↦(Δ^n a,δ(Δ^n a))
continuous with respect to the norm topology on⊕̋̅for alln∈and the maps
Å→ B(), a↦π_l(Δ^n a)
continuous for the Mackey topology onB()for alln∈. Clearly, theδ-topology is stronger than the topology induced by the graph norm(·_^2+δ(·)_^̋2)^1/2.
If is a Γ-regular modular quantum Dirichlet form and (,̋δ) the associated first-order differential structure, then Å_ is complete in the δ-topology.
Let (a_j) be a Cauchy net in Å_ with respect to the δ-topology. In particular, (Δ^n a_j,δ(Δ^n a_j))_j is Cauchy in ⊕̋̅ for all n∈. Since Δ^n and δ are closable on Å_, it follows that there exists a∈ such that (Δ^n a_j,δ(Δ^n a_j))→ (Δ^n a,δ̅(Δ^n a)) for all n∈. In particular, a∈⋂_n∈(Δ^n) and Δ^n a∈(δ̅)=() for all n∈.
Moreover, as the Mackey topology is complete, for n∈ there exists x_n∈ B() such that π_l(Δ^n a_j)→ x_n with respect to τ(B(),B()_∗). For b∈Å_ we have
x_n b=lim_j π_l(Δ^n a_j)b=lim_j π_r(b)Δ^n a_j=π_r(b)Δ^n a.
Since Å_ is dense in , it follows that Δ^n a∈Å_^'' and π_l(Δ^n a_j)→π_l(Δ^n a) for all n∈.
Altogether we conclude that a∈Å_ and a_j→ a in the δ-topology.
If Å is a Tomita algebra, $̋ a Tomita bimodule overÅandδÅ→$̋ a closable symmetric derivation, the inclusion of Å into its completion extends to an injective map from the completion of Å in the δ-topology to .
We have to show that if (a_j) is a Cauchy net in Å with respect to the δ-topology and a_j→ 0 in , then a_j→ 0 in the δ-topology. Since Δ^n, n∈, and δ are closable, we have (Δ^n a_j,δ(Δ^n a_j))→ 0 for all n∈. Furthermore, using the completeness of the Mackey topology and a similar argument as in the previous lemma, one sees that π_l(Δ^n a_j)→ 0 in τ(B(),B()_∗) for all n∈. Hence a_j→ 0 in the δ-topology.
LetÅ^δdenote the set of all elementsa∈for which there exists a net(a_j)inÅsuch thata_j→ainand(a_j)is Cauchy in theδ-topology. By the previous lemma,Å^δis a completion ofÅin theδ-topology, and we call it simply theδ-completion ofÅ. It is not hard to see thatÅ^δis a Tomita subalgebra of(Å^'')_0and contained in(δ̅).
Recall that ifis a normal Tomita bimodule overÅ, we can continuously extend the left and right action ofÅand the mapsand_t,t∈, to the Hilbert completion̋̅. This is usually not possible for_z,z∈∖. We define
^̋a={ξ∈̋̅| t↦_tξ has an entire extension}.
If it exists, this entire extension is unique and will be denoted byz↦_zξ. Clearly,⊂̋^̋aand_z⊂_zfor allz∈.
If we endow ^̋a with the coarsest locally convex topology that makes
^̋a→̋̅, ξ↦_inξ
continuous for all n∈, then ^̋a is complete.
If A is the unique non-singular positive self-adjoint operator in ̋̅ such that _t=A^it for t∈, then ^̋a=⋂_n∈(A^n) and ^̋a is the projective limit of the Banach spaces ((A^n)∩(A^-n),·_̋̅+A^n · _̋̅+A^-n · _̋̅) in the topology described in the lemma. In particular, ^̋a is complete.
Since$̋ is a normal Tomita bimodule over Å, the Hilbert completion ̋̅ has a canonical structure of a π_l(Å)^''-π_l(Å)^'' correspondence determined by
π_l(a)·ξ· Jπ_r(b)^∗ J=aξ b
for a,b∈Å and ξ∈$̋.
Ifa∈Å^δ,(a_j)is a net inÅsuch thata_j→ain theδ-topology andξ∈̋̅, then
_t(π_l(a)·ξ)=lim_j _t(π_l(a_j)ξ)=lim_j π_l(U_t a_j)_tξ=π_l(U_t a)·_tξ.
Thus, ifξ∈^̋a, thenz↦π_l(U_z a)·_zξis an entire continuation oft↦_t(π_l(a)·ξ), which impliesπ_l(a)ξ∈^̋a. Likewise, ifb∈Å^δ, thenξ·Jπ_r(b)^∗J∈^̋a. It is then routine to check that the bimodule structure given byaξb=π_l(a)·ξ·Jπ_r(b)^∗J, then group(_z)_z∈and the restriction ofmake^̋ainto a Tomita bimodule overÅ^δ.
If Å is a Tomita algebra, $̋ is a Tomita bimodule overÅandδÅ→$̋ is a closable symmetric derivation with closure δ̅, then δ̅(Å^δ)⊂^̋a and δ̅Å^δ→^̋a is a symmetric derivation.
If a∈Å^δ and (a_j) is a net in Å such that a_j→ a in the δ-topology, then
_tδ̅(a)=lim_j _t δ(a_j)=δ(U_t a_j)=δ̅(U_t a).
It follows that t↦_tδ̅(a) has the entire continuation z↦δ̅(U_z a), which implies δ̅(a)∈̋̅^a. Again, routine computations show that the restriction of δ̅ to Å^δ is a symmetric derivation from Å^δ to ^̋a.
§ CLOSABILITY OF DERIVATIONS
In this section we give a simple criterion for the closability of derivations inspired by a well-known result (see <cit.> and <cit.> for the non-tracial case) on the closability of the derivation used in free probability.
IfÅis a Tomita algebra and$̋ is a Tomita bimodule over Å, we say that ξ∈$̋ is a bounded vector if there existsC>0such thataξb≤Cabfor alla,b∈Å. In this case, the mapsa↦aξandb↦ξbextend to bounded linear operators from the completionofÅto$̋, which we denote by R(ξ) and L(ξ), respectively.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a derivation. If δ(Å) is contained in the space of bounded vectors, then (δ^∗) is a subbimodule of $̋ and
δ^∗(aξ b)=a^∗δ^∗(ξ)b-L(δ(a^∗))^∗(ξ b)-R(δ(b^∗))^∗ (aξ)
for a,b∈Å and ξ∈(δ^∗).
Let a,b,c∈Å and ξ∈(δ^∗). By the product rule,
⟨ aξ b,δ(c)⟩ =⟨ξ,a^∗δ(c)b^∗⟩
=⟨ξ,δ(a^∗ c b^∗)-δ(a^∗)cb^∗-a^∗ cδ(b^∗)⟩
=⟨ aδ^∗(ξ)b-L(δ(a^∗))^∗(ξ b)-R(δ(b^∗))^∗ (aξ),c⟩.
Thus aξ b∈(δ^∗) and the claimed identity for δ^∗(aξ b) holds.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a derivation. If δ(Å) is contained in the space of bounded vectors and (δ^∗) is a cyclic subset, then δ is closable.
By the previous lemma, (δ^∗) is a subbimodule of $̋. Hence, if (δ^∗) is cyclic, then it is dense in $̋. Therefore, δ is closable.
§ COMPLETELY DIRICHLET FORMS ASSOCIATED WITH CLOSABLE DERIVATIONS
In this section we prove the main theorem of this article, Theorem <ref>, showing that the closure of the quadratic form associated with a closable symmetric derivation is a modular completely Dirichlet form.
As mentioned in the introduction, we rely on Haagerup's reduction method. To set the stage for its use, we first discuss crossed products of Tomita algebras and Tomita bimodules. To extend closable symmetric derivations to a sufficiently large domain on the crossed product, we use the δ-completion technique developed in Section <ref>. Further, to reduce the problem to the tracial case, we need a “change of reference weight” argument and an analysis of approximation properties of completely Dirichlet forms. This will be dealt with in the following lemmas. Finally, in Proposition <ref> we discuss the relation between the derivation we started with and the first-order differential structure of the associated completely Dirichlet form.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation. Throughout this section we endow Å with the δ-topology and $̋ with the projective topology induced by the maps →̋̋̅, ξ↦_inξ for n∈, and we assume that Å and $̋ are complete in these topologies. As discussed in Section <ref>, this can always be achieved by passing to the completions.
Let G be a countable subgroup of , viewed as a discrete group. The vector space C_c(G;Å)≅ C_c(G)⊙Å can be made into a Tomita algebra by the operations
(a∗ b)(g) =∑_h∈ GU_-ha(g-h)b(h),
a^♯(g) =U_-g(a(-g)^♯),
(U_z a)(g) =U_z a(g).
Moreover, the vector space C_c(G;)̋ becomes a normal Tomita bimodule over C_c(G;Å) with the operations
(aξ)(g) =∑_h∈ GU_-h a(g-h)ξ(h)
(ξ b)(g) =∑_h∈ G_-hξ(g-h)b(h)
(ξ)(g) =_-gξ(-g)
(_z ξ)(g) =_z ξ(g).
Furthermore, 1_C_c(G)⊙δ: C_c(G;Å)→ C_c(G;)̋ is a closable symmetric derivation, whose closure we denote by 1⊗δ̅.
We write Å̃ for the (1⊙δ)-completion of C_c(G;Å), ̋̃ for C_c(G;)̋^a and δ̃ for the restriction of 1⊗δ̅ to Å̃. By Lemma <ref> the map δ̃ is a (closable) symmetric derivation from Å̃ to ̋̃.
If x∈ L(G)⊗ 1_̋̅ and a∈Å̃, then x a, a x∈Å̃ and δ̃(x a)=xδ̃(a), δ̃(a x)=δ̃(a)x.
Let x=y⊗ 1 with y∈ L(G), let (y_i) be a bounded net in [G] such that y_i→ y in the strong^∗ topology and let x_i=y_i⊗ 1. Clearly, x_i→ x in the Mackey topology.
If a∈ C_c(G;Å), then x_i a∈ C_c(G;Å) and Δ^n(x_i a)=x_i Δ^n a, (1⊙δ)(x_i a)=x_i(1⊙δ)(a), π_l(Δ^n (x_i a))=x_iπ_l(Δ^n a). It follows that x_i a→ xa in ℓ^2(G;), the net (x_i a) is Cauchy in the δ̃-topology and (1⊙δ)(x_i a)→ x(1⊙δ)(a). Thus xa∈Å̃ and δ̃(xa)=xδ̃(a).
A similar argument shows that if (a_j) is a Cauchy net in C_c(G;Å) with respect to 𝒯_δ̃ and a_j→ a in ℓ^2(G;)̋, then (x a_j) is Cauchy with respect to 𝒯_δ̃ and xa_j→ xa in ℓ^2(G;)̋. Hence if a∈Å̃, then xa∈Å̃ and δ̃(xa)=xδ̃(a). The statement for ax can be proven analogously.
For the next lemma recall that Å_ϕ=Λ_ϕ(_ϕ∩_ϕ^∗) is the full left Hilbert algebra induced by the weight ϕ, the cone C_ϕ is the closure of {Δ_ϕ^1/4a| a∈Å_ϕ, 0≤π_l(a)≤ 1}, and M_ϕ denotes the centralizer of ϕ.
Let M be a von Neumann algebra, ϕ a normal semi-finite faithful weight on M and x∈ M_ϕ be positive and invertible. Let ψ=ϕ(x^1/2· x^1/2).
If is a modular (completely) Dirichlet form on L^2(M) with respect to Å_ϕ, x()⊂() and (xa,b)=(a,xb) for all a,b∈(), then is also a modular (completely) Dirichlet from with respect to Å_ψ.
Since x is invertible, the weight ψ is faithful and J_ψ=J_ϕ, and since x commutes with (Δ_ϕ^it), we have Δ_ψ^it=x^itΔ_ϕ^it(·)x^-it=Δ_ϕ^it(x^it· x^-it).
Let A be the positive self-adjoint operator associated with and T_t=e^-tA. The commutation relation x()⊂() and (xa,b)=(a,xb) for a,b∈() implies that x commutes strongly with A^1/2. Hence T_t(xa)=x T_t(a) for all a∈ H and t≥ 0. Since T_t commutes with J_ϕ, we also have T_t(ax)=T_t(a)x for a∈ H, t≥ 0. In particular, (T_t) and (Δ_ψ^is) commute.
Moreover, C_ψ=x^1/4C_ϕ x^1/4. Indeed, a direct computation shows that _ψ=_ϕ and Λ_ψ(y)=Λ_ϕ(y)x^1/2 for y∈_ϕ. Hence, if y∈_ϕ with 0≤ y≤ 1, then
Δ_ψ^1/4Λ_ψ(y)=x^1/4Δ_ϕ^1/4Δ_ϕ(y)x^1/4∈ x^1/4C_ϕ x^1/4.
The converse inclusion follows by swapping the roles of ϕ and ψ.
Therefore, if a∈ C_ψ, then
T_t(a)=x^1/4T_t(x^-1/4a x^-1/4)x^1/4∈ C_ψ.
Thus is a Dirichlet form with respect to Å_ψ by <cit.>. The result for completely Dirichlet forms follows easily by applying the same argument to the forms ^(n) on L^2(M⊗ M_n()).
Let M be a von Neumann algebra and ϕ a normal semi-finite faithful weight on M. Let (M_n) be an increasing sequence of von Neumann subalgebras with weak^∗ dense union and assume that M_n is the range of a ϕ-preserving conditional expectation E_n on M. Let H_n denote the closure of Λ_ϕ(_ϕ∩ M_n) and let P_n denote the orthogonal projection from H to H_n.
If is a closed densely defined quadratic form on H such that for every n∈ the quadratic form |_H_n is a Dirichlet form with respect to Λ_ϕ(_ϕ∩_ϕ^∗∩ M_n) and ∘ P_n≤, then is a Dirichlet form with respect to Å_ϕ.
Let (T_t) be the strongly continuous semigroup associated with . Since ∘ P_n≤, we have T_t(H_n)⊂ H_n by Ouhabaz' theorem <cit.>. Thus T_t commutes with P_n, and it is easy to see that (T_t P_n) is the semigroup associated with |_H_n, viewed as semigroup on H. In particular, T_t P_n J_ϕ=J_ϕ T_t P_n. In the limit we obtain T_t J_ϕ=J_ϕ T_t.
It remains to show that T_t(C_ϕ)⊂ C_ϕ for all t≥ 0. A direct computation shows that E_n and P_n are related by P_nΛ_ϕ(x)=Λ_ϕ(E_n(x)). Moreover, E_n is GNS-symmetric with respect to ϕ, which implies that P_n commutes with (Δ_ϕ^it). Thus P_n(C_ϕ) is the closure of {Δ_ϕ^1/4Λ_ϕ(x)| x∈_ϕ∩_ϕ^∗∩ M_n, 0≤ x≤ 1}. In particular, P_n(C_ϕ)⊂ C_ϕ.
Since _n is a Dirichlet form with respect to Λ_ϕ(_ϕ∩_ϕ^∗∩ M_n) and (T_t P_n) is the associated semigroup, we have T_t P_n(C_ϕ)⊂ P_n (C_ϕ). Moreover, since ⋃_n M_n is weak^∗ dense in M, we have P_n→ 1 strongly by Kaplansky's density theorem. Therefore, T_t(C_ϕ)⊂ C_ϕ.
To prove that the quadratic form associated with a closable symmetric derivation is a completely Dirichlet form, we will reduce the problem to the tracially symmetric case by means of Haagerup's reduction method. We only recall the necessary definitions here and refer to <cit.> for proofs in the case of states and to <cit.> for the extension to weights.
Let M=π_l(Å)^'', let ϕ be the weight induced by the full left Hilbert algebra Å^'' on M, let G=⋃_n∈2^-n, let M̃=M⋊_σ^ϕG=π_l(Å̃)^'' and let ϕ̃ be the dual weight of ϕ on M̃. Let (a_n) be a sequence of self-adjoint elements of L(G)⊗ 1⊂𝒵(M̃_ϕ̃), ϕ_n=ϕ̃ e^-a_n, M_n=M̃_ϕ_n and τ_n=ϕ_n|_M_n. Here N_ψ denotes the centralizer of the weight ψ on N and 𝒵(N) is the center of the von Neumann algebra N.
By <cit.> the sequence (a_n) can be chosen such that
* M_n is semi-finite with normal semi-finite faithful trace τ_n,
* for each n∈ there exists a conditional expectation E_n from M onto M_n such that ϕ̃∘ E_n=ϕ̃ and σ^ϕ̃_t∘ E_n=E_n∘σ^ϕ̃_t for all t∈,
* E_n(x)→ x strongly^∗ for every x∈ M.
In the following we fix a sequence (a_n) with these properties. The concrete construction is irrelevant for our purposes.
Let Å be a Tomita algebra with completion , let $̋ be a normal Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation. The closure of the quadratic form
→ [0,∞], a↦‖δ(a)‖_^̋2 if a∈Å,
∞ otherwise
is a modular completely Dirichlet form with respect to Å^''. If moreover Å is unital, then it is a modular quantum Dirichlet form.
We continue to use the notation from the previous discussion. The derivation δ̃Å̃→̋̃ is a restriction of 1⊗δ̅. Let denote the closure of the quadratic form
ℓ^2(G;)→ [0,∞], a↦δ̃(a)_̋̃^2 if a∈Å̃,
∞ otherwise.
It is clear that (a)=(1⊗δ̅)(a)_ℓ^2(G;̋̅)^2 for a∈(). Furthermore, ()=(1⊗δ̅) and the strongly continuous semigroups (T_t) and (T̃_t) associated with and , respectively, are related by T̃_t=𝕀_ℓ^2(G)⊗ T_t.
The map ι→ℓ^2(G;), a↦_0⊗ a is an isometric embedding such that ι(C_Å^'')= C_Å̃^''∩ι(). Thus, if is a (completely) Dirichlet form with respect to Å̃^'', then is a (completely) Dirichlet form with respect to Å^''.
Since M is in standard form on and M̃ is in standard form on ℓ^2(G;), these spaces can be canonically identified with L^2(M,ϕ) and L^2(M̃,ϕ̃), respectively, and we will tacitly do so in the following. Under these identifications, Å^''=Å_ϕ, Å̃^''=Å_ϕ̃ and Δ_ϕ̃^it=𝕀_ℓ^2(G)⊗Δ_ϕ^it.
Let
𝒜_n={x∈_ϕ̃∩_ϕ̃^∗∩ M_n|Λ_ϕ̃(x e^-a_n/2)∈Å̃}.
Since e^a_n/2∈M̃_ϕ̃, if x∈𝒜_n, then
Λ_ϕ̃(x)=Λ_ϕ̃(x e^-a_n/2)e^a_n/2∈Å̃
by Lemma <ref>. Reversing the roles of e^-a_n/2 and e^a_n/2 we get
𝒜_n={x∈_ϕ̃∩_ϕ̃^∗∩ M_n|Λ_ϕ̃(x)∈Å̃}.
Since Å̃ is a Tomita algebra, it follows easily that 𝒜_n is a ∗-algebra. Define an 𝒜_n-𝒜_n-bimodule structure on ℓ^2(G;̋̅) by
xξ y=Λ_ϕ̃(x)·ξ·Λ_ϕ̃^'(y).
Using that ̋̃ is a Tomita bimodule over Å̃, it is not hard to see that this left and right action are contractive (anti-) ∗-homomorphisms. Moreover, extends to an anti-unitary involution on ℓ^2(G;̋̅) intertwining the left and right action. We still denote this extension by .
Let
∂_n𝒜_n→ L^2(M̃,ϕ̃), ∂_n(x)=δ̃(Λ_ϕ̃(xe^-a_n/2)).
Since e^-a_n/2∈M̃_ϕ̃ and x∈M̃_ϕ_n, we have
∂_n(x^∗) =δ̃(Λ_ϕ̃(x^∗ e^-a_n/2))
=δ̃(Λ_ϕ̃^'( e^-a_n/2x^∗))
=δ̃(J̃Λ_ϕ̃(xe^-a_n/2))
=δ̃(Λ_ϕ̃(xe^-a_n/2))
=∂_n(x).
Moreover, it follows from Lemma <ref> combined with e^-a_n/2∈M̃_ϕ̃ and x,y∈M̃_ϕ_n that
∂_n(xy) =δ̃(Λ_ϕ̃(xye^-a_n/2))
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(x))·Λ_ϕ̃(y e^-a_n/2)
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(xe^-a_n/2))·Λ_ϕ̃(e^a_n/2 y e^-a_n/2)
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(xe^-a_n/2))·Λ_ϕ̃^'(σ^ϕ_n_-i/2(y))
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(xe^-a_n/2))·Λ_ϕ̃^'(y)
=x∂_n(y)+∂_n(x)y.
The operator ∂_n is closable when viewed as operator in L^2(M_n,τ_n) since δ̃ is closable and the map Λ_τ_n(x)↦Λ_ϕ̃(xe^-a_n/2) extends to an isometry ι_n from L^2(M_n,τ_n) to L^2(M̃,ϕ̃).
Since τ_n is a trace, <cit.> implies that the closure Q_n of the quadratic form
L^2(M_n,τ_n)→ [0,∞], a↦∂_n(x)^2 if a=Λ_τ_n(x), x∈𝒜_n,
∞ otherwise
is a completely Dirichlet form.
Let H_n=Λ_ϕ̃(_ϕ̃∩_ϕ̃^∗∩ M_n) and let _n be the closure of the quadratic form
H_n→ [0,∞], a↦δ̃(a)^2 if a∈Å̃,
∞ otherwise.
In other words, _n=Q_n∘ι_n^-1.
Note that ι_n maps {Λ_τ_n(x)| x∈_ϕ̃∩_ϕ̃^∗∩ M_n, 0≤ x≤ 1} onto {Λ_ϕ̃(x e^-a_n/2)| x∈_ϕ̃∩_ϕ̃^∗∩ M_n, 0≤ x≤ 1}. Since ϕ_n is a trace on M_n, the latter set coincides with {Λ_ϕ_n(x)| x∈Å_ϕ_n∩ H_n, 0≤ x≤ 1}. It follows that _n is a completely Dirichlet form with respect to Å_ϕ_n∩ H_n.
Moreover,
_n(Δ_ϕ_n^it a) =_n(e^-i a_n/2 t(Δ_ϕ̃^it a)e^ia_n/2 t)
=e^-ia_n/2 t(_tδ̃(a))e^ia_n/2 t^2
=δ̃(a)^2
=_n(a)
for a∈Å̃. This can easily be extended to the closure so that _n is a modular completely Dirichlet form.
By Lemma <ref> we have e^-a_n/2(_n)⊂(_n) and _n(e^a_n/2a,b)=_n(a,e^-a_n/2b) for a,b∈(_n). Furthermore, e^-a_n/2∈M̃_ϕ̃. Hence _n is also a modular completely Dirichlet form with respect to Å_ϕ̃∩ H_n=Λ_ϕ̃(_ϕ̃∩_ϕ̃^∗∩ M_n) by Lemma <ref>.
Let P_n denote the orthogonal projection from ℓ^2(G;) onto H_n. By definition, |_H_n=_n. To apply Lemma <ref>, we have to check that ∘ P_n≤.
Let (T_t) be the strongly continuous semigroup associated with . As discussed above, (𝕀_ℓ^2(G)⊗ T_t) is the strongly continuous semigroup associated with . The modular group of ϕ_n is given by Δ_ϕ_n^it=e^-ita_n(𝕀_ℓ^2(G)⊗Δ_ϕ^it)(·)e^ita_n. Since (T_t) commutes with (Δ_ϕ^it) and e^a_n∈ L(G)⊗ 1_, the semigroup (𝕀_ℓ^2(G)⊗ T_t) commutes with (Δ_ϕ_n^it).
Since M_n is the centralizer of ϕ_n, the subspace H_n is the fixed-point set of (Δ_ϕ̃_n^it). In particular, (𝕀_ℓ^2(G)⊗ T_t)(H_n)⊂ H_n. From Ouhabaz's theorem <cit.> we deduce ∘ P_n≤.
Now Lemma <ref> shows that is a modular completely Dirichlet form with respect to Å_ϕ̃.
If Å is unital with unit 1_Å, then the left and right action of Å on $̋ are unital since they are non-degenerate by definition. Thus
δ(1_Å)=1_Å·δ(1_Å)+δ(1_Å)· 1_Å-δ(1_Å)=0
and hence (1_Å)=0. Thus T_t(1_Å)=1_Å for all t≥ 0, which implies that  is a quantum Dirichlet form.
In the situation of the previous theorem, we call the closure of this quadratic form the completely Dirichlet form associated with δ.
If Å is not unital, the completely Dirichlet form associated with a derivation is not necessarily a quantum Dirichlet form, even in the commutative case. For example, this is the case for the standard Dirichlet energy (f)=∫_Ω∇ f^2 with domain H^1_0(Ω)∩ L^∞(Ω) if Ω is a bounded Lipschitz domain.
If Å is a unital Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation with associated completely Dirichlet form , then the first-order differential calculus associated with  is a corestriction of (^̋a,δ̅|_Å_). In particular,
δ̅(ab)=aδ̅(b)+δ̅(a)b
for a,b∈Å_.
Since Å is unital, is a modular quantum Dirichlet form by <ref>. Let (_̋,δ_) be a first-order differential calculus associated with . By definition, Å⊂Å_⊂Å^'' and the graph norm of δ̅ coincides with the graph norm of δ_ on Å_. Thus π_l(Å)^'' is strong^∗ dense in π_l(Å_)^'' and Å is a core for δ_. It follows that the linear hull of {δ_(a)b| a,b∈Å} is dense in _̋.
Let
Ulin{δ_(a)b| a,b∈Å}→,̋ U(δ_(a)b)=δ(a)b.
By <cit.> the map U is well-defined and extends to an isometric Å_-bimodule map from ̋̅_ to ̋̅ such that U(δ_(a))=δ(a) for a∈Å.
If a∈Å_, let (a_n) be a sequence in Å such that a_n-a_δ̅→ 0. As discussed above, this implies δ_(a_n)→δ_(a). Hence U(δ_(a))=δ̅(a). If a,b∈Å_, then
δ̅(ab) =U(δ_(ab))
=U(aδ_(b)+δ_(a)b)
=a U(δ_(b))+U(δ_(a))b
=aδ̅(b)+δ̅(a)b.
Moreover, δ J=δ can be extended by continuity to δ̅J=δ̅, and
→̋̅, z↦δ̅(U_z a)
is an entire continuation of t↦_t δ̅(a) for a∈Å_ by <cit.>.
Thus δ̅(Å_)⊂^̋a and δ̅ is a symmetric derivation on Å_. The statement now follows from the uniqueness of the first-order differential calculus associated with a modular completely Dirichlet form <cit.>.
The previous result holds more generally with the same proof if Å is not necessarily unital, but the completely Dirichlet form associated with δ is still a quantum Dirichlet form.
In the light of Lemma <ref>, one has Å^δ⊂Å_ in the situation of the previous proposition. It is an interesting question if one always has equality or if different derivations with δ-complete domains can have the same associated completely Dirichlet form.
§ EXAMPLES
In this section we present several classes of derivations that give rise to modular completely Dirichlet forms according to Theorem <ref>. The first three classes of examples concern inner derivations, before we treat derivations arising in non-tracial free probability in Example <ref> and derivations induced by cocycles on locally compact groups in Example <ref>.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and let ξ∈$̋ be a bounded vector. Assume that there exists ω∈ such that _t ξ=e^iω tξ for all t∈.
The map
δÅ→⊕̋,̋ a↦ i(ξ a-aξ,(ξ)a-a(ξ))
is a bounded derivation, and it is symmetric when ⊕̋$̋ is endowed with the involution (η,ζ)↦ (ζ,η) and the complex one-parameter group (e^-iω z_z,e^iω z_z).
It follows that the closure of the quadratic form
Å→ [0,∞), a↦‖ξ a-aξ‖_^̋2+‖(ξ)a-a(ξ)‖_^̋2
is a (bounded) modular completely Dirichlet form with respect to Å^''. In the case =̋Å, this was first proven by Cipriani <cit.>. See also <cit.> for arbitrary Tomita bimodules $̋ over Å.
The next example is a (partial) extension of the previous example allowing for vectors implementing the inner derivation that are not necessarily bounded.
Let Å be a Tomita algebra with Hilbert completion . For ξ∈(Δ^1/2) the operator
π_l^0(ξ)Å→, a↦ξ a
is closable since π_l^0(ξ^♯)⊂π_l^0(ξ)^∗. Likewise, if ξ∈(Δ^-1/2), then
π_r^0(ξ)Å→, a↦ aξ
is closable with π_r^0(ξ^♭)⊂π_r^0(ξ)^∗. Hence if ξ∈(Δ^-1/2)∩(Δ^1/2), then π_l^0(ξ)-π_r^0(ξ) is closable with π_l^0(ξ^♯)-π_r^0(ξ^♭)⊂ (π_l^0(ξ)-π_r^0(ξ))^∗.
Now assume that there exists ω∈ such that Δ^itξ=e^iω tξ for all t∈. This implies in particular ξ∈(Δ^-1/2)∩(Δ^1/2).
Similar to the last example, one can turn Å⊕Å into a Tomita bimodule over Å if one equips it with the usual bimodule structure, the involution (η,ζ)↦ (Jζ,Jη) and the complex one-parameter group (e^-iω zU_z,e^iω zU_z).Then the map
δÅ→Å⊕Å, a↦ i(ξ a-aξ,(Jξ)a-a(Jξ))
is a closable symmetric derivation.
Thus the closure of the quadratic form
Å→ [0,∞), a↦‖ξ a-aξ‖_^2+‖(Jξ)a-a(Jξ)‖_^2
is a modular completely Dirichlet form with respect to Å^''. This result has first been obtained by Cipriani and Zegarlinski <cit.>.
The previous examples require eigenvectors of the modular group to construct a symmetric derivation, which may be hard to find. In the following examples we show that in certain situations one can start with an arbitrary element if one “averages” the action of the modular group to ensure modularity.
Let M be a von Neumann algebra with separable predual. A normal semi-finite faithful weight ϕ on M is called integrable <cit.> if
_ϕ={x∈ M: ∫_σ^ϕ_t(x^∗ x) dt exists in the σ-strong topology}
is weak^∗ dense in M.
If ϕ is integrable, the set
Å={Λ_ϕ(x)| x∈ M analytic for σ^ϕ, σ^ϕ_z(x)∈_ϕ∩_ϕ^∗∩_ϕ∩_ϕ^∗ for all z∈}
is a Tomita subalgebra of (Å_ϕ)_0 with Hilbert completion L^2(M) and π_l(Å)^''=M, as can be seen from <cit.> together with a standard mollifying argument.
Let (V_t) be the translation group on L^2(), that is, V_t f(s)=f(s+t), and let L^2()^a be the set of all entire analytic elements for (V_t). Endow L^2()^a⊙Å with the left and right action of Å given by a(f⊗ b)c=f⊗ abc, the complex one-parameter group (V_z⊙ U_z)_z∈ and the involution f⊗ a↦f̅⊗ Ja. It can be checked that this makes L^2()^a⊙Å into a normal Tomita bimodule, which we denote by $̋.
Let a∈(Δ_ϕ^1/2)∩(Δ_ϕ^-1/2) with Ja=a and define
δÅ→ L^2(;L^2(M)), δ(b)(s)=(U_-sa)b-b(U_-sa).
We have
δ(U_t b)(s) =(U_-sa)(U_t b)-(U_t b)(U_-sa)
=U_t((U_-(s+t)a)b-b(U_-(s+t)a))
=U_t δ(b)(s+t).
Thus δ∘ U_t=(U_t⊗ V_t)∘δ. In particular, δ maps into ^̋a.
It is not hard to check that δ: Å→^̋a is a symmetric derivation. To show closability, first note that for every fixed s∈ the map b↦δ(b)(s) is closable as seen in the previous example. If b_n→ 0 and δ(b_n)→ξ, then there exists a subsequence such that δ(b_n_k)(s)→ξ(s) for a.e. s∈. Closability of the map b↦δ(b)(s) implies ξ(s)=0 for a.e. s∈, which proves the closability of δ.
Thus the closure of the quadratic form
Å→ [0,∞), b↦∫_‖(U_-sa)b-b(U_-sa)‖^2 ds
is a modular completely Dirichlet form.
If we drop the assumption Ja=a, a similar argument shows that the closure of the quadratic form
Å→ [0,∞), b↦∫_(‖(U_-sa)b-b(U_-sa)‖^2+‖(U_-sJa)b+b(U_-sJa)‖^2) ds
is a modular completely Dirichlet form with respect to Å^''.
A similar construction is possible if one starts with a weight with periodic modular group instead of an integrable weight and integrates over a period of the modular group.
The following class of examples of derivations was introduced by Nelson <cit.> in the context of non-tracial free probability.
Let M be a von Neumann algebra, ϕ a normal faithful state on M and B⊂ M a ∗-subalgebra. Let ∂ B→ M⊗ M be a linear map such that
∂(xy)=(x⊗ 1)·∂(y)+∂(x)· (1⊗ y)
for x,y∈ B. Note that Nelson works with M⊗ M^op instead, but under the identification x⊗ y↦ x⊗ y^op, the M-bimodules M⊗ M and M⊗ M^op (with the bimodule structure used in <cit.>) are isomorphic.
Let ω∈ and write M_∞ for the set of entire analytic elements for σ^ϕ. Nelson <cit.> calls the map ∂ an e^ω-modular derivation if B⊂ M_∞, B is invariant under σ^ϕ_z for all z∈, ∂(B)⊂ M_∞⊙ M_∞ and
∂(σ^ϕ_z(x))=e^iω z(σ^ϕ_z⊗σ^ϕ_z)(∂(x))
for all x∈ B and z∈.
One example given by Nelson is the free difference quotient from free probability (see <cit.> in the non-tracial case). Given a ∗-subalgebra B of M and an element a∈ M that is algebraically free from B (and a^∗ is algebraically free from a if a≠ a^∗), let
∂_a: B[a]→ B[a]⊙ B[a], ∂_a(a)=1⊗ 1, ∂_a|_B=0
(and ∂_a(a^∗)=0 if a^∗≠ a). If a is an eigenvector of Δ_ϕ with eigenvalue e^ω, then ∂_a is an e^ω-modular derivation.
Let us see how an e^ω derivation gives rise to a symmetric derivation in our sense. For x,y∈ M let (x⊗ y)^†=y^∗⊗ x^∗. The conjugate derivation of ∂ is the map
∂̂ B→ M_∞⊙ M_∞, ∂̂(x)=∂(x^∗)^†.
Let Å=Λ_ϕ(B). Since B consists of the analytic elements for σ^ϕ and is invariant under σ^ϕ_z for z∈, the set Å is a Tomita subalgebra of (Å_ϕ)_0=Λ_ϕ(M_∞).
Let =̋(Λ_ϕ(M_∞)⊙Λ_ϕ(M_∞))^⊕ 2 with left and right action of Å given by
a(ξ_1⊗η_1,ξ_2⊗η_2)b=(aξ_1 ⊗η_1 b,aξ_2 ⊗η_2 b),
involution given by (ξ_1⊗η_1,ξ_2⊗η_2)↦ (Jη_2⊗ Jξ_2,Jη_1⊗ Jξ_1), and complex one-parameter group (_z)=(e^iω zΔ_ϕ⊗ϕ^iz,e^-iω zΔ_ϕ⊗ϕ^iz). One can check that this makes $̋ into a normal Tomita bimodule over Å.
Let
δÅ→,̋ δ(Λ_ϕ(x))=(Λ_ϕ⊗ϕ(∂(x)),Λ_ϕ⊗ϕ(∂^†(x))).
The product rule for ∂ and ∂̂ translates to the product rule for δ, the e^ω-modularity of ∂ ensures δ∘Δ_ϕ^iz=_z∘δ, and the definitions of ∂̂ and  are tailored to guarantee δ∘ J_ϕ=∘δ.
All of these properties follow by routine calculations; let us just show the product rule (for the first component of) δ as an illustration. Let δ_1(Λ_ϕ(x))=Λ_ϕ⊗ϕ(∂(x)). By the product rule for ∂ we have
δ_1(Λ_ϕ(xy)) =Λ_ϕ⊗ϕ((x⊗ 1)∂(y)+∂(x)(1⊗ y))
=Λ_ϕ⊗ϕ((x⊗ 1)Λ_ϕ⊗ϕ(∂(y))+∂(x))(1⊗σ^ϕ_-i/2(y))
=(π_l(Λ_ϕ(x))⊗ 1)δ_1(Λ_ϕ(y))+(1⊗π_r(Λ_ϕ(y)))δ_1(Λ_ϕ(x)).
Thus, if δ is closable, the closure of the associated quadratic form is a completely Dirichlet form with respect to Å_ϕ on the GNS Hilbert space L^2(M,ϕ).
For comparison, Nelson showed <cit.> that one gets a completely Dirichlet form on the GNS Hilbert space L^2(M_ϕ,ϕ) of the centralizer M_ϕ of ϕ, which is of course a tracial von Neumann algebra.
Our methods allow us to extend this result to the “fully” non-tracial setting in that we obtain a modular completely Dirichlet form on the GNS Hilbert space of M and not just of the centralizer. Note however that Nelson's definition of the map δ between L^2 spaces seems slightly different, owing to the use of M⊗ M^op instead of M⊗ M.
The last example concerns group von Neumann algebras. The case of discrete groups was treated in <cit.>, but to cover general locally compact groups, possibly non-unimodular, one needs the theory for non-tracial reference weights as developed here.
Let G be a locally compact group with left Haar measure μ and modular function Δ_G. As discussed in <cit.>, the space C_c(G) of compactly supported continuous functions on G with the L^2 inner product, the convolution product, the involution f^♯(g)=Δ_G(g)^-1f(g^-1) and the complex one-parameter group U_z f(g)=Δ_G(g)^izf(g) forms a Tomita algebra. We write λ and ρ for the associated left and right actions of C_c(G) on L^2(G) and Å_G for the associated full left Hilbert algebra.
Let π be a strongly continuous orthogonal representation of G on the real Hilbert space H. A continuous map b: G→ H is called a 1-cocycle if b(gh)=b(g)+π(g)b(h) for all g,h∈ G. We extend π to a unitary representation of G on the complexification H^ of H and write ξ↦ξ̅ for the anti-unitary involution induced by H⊂ H^.
On C_c(G;H^) define a left and right action of C_c(G) by
(f∗ξ)(g) =∫_G f(h)π(h)ξ(h^-1g) dμ(h)
(ξ∗ f)(g) =∫_G f(h^-1g)ξ(h) dμ(h),
an anti-unitary involution by (ξ)(g)=-Δ_G(g)^-1/2π(g)ξ(g^-1) and a complex one-parameter group by _z ξ(g)=Δ_G(g)^izξ(g). One can check that C_c(G;H^) with these operations is a Tomita bimodule over C_c(G).
Let
δ C_c(G)→ C_c(G;H^), δ(f)(g)=f(g)b(g).
Using the cocycle property of b, one gets
δ(f_1∗ f_2)(g) =∫_G f_1(h)f_2(h^-1g) dμ(h) b(g)
=∫_G f_1(h)f_2(h^-1g)(π(h)b(h^-1g)+b(h)) dμ(h)
=(f_1∗δ(f_2))(g)+(δ(f_1)∗ f_2)(g).
It is readily verified that δ also satisfies δ∘ J=∘δ and δ∘ U_z=_z∘δ for all z∈. Hence δ is a symmetric derivation. As a multiplication operator, it is clearly closable.
Therefore,
L^2(G,μ)→ [0,∞], (f)=∫_G |f(g)|^2‖ b(g)‖^2 dμ(g)
is a modular completely Dirichlet form with respect to Å_G. The associated quantum dynamical semigroup on L(G) is given by
P_t(∫_G x̂(g)λ(g) dμ(g))=∫_G e^-t‖ b(g)‖^2x̂(g)λ(g) dμ(g).
In this case, complete positivity of P_t also follows directly from Schönberg's theorem as g↦‖ b(g)‖^2 is a conditionally negative definite function on G.
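As a concrete illustration of this last construction (a standard example included here for orientation; it is not taken from the surrounding text), one may take G=ℤ^n with counting measure, so that Δ_G≡ 1, together with the trivial representation π on H=ℝ^n and the cocycle b(g)=g, which satisfies the cocycle identity trivially. The form then reads (f)=∑_g∈ℤ^n|f(g)|^2|g|^2 and the semigroup acts as P_t(λ(g))=e^-t|g|^2λ(g). Under the Fourier identification L(ℤ^n)≅ L^∞(𝕋^n), which sends λ(g) to the character θ↦ e^ig·θ, this P_t is precisely the heat semigroup on the n-torus, so the construction recovers a familiar commutative example.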
|
http://arxiv.org/abs/2307.06025v1 | 20230712091324 | Magnetohydrodynamics simulation of magnetic flux rope formation in a quadrupolar magnetic field configuration | ["Sanjay Kumar", "Avijeet Prasad", "Sushree S. Nayak", "Satyam Agarwal", "R. Bhattacharyya"] | astro-ph.SR | ["astro-ph.SR", "physics.plasm-ph", "physics.space-ph"] |
Magnetohydrodynamics simulation of magnetic flux rope formation in a quadrupolar magnetic field configuration
^1 Department of Physics, Patna University, Patna-80005, India.
^2 Rosseland Centre for Solar Physics, University of Oslo, Postboks 1029 Blindern, 0315 Oslo, Norway.
^3 Institute of Theoretical Astrophysics, University of Oslo, Postboks 1029 Blindern, 0315 Oslo, Norway.
^4Center for Space Plasma & Aeronomic Research,
The University of Alabama in Huntsville,
Huntsville, Alabama 35899, USA.
^5Udaipur Solar Observatory, Physical Research Laboratory, Dewali, Badi Road, Udaipur-313001, India.
^6Discipline of Physics, Indian Institute of Technology, Gandhinagar 382355, India.
[email protected]
03 November 2022
Magnetic flux ropes (MFRs) play an important role in highly energetic events like solar flares and coronal mass ejections in the solar atmosphere. Importantly, solar observations suggest an association of some flaring events with quadrupolar magnetic configurations. However, the formation and subsequent evolution of MFRs in such magnetic configurations are still not fully understood.
In this paper, we present idealized magnetohydrodynamics (MHD)
simulations of MFR formation in a quadrupolar magnetic configuration.
A suitable initial magnetic field having a quadrupolar configuration is
constructed by modifying a three-dimensional (3D) linear force-free magnetic
field. The initial magnetic field contains neutral lines, which consist of X-type null points. The simulated dynamics initially demonstrate the oppositely
directed magnetic field lines located across the polarity inversion lines
(PILs) moving towards each other, resulting in magnetic reconnections.
Due to these reconnections, four highly twisted MFRs form over
the PILs. With time, the foot points of the MFRs move towards the X-type
neutral lines and reconnect, generating complex magnetic structures around the neutral lines, thus making the MFR topology in the quadrupolar configuration more complex than that of MFRs formed in bipolar loop systems.
Further evolution reveals the non-uniform rise of the MFRs.
Importantly, the simulations indicate that the pre-existing X-type null points in magnetic configurations can be crucial to the evolution of the MFRs and may lead to the observed brightenings during the onset of some flaring events in the quadrupolar configurations.
Keywords: Magnetohydrodynamics, Magnetic reconnections, Magnetic Flux ropes, EULAG-MHD.
§ INTRODUCTION
A magnetic flux rope (MFR) is a bundle of twisted magnetic field lines (MFLs) which wind around a given axis <cit.>. MFRs play a vital role in the dynamical
evolution of diverse plasma systems, such as the laboratory, space, and astrophysical plasmas <cit.>. Particularly, in the solar context, MFRs are believed to be the magnetic structures that can lead to eruptive events in the solar atmosphere, such as solar flares, coronal mass ejections (CMEs), and prominence eruptions <cit.>. The twisted field lines of MFRs represent regions of strong currents and facilitate the storage of magnetic energy <cit.>. These eruptive events release the stored magnetic energy in the form of radiation, mass flow, and accelerated charged particles. Thus, it is crucial to investigate the formation and subsequent evolution
of MFRs to gain a comprehensive understanding of solar eruptions.
There are broadly two mechanisms through which an MFR can appear in the solar atmosphere. In the first mechanism, an MFR pre-exists below the solar surface and, due to the magnetic buoyancy, can become unstable and emerge out of the solar surface into the atmosphere
<cit.>. In contrast, the second mechanism advocates the formation of MFR in the solar atmosphere itself by magnetic reconnections (MRs) inside sheared magnetic arcades <cit.>. These reconnection-based studies of MFR formation often rely on the force-free field assumption (i.e., vanishing Lorentz force) of the coronal field.
The initiation of MRs and subsequent MFR formation are governed by prescribed (shearing and/or converging) flows at the bottom boundary. Recently, an alternative approach based on non-force-free fields is also developed that supports the non-zero Lorentz force <cit.>. This approach demonstrates the spontaneous onset of MRs in arcades without any prescribed boundary flow, accounting for the development and subsequent evolution of MFRs.
It is noteworthy that the MFR, once formed, can exist as a stable structure and manifest itself in the form of filament/prominence <cit.>. With time, they can become unstable by ideal magnetohydrodynamics (MHD) instabilities such as the kink instability or the torus instability <cit.> — leading to eruptions. Alternatively, the MFR can remain in the dynamic phase due to repeated MRs, which further add a significant amount of magnetic fluxes in the MFR — causing a rise and expansion of the MFR <cit.>. Such rapidly rising MFRs can play a crucial role in the onset of a solar flare and the subsequent evolution of the eruptions
<cit.>. The numerical studies on MFR formation through MRs so far have been predominantly carried out for the bipolar flux systems <cit.>, while far more complex topologies are observed in the extreme-ultraviolet (EUV) observations of the solar corona. Particularly, some observational studies have revealed flaring regions to be associated with quadrupolar magnetic configurations <cit.>.
Although quadrupolar configurations have been employed in previous works <cit.> to assess the magnetic topology and energy storage in the coronal field, not many studies have been conducted to explore the formation and evolution of MFRs in such configurations.
Using a flux emergence-based model, <cit.> has shown the formation of the complex quadrupolar magnetic structures in the solar corona, whereas <cit.> used a magnetofrictional model for automated detection of magnetic flux ropes and studying MFR formation in the solar corona.
Another recent study by <cit.> examined the evolution of MFRs in quadrupolar configuration and the role of MFR evolution in solar flares utilizing pre-existing twisted magnetic structures emerging from below the photosphere.
In this work, we explore MFR evolution in terms of their formation from quadrupolar magnetic loops and continuous ascent, mediated via magnetic reconnections. We simulate the viscous relaxation of an
incompressible, thermally inactive, and infinitely conducting plasma from an initial non-equilibrium state to ensure the spontaneous onset of MRs in the MHD simulation <cit.>.
The spontaneity of MRs stems from Parker’s magnetostatic theorem <cit.>,
according to which a plasma with infinite electrical
conductivity and complex magnetic topology can not achieve an equilibrium state with a continuous magnetic field. The viscous relaxation is in
harmony with the magnetostatic theorem — leading to the natural generation of MRs <cit.>. The initial non-equilibrium state for the relaxation is achieved through the initial non-force-free field, which provides the Lorentz force that initiates dynamics without needing a prescribed flow. Relevantly, a recent study by <cit.> has suggested the similarity in the magnetohydrodynamics of a coronal transient initiated with force-free and non-force-free extrapolated magnetic fields. Furthermore, in the context of the relaxation theory, the minimum-dissipation-rate (MDR) principle results in non-force-free fields as a relaxed state for externally driven systems <cit.>. This makes the solar corona, which is driven by the photospheric boundary flows, a suitable candidate for the application of the MDR principle <cit.>. The physical scales can become under-resolved during the relaxation constrained by the flux-freezing condition. Our numerical scheme intermittently and adaptively regularises these under-resolved scales by mimicking magnetic reconnections. The simulated evolution documents the onset of magnetic reconnections, which lead to the formation of the MFRs in the quadrupolar magnetic configuration. The MFRs are found to be morphologically more complex than the ones generated in the bipolar magnetic loops. Subsequently, the reconnections continue in time and add more magnetic flux to the MFRs, causing the rise of the MFRs.
The paper is organized as follows. The initial magnetic field is described in Section 2. The governing MHD equations and the numerical model are discussed in Section 3. The simulation results are presented in Section 4. Section 5 summarises these results and discusses the key findings.
§ INITIAL MAGNETIC FIELD
To achieve a suitable initial magnetic field with a quadrupolar configuration, we revise the general construction procedure utilized by <cit.>, which aimed to study MFR formation in bipolar magnetic loops. The Cartesian components of the chosen initial magnetic field B are
B_x = 0.5[ α_0 sin(x) cos(y) exp(-k_0 z/s_0) - k_0 cos(x) sin(y) exp(-k_0 z/s_0) ],
B_y = -0.5[ α_0 cos(x) sin(y) exp(-k_0 z/s_0) + k_0 sin(x) cos(y) exp(-k_0 z/s_0) ],
B_z = s_0 sin(x) sin(y) exp(-k_0 z/s_0),
where α_0, k_0 and s_0 are constants. The constants α_0 and k_0 are related as k_0=√(2-α_0^2). The field B is specified in the positive half-space (z ≥ 0) of a Cartesian domain Γ, which is periodic along the lateral directions (ranging from 0 to 2π in x and y) and open along the vertical direction (ranging from 0 to 6π in z). The initial magnetic field B supports a non-zero Lorentz force for s_0 ≠ 1, which contributes to the viscous relaxation. For the simulations, we set s_0=6 to have an optimal Lorentz force for generating efficient dynamics with a minimal computational cost. Noticeably, for s_0=1, B satisfies the linear force-free equation ∇×B=α_0 B, where α_0 represents the magnetic circulation per unit flux, and is related to the twist of the corresponding MFLs. We set α_0=0.1 to have sufficiently twisted initial MFLs and, hence, complex magnetic topology (as demanded by the magnetostatic theorem). Moreover, the chosen α_0 corresponds to a high value of k_0 that leads to a steeper exponential decay of the initial Lorentz force (see Figure <ref>).
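As an illustration of this setup, a minimal Python sketch that evaluates the field components above on a grid and checks the solenoidality of B numerically is given below. The grid resolution and the finite-difference divergence check are illustrative choices and are not part of the EULAG-MHD setup used for the simulations.

import numpy as np

# Parameters quoted in the text: alpha_0 = 0.1, s_0 = 6, k_0 = sqrt(2 - alpha_0^2).
alpha0, s0 = 0.1, 6.0
k0 = np.sqrt(2.0 - alpha0**2)

# Illustrative grid covering [0, 2*pi]^2 x [0, 6*pi] (coarser than the production runs).
x = np.linspace(0.0, 2.0 * np.pi, 64)
y = np.linspace(0.0, 2.0 * np.pi, 64)
z = np.linspace(0.0, 6.0 * np.pi, 192)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

decay = np.exp(-k0 * Z / s0)
Bx = 0.5 * (alpha0 * np.sin(X) * np.cos(Y) - k0 * np.cos(X) * np.sin(Y)) * decay
By = -0.5 * (alpha0 * np.cos(X) * np.sin(Y) + k0 * np.sin(X) * np.cos(Y)) * decay
Bz = s0 * np.sin(X) * np.sin(Y) * decay

# The analytic field is divergence-free; on a finite grid the residual is truncation error.
divB = (np.gradient(Bx, x, axis=0)
        + np.gradient(By, y, axis=1)
        + np.gradient(Bz, z, axis=2))
print("max |div B| on this grid:", np.abs(divB).max())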
Relevantly, the solar corona is considered to be in the force-free equilibrium state under the low plasma-β approximation <cit.>. On the other hand, the photosphere with plasma-β∼ 1 is expected to be non-force-free due to the convective driving <cit.>.
In Figure <ref>(a), we have shown the
direct volume renderings of the Lorentz force density, which shows the decay of the Lorentz force with height. In this and the subsequent figures, the arrows in the colors red, green, and blue represent the directions x, y, and z, respectively.
To confirm this further, we plot the horizontally averaged Lorentz force density variation with height in Figure <ref>(b). The averaged Lorentz force density falls off sharply with height, such that the upper half of the domain is nearly in a force-free state.
Figure <ref> shows the initial magnetic field configuration. In the figure, the lower boundary is superimposed with the B_z values.
The multiple polarities in the initial field are shown in Figure <ref>(b), where P1 and P2 mark the positive polarities, while N1 and N2 denote the negative polarities.
The polarity inversion lines (PILs) corresponding to the different opposite polarities (P1, N1), (P1, N2), (P2, N1), and (P2, N2) are shown by white lines in the figure. Noticeably, the opposite polarities satisfy mirror symmetry across the PILs. Because of the mirror symmetry, the initial magnetic field supports an X-type neutral line (made of magnetic neutral points with X-type field line geometry) located at (x, y)=(π, π) along z (shown in pink in panel (c)).
As a result, when seen from the top, the initial field line geometry is quadrupolar (panel (d)).
In addition, since the computational domain is periodic along x and y, the initial field also has neutral lines at the domain's boundaries (not shown). The locations of these lines are (x, y)=(0, 0), (0, π), (0, 2π), (π, 0), (2π, 0), (π, 2π), (2π, π), and (2π, 2π) along z axis.
The initial magnetic field with the two positive and negative polarities resembles the observed quadrupolar magnetic configurations at the solar surface <cit.>. However, in the absence of mirror symmetry and periodicity, the observed configurations exhibit a more complex magnetic field line topology.
§ GOVERNING MHD EQUATIONS AND NUMERICAL MODEL
With a focus on exploring the changes in field line topology, here we consider the plasma to be incompressible, thermally inactive, and perfectly conducting <cit.>. The set of dimensionless MHD equations is then given as:
∂v/∂ t + (v·∇) v = -∇ p + (∇×B)×B + (τ_a/τ_ν) ∇^2v,
∇·v = 0,
∂B/∂ t = ∇×(v×B),
∇·B = 0,
in usual notations. The magnetic field strength B and the plasma velocity v are normalised by the average magnetic field strength (B_0) and the Alfvén speed
(v_a ≡ B_0/√(4πρ_0) with ρ_0 representing the constant mass density), respectively. The plasma pressure p, the spatial-scale L, and the temporal scale t are normalised by ρv_a^2, the size of the system (L_0), and the Alfvénic transit time (τ_a=L_0/v_a), respectively.
In Equation (<ref>), τ_ν represents viscous diffusion time scale
(τ_ν= L_0^2/ν), with ν being the kinematic viscosity.
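For orientation, the short sketch below evaluates these normalisation scales for a set of placeholder dimensional values (chosen purely for illustration and not taken from the paper), using the Gaussian-units expression v_a = B_0/√(4πρ_0) quoted above.

import numpy as np

# Placeholder dimensional values in CGS units (illustrative only).
B0 = 10.0            # average magnetic field strength [G]
rho0 = 1.0e-15       # constant mass density [g cm^-3]
L0 = 1.0e9           # system size [cm]
nu = 1.0e14          # kinematic viscosity [cm^2 s^-1]

va = B0 / np.sqrt(4.0 * np.pi * rho0)   # Alfven speed [cm s^-1]
tau_a = L0 / va                          # Alfvenic transit time [s]
tau_nu = L0**2 / nu                      # viscous diffusion time [s]
print("v_a = %.3e cm/s, tau_a/tau_nu = %.3e" % (va, tau_a / tau_nu))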
Notably, the incompressibility constraint (Equation <ref>) leads to a volume-preserving flow, an assumption routinely used in other works <cit.>. While compressibility is essential for exploring the thermodynamics of the coronal plasma <cit.>, in this work our focus is on the changes in the magnetic topology of the initial quadrupolar magnetic configuration. Furthermore, utilizing the discretized incompressibility constraint, the pressure p satisfies an elliptic boundary value problem on the discrete integral form of the momentum equation (Equation <ref>); cf. <cit.> and the references therein.
For the numerical solutions of the MHD equations, we use the well-established magnetohydrodynamic numerical model EULAG-MHD <cit.>.
The model is an extension of the hydrodynamic model EULAG predominantly used in atmospheric and climate research <cit.>.
Here we discuss only the essential features of the EULAG-MHD code and refer the readers to <cit.> and references therein for detailed discussions.
The model is based on the spatiotemporally second-order accurate non-oscillatory forward-in-time multidimensional positive definite advection transport algorithm, MPDATA <cit.>.
Importantly, MPDATA has the proven dissipative property which, intermittently and adaptively, regularises the under-resolved scales by simulating magnetic reconnections
and mimicking the action of explicit subgrid-scale turbulence models <cit.> in the spirit of
Implicit Large Eddy Simulations (ILES)
<cit.>. Such ILESs conducted with the model have already been successfully utilized to simulate reconnections to understand their role in the coronal dynamics <cit.>. In this work, the presented computation continues to rely on the effectiveness of ILES in regularizing the commencement of magnetic reconnections.
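To illustrate the kind of dissipative advection algorithm referred to here, the following self-contained Python sketch implements the classic two-pass MPDATA idea for a one-dimensional, positive-definite scalar with periodic boundaries. It is a didactic toy following Smolarkiewicz's original formulation and is not code extracted from EULAG-MHD.

import numpy as np

def donor_cell(psi, C):
    # First-order upwind update; C[i] is the Courant number at face i+1/2, periodic BCs.
    flux = np.maximum(C, 0.0) * psi + np.minimum(C, 0.0) * np.roll(psi, -1)
    return psi - (flux - np.roll(flux, 1))

def mpdata_step(psi, C, eps=1e-15):
    # Pass 1: diffusive upwind solution.
    psi1 = donor_cell(psi, C)
    # Pass 2: upwind pass with the antidiffusive pseudo-velocity of Smolarkiewicz (1984).
    C_anti = (np.abs(C) - C**2) * (np.roll(psi1, -1) - psi1) / (np.roll(psi1, -1) + psi1 + eps)
    return donor_cell(psi1, C_anti)

# Advect a positive bump at constant Courant number 0.4 around a periodic box.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
psi = 1.0 + np.exp(-200.0 * (x - 0.3) ** 2)
C = np.full(n, 0.4)
for _ in range(5 * n):
    psi = mpdata_step(psi, C)
print("min/max after advection:", psi.min(), psi.max())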
§ SIMULATION RESULTS
The simulations are performed over a grid of 128 × 128 × 384, in x, y, and z directions. The magnetic field given by Equations (<ref>)-(<ref>) (shown in Figure <ref>) serves as the initial magnetic field for the simulations. The initial velocity is set to zero (i.e., v=0).
The boundary condition along z is kept open by continuing the B to the boundary in order to vanish the net magnetic flux passing through it <cit.>. Moreover, the boundaries along x and y are taken to be periodic. The simulations are done for two different viscosities ν=0.005 and 0.01.
The values of the dimensionless constant τ_a / τ_ν in Equation (<ref>) are ≈ 10^-4 and
≈ 10^-3 for the viscosities ν=0.005 and 0.01 respectively, which are larger than its typical coronal value (≈ 10^-5) <cit.>. The higher values of τ_a / τ_ν used in the simulations are only expected to speed up the dynamical evolution and reduce the computational cost without affecting the magnetic topology.
For a general understanding of the simulated viscous relaxation, Figure <ref> shows the time profile of the kinetic (panel (a)) and magnetic energies (panel (b)), normalized to the
initial total (magnetic + kinetic) energy. The black solid and red dashed lines correspond to viscosities ν=0.005 and 0.01, respectively. With an initially vanishing velocity field, the initial Lorentz force triggers the dynamical evolution and generates the plasma flow at the expense of the magnetic energy. The flow is then arrested by the viscous drag, which results in the formation of peaks in kinetic energy plots. Notably, the field lines being frozen into the plasma get deformed by the flow, modifying the initial distribution of the Lorentz force. The peak heights are lower for the simulation with ν=0.01 in comparison to the simulation with ν=0.005, which is expected due to the larger magnitude of the viscous drag force for ν=0.01.
To explore the possibility of flux rope formation, in Figure <ref>, we first describe the evolution of the quadrupolar magnetic field configuration for the
computation with viscosity ν=0.005. The figure further shows the twist parameter, which is calculated by integrating field-aligned current J·B/B^2 along a field line <cit.>. Figure <ref>(b) documents the generation of the twisted MFLs from the initial magnetic loops. These field lines are co-located with the high values of the twist parameter — suggesting the helical nature of the field lines. Hence, the twisted field lines represent magnetic flux ropes located above the PILs. Initially, there are four separate MFRs, situated above the PILs of the different opposite polarities (P1, N1), (P1, N2), (P2, N1), (P2, N2); more clearly shown in Figure <ref>. With time, the magnetic flux of the MFRs increases, accompanied by their rise in the vertical direction (panels (c) and (d) of Figure <ref>). However, the rise of the MFRs is not uniform. Moreover, as the MFRs rise, they appear to interact with the pre-existing X-type neutral lines located at (x, y, z)=(π, π, z), (0, π, z), (π, 0, z), (π, 2π, z), (2π, π, z) and, develop complex magnetic structures around them. The structure is more evident around the X-type neutral line that exists inside the computational domain at (x, y, z)=(π, π, z) as shown in Figure <ref>(c) and marked by a black arrow in Figure <ref>(d). Further, the structures expand and rise with time (panels (e)-(f) of Figure <ref>). Similar evolution is also found for the computation with viscosity ν =0.01 (not shown).
To closely examine the mechanism of the MFR formation, in Figure <ref>, we focus on the early evolution of two sets of the bipolar magnetic loops located over the PIL corresponding to the regions P1 and N2 (Figure <ref>). The figure is further overlaid with the contours of |𝐉|/|𝐁| on a y-constant plane and the Lorentz force (whose direction is represented by the grey-colored arrows). The direction of the Lorentz force is such that the force pushes the two complementary anti-parallel field lines of the different set, located on the opposite sides of the PIL, toward each other (Figures. <ref>(a) and (b)). Consequently, the gradient in B sharply increases, as evident from a co-spatial enhancement in |𝐉|/|𝐁|. These dynamics lead to the onset of reconnections as the scales become under-resolved (Figure <ref>(c)). The reconnections continue in time and are central to developing an MFR over the PIL.
In Figure <ref>, we overplot the two sets with the contours of the logarithm of squashing factor Q on a y-constant plane to support the onset of the reconnections further. The Q-factor is a measure of the gradient in field line mapping of the magnetic field <cit.> and is calculated using the code of
<cit.>. Notable is the generation of high values of ln Q as the oppositely directed field lines approach each other.
The high ln Q is co-spatial with the high |𝐉|/|𝐁| (Figure <ref>) — further supporting the development of the sharp gradient in B and consequent reconnections that lead to the MFR formation. After its formation, the legs of the MFR move towards the X-type neutral lines (panels (c) and (d) of Figures <ref> and <ref>). A similar process of reconnection also takes place over the other PILs located between (P1, N1), (P2, N1), and (P2, N2)-regions and is responsible for the formation of MFRs over the PILs (not shown).
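For completeness, the sketch below illustrates how the squashing factor mentioned above can be estimated in its simplified planar form, Q = (a^2+b^2+c^2+d^2)/|ad-bc|, where (a, b; c, d) is the Jacobian of the field-line footpoint mapping approximated by centred differences. The tracing routine trace_to_boundary is a hypothetical user-supplied function, and this toy is not the Q-factor code cited in the text.

import numpy as np

def squashing_factor(x0, y0, h, trace_to_boundary):
    # trace_to_boundary(x, y) -> (X, Y): conjugate footpoint of the field line launched
    # at (x, y) on the lower boundary (user-supplied tracer, hypothetical).
    Xp, Yp = trace_to_boundary(x0 + h, y0)
    Xm, Ym = trace_to_boundary(x0 - h, y0)
    Xq, Yq = trace_to_boundary(x0, y0 + h)
    Xr, Yr = trace_to_boundary(x0, y0 - h)

    # Centred-difference Jacobian of the field-line mapping (x, y) -> (X, Y).
    a = (Xp - Xm) / (2.0 * h)   # dX/dx
    b = (Xq - Xr) / (2.0 * h)   # dX/dy
    c = (Yp - Ym) / (2.0 * h)   # dY/dx
    d = (Yq - Yr) / (2.0 * h)   # dY/dy

    det = abs(a * d - b * c) + 1e-30
    return (a * a + b * b + c * c + d * d) / det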
In Figure <ref>, we plot the top view of the time evolution of the magnetic field lines near the X-type neutral line located inside the central region of the domain at (x, y, z)=(π, π, z). The figure illustrates that, as the legs of the four MFRs approach the X-type neutral line, they start to reconnect at the neutral line and develop the complex magnetic structure around the neutral line. As these reconnections repeat, more and more magnetic flux is sucked into the central region — shaping the structure and, ultimately, expanding it. Formation of the complex structures through similar dynamical evolution is also observed around the other neutral lines situated at the boundaries of the domain with the coordinates (x, y, z)=(0, π, z), (π, 0, z), (π, 2π, z), (2π, π, z) (not shown).
Further, the outflows generated from the repeated reconnections occurring below the flux ropes push them in the vertical direction and lead to the ascent of the flux ropes. Notable is the non-uniform ascent of the flux ropes.
The flux ropes show a faster rise and expansion near the neutral lines than away from them (see Figure <ref>). To explore this non-uniform ascent, in Figure <ref>, we plot the evolution of the flux ropes overlaid with the Lorentz force (represented by the grey-colored arrows). The direction and length of the arrows mark the direction and magnitude of the Lorentz force. The figure shows that the Lorentz force is almost negligible around the X-type neutral line (over the complex magnetic structure), while the force is vertically downward away from the neutral line.
Moreover, the magnitude of the vertically downward force appears to increase with time.
As a result, the overlying Lorentz force does not affect the rise of the flux ropes in the vicinity of the neutral line, while the vertically downward force restricts the rise of the ropes away from the neutral line. This result suggests that the non-uniform rise of the ropes is related to the non-uniform distribution of the overlying Lorentz force.
To show the convergence of the simulation results with increasing resolution, we have also performed an additional higher resolution simulation over a grid of 160 × 160 × 480, in x, y, and z directions with viscosity ν=0.01. Figure <ref> illustrates the time evolution of the two sets of the bipolar magnetic loops in the same sub-domain as shown in Figure <ref>. Moreover, similar to Figure <ref>, Figure <ref> is also overplotted with the contours of |𝐉|/|𝐁| on a y-constant plane and the Lorentz force (whose direction is marked by the grey-colored arrows). For the higher resolution simulation, the maximum value of |𝐉|/|𝐁| is increased (see panel (a) of Figures <ref> and <ref>) — agreeing with the general understanding that the gradient in the magnetic field enhances with an increase in grid resolution. The field line evolution in Figure <ref> is geometrically identical to the ones depicted in Figure <ref> — confirming the convergence of the results with increasing resolution. Moreover, Figure <ref> documents a delay in the onset of the reconnection, with an increase in grid resolution. The delay is expected as the under-resolved scales develop later in time with an increased resolution.
§ SUMMARY
The presented MHD simulations explore the magnetic flux rope formation process in the presence of a quadrupolar magnetic field topology. The initial non-force-free magnetic field is constructed analytically by modifying a three-dimensional linear force-free field having field line geometry similar to the observed coronal loops. The configuration consists of two positive polarity regions, P1 and P2, and two negative polarity regions, N1 and N2. Because of these different polarity regions, the initial field has an X-type neutral line inside the computational domain. In addition, X-type neutral lines also reside at the domain's boundaries due to the periodicity in the lateral directions. Furthermore, the field supports Lorentz force, which naturally generates the simulated dynamics from an initial motionless state. The plasma evolution is idealized to be viscid, incompressible, and thermally homogeneous. Furthermore, the locally adaptive dissipation of the MPDATA scheme mimics the magnetic reconnections in response to the development of the under-resolved scales.
In the simulations, we first notice the movement of oppositely directed field lines of bipolar loops located above the PILs (separating the different polarity regions) towards each other. This evolution leads to a steep enhancement in the magnetic field gradient and the generation of under-resolved scales. Consequently, repetitive reconnections initiate and account for the formation of flux ropes over the PILs. In addition, the outflow generated by these reconnections pushes the flux ropes in the upward direction and contributes to their ascent.
With time, the legs of the magnetic flux ropes move toward the X-type neutral lines and start reconnecting at the line — leading to the generation of complex magnetic structures around the neutral lines. Hence, the simulations demonstrate the topologically complex MFR evolution in the quadrupolar configuration than those found in bipolar magnetic loops <cit.>, which is a key finding of the paper.
Furthermore, the rise of the flux ropes is found to be non-uniform. The flux rope near the neutral lines exhibits a faster rise than away from the neutral lines. The non-uniform rise is attributed to the non-uniform generation of the overlying Lorentz force. The Lorentz force is negligible near the neutral lines and enhances with a vertically downward direction as the distance from the neutral lines increases.
Overall, the reported simulations identify spontaneous
repeated magnetic reconnections as the initial driver for the flux rope formation and triggering its ascent in the quadrupolar magnetic configuration. Notably, the ascent of the flux ropes having underlying reconnections is in harmony with contemporary observations at multiple channels (hard X-ray and extreme ultraviolet) that reveal intense localized brightenings below a rising flux rope <cit.>. More importantly, the pre-existing X-type neutral line and the newly formed flux ropes find morphological similarity to the brightenings observed in extreme ultraviolet images during the flaring events in the quadrupolar magnetic configurations (cf. <cit.>).
The similarity indicates that the pre-existing X-type neutral points as well as the formation of the flux ropes in the magnetic configurations can play a crucial role in initiating the flaring events.
However, the presented simulations, idealized with an analytically constructed initial field and the assumed incompressibility, are only partly conclusive.
Therefore, we set a future goal to perform a fully compressible MHD simulation initiated by the extrapolated coronal magnetic field with the observed quadrupolar magnetic configuration, which is expected to provide more realistic coronal dynamics and can be directly compared with solar observations.
§.§ Data availability statement
The datasets generated for this study are available on request to the corresponding author.
§.§ Acknowledgments
We acknowledge the visualization software VAPOR (www.vapor.ucar.edu) for generating relevant graphics.
AP would also like to acknowledge the support of the Research Council of Norway through its Centres of Excellence scheme, project number 262622, and Synergy Grant number 810218 459 (ERC-2018-SyG) of the European Research Council. AP also acknowledges partial support from NSF award AGS-2020703. SSN acknowledges the NSF-AGS-1954503 and NASA-LWS-80NSSC21K0003 grants.
|
http://arxiv.org/abs/2307.04572v1 | 20230710140659 | M1 neutrino transport within the numerical-relativistic code BAM with application to low mass binary neutron star mergers | ["Federico Schianchi", "Henrique Gieg", "Vsevolod Nedora", "Anna Neuweiler", "Maximiliano Ujevic", "Mattia Bulla", "Tim Dietrich"] | gr-qc | ["gr-qc", "astro-ph.HE"] |
^1Institut für Physik und Astronomie, Universität Potsdam, Haus 28, Karl-Liebknecht-Str. 24/25, 14476, Potsdam, Germany
^2Centro de Ciências Naturais e Humanas, Universidade Federal do ABC, 09210-170, Santo André, São Paulo, Brazil
^3Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Am Mühlenberg 1, Potsdam 14476, Germany
^4Department of Physics and Earth Science, University of Ferrara, via Saragat 1, I-44122 Ferrara, Italy
^5INFN, Sezione di Ferrara, via Saragat 1, I-44122 Ferrara, Italy
^6INAF, Osservatorio Astronomico d’Abruzzo, via Mentore Maggini snc, 64100 Teramo, Italy
Neutrino interactions are essential for an accurate understanding of the binary neutron star merger process. In this article, we extend the code infrastructure of the well-established numerical-relativity code BAM, which until recently neglected neutrino-driven interactions. In fact, while previous work already allowed the usage of nuclear-tabulated equations of state and employed a neutrino leakage scheme, we move forward by implementing a first-order multipolar radiation transport scheme (M1) for the advection of neutrinos.
After testing our implementation on a set of standard scenarios, we apply it to the evolution of four low-mass binary systems, and we perform an analysis of ejecta properties. We also show that our new ejecta analysis infrastructure is able to provide numerical relativity-informed inputs for the codes and , for the computation of kilonova lightcurves and nucleosynthesis yields, respectively.
M1 neutrino transport within the numerical-relativistic code BAM with application to low mass binary neutron star mergers
Tim Dietrich^1,3
August 12, 2023
=========================================================================================================================
§ INTRODUCTION
Simulations of binary neutron star (BNS) mergers are a fundamental tool to support interpretations of multimessenger observations combining gravitational waves (GWs) and electromagnetic (EM) signals produced by the same transient event, allowing, among others, the study of matter at supranuclear densities e.g.,<cit.>, the expansion rate of the Universe <cit.>, and the production of heavy elements, e.g. <cit.>.
The strong interest in BNS mergers is
partially caused by the myriad of observational data that has recently become available with the detection of the GW signal GW170817 <cit.> by advanced LIGO <cit.> and advanced Virgo <cit.> and its associated EM counterparts: the kilonova AT2017gfo <cit.> and the short -ray burst GRB170817A <cit.>, with long-lived signatures of its afterglow <cit.>.
With the start of the O4 observation run of the LIGO-Virgo-Kagra collaboration in May 2023, more events of this kind are expected to be detected, e.g., <cit.>.
In the neutron-rich matter outflow, r-process nucleosynthesis can set in, which can power transient EM phenomena due to heating caused by radioactive decay of newly synthesized nuclei in a wide range of atomic numbers <cit.>, corroborating the hypothesis that kilonovae are connected to the production of heavy nuclei <cit.>.
Analysis of AT2017gfo has shown that kilonovae can consist of multiple components, each one generated by ejecta with different electron fractions and entropy <cit.>.
Studies of ejecta based on numerical-relativity (NR) simulations of BNS mergers suggest that the properties of the ejecta depend on the different ejection mechanisms during and after the merger, e.g., Refs. <cit.>.
Most NR simulations of BNS mergers are relatively short (≤100ms after the merger) and thus provide information on the early time, dynamical ejecta, which is generally divided into a tidal component (driven by tidal torques) and a shocked component (driven by shocks launched during NS core bounces)
<cit.>. In equal-mass mergers, the shocked component is found to be up to a factor
∼10 more massive than the tidal one <cit.>.
However, the dynamical ejecta found in NR simulations cannot account alone for the
bright blue and late red components of the observed kilonova in AT2017gfo <cit.>.
Winds powered by neutrino absorption and angular momentum transport can unbind 𝒪(0.1 M_⊙) from the disk surrounding the remnant on timescales of 𝒪(0.1-1 s) and could (if present) give the largest contribution to the kilonova signal <cit.>. Until recent years, these winds have been mostly studied by means of long-term simulations of neutrino-cooled disks <cit.>.
Ab-initio NR simulations of the merger with advanced neutrino-transport and
magnetohydrodynamics were not yet fully developed at sufficiently long timescales <cit.>, but large progress has been made recently, e.g., <cit.>.
Additionally, shorter (up to 100 ms post-merger) NR simulations
pointed out the existence of moderately neutron-rich spiral-wave wind
that is sufficiently massive and fast to contribute to the early blue kilonova emission <cit.>.
Another contribution to post-merger ejecta can come from neutrino-driven winds that can lead to ∼ 10^-4-10^-3M_⊙ ejecta with high electron fraction <cit.>.
To perform multimessenger analyses of future GW and EM detections associated with BNS mergers, NR simulations including microphysical modeling are essential. In particular, for the estimation of nucleosynthetic yields and kilonova light curves, it is important to account for the interaction of nuclear matter with neutrinos, because neutrino emission and absorption determine the electron fraction of the ejecta, which in turn strongly influences the kilonova light curves and the nucleosynthesis.
In the past, several attempts were made to map ejecta properties to binary parameters like deformability and mass ratio, e.g., <cit.>, with the aim of building phenomenological fits for Bayesian analysis of kilonova light curves and GW signal simultaneously. These studies showed that neutrino radiation treatment plays an important role in determining the mass, composition, and geometry of the ejecta. The extension and improvements of such fits with new data require the use of an advanced scheme to include neutrino radiation.
The first attempt to include neutrino interactions in a BNS merger simulation was made more than 20 years ago in <cit.> by means of a neutrino leakage scheme (NLS). NLS employs an effective neutrino emissivity assigned to each fluid element according to its thermodynamical configuration and the optical depth of the path from it to infinity. This effective emission represents the rate of neutrino energy/number that escapes a fluid element. Hence, NLS is limited to model neutrino cooling. Unfortunately, this quantity is only known in the diffusive and free-streaming regimes, and phenomenological interpolation is used for gray zones. The main issue of NLS is the fact that, by neglecting the neutrino heating and pressure on the nuclear matter, it leads to a significant underestimation of the ejecta's electron fraction <cit.> and affects the matter dynamics. The more recent development of an advanced spectral leakage (ASL) scheme tried to solve this issue by phenomenological modeling of neutrino flux anisotropies <cit.>.
A more accurate theoretical approach to incorporate neutrino effects would require evolving the neutrino distribution function according to the General Relativistic Boltzmann equation <cit.>. In principle, it is possible to follow this approach in a conservative 3+1 formulation, e.g., <cit.>. However, since the distribution function is defined in the 6+1-dimensional one-particle phase space, the computational cost of such an approach is prohibitive. Therefore, in recent years, more computationally efficient neutrino radiation transport approaches have become increasingly popular. Amongst them is the so-called moment scheme, which is based on a multipolar expansion of the moments of the radiation distribution function <cit.>. The 3+1 decomposition of such a formalism was first studied in <cit.>. The basic idea of this framework is to expand the neutrino intensity in a basis of multipoles up to a certain rank and to evolve these moments as field variables. Most of the radiation transport codes used in NR consider the transport of the zeroth and first-rank moments, thus referred to as the M0 scheme <cit.> or the M1 moment scheme <cit.>. It is worth noting that the aforementioned M1 implementations rely on the grey approximation, i.e., the considered moments are frequency integrated. This description makes the computation significantly less expensive but less accurate regarding the matter-neutrino interaction rates, which are strongly dependent on the neutrino energy <cit.>. We want to point out that not only in BNS simulations but also in core-collapse supernovae and disk simulations, multipolar radiation transport schemes are regularly used. In most cases, they are even used in more sophisticated versions, like non-gray, energy-dependent schemes, e.g., <cit.>.
One important artifact of multipolar radiation transport schemes is the well-known unphysical interaction of crossing beams <cit.>, which is due to the inability of the M1 scheme to treat higher-order moments of the distribution. The crossing beams and energy-dependent interaction rates issues are both cured by Monte-Carlo radiation transport. In the latter, radiation is modeled by an arbitrary number of neutrino packets, each one with its own energy and momentum. This scheme has recently been adapted to NR simulation <cit.>.
Another scheme that can, in principle, solve these issues is the relativistic Lattice-Boltzmann <cit.>, where the momenta component of the Boltzmann equation is solved in every space point on a discretized spherical grid in order to model the transport even in presence of higher order momenta. Overall, we refer to <cit.> for a detailed review of neutrino transport methods.
Among all the mentioned schemes, we decided to implement M1 transport because of its ability to treat neutrino heating and pressure with a reasonable computational cost and for being well-tested in NR simulations of BNS mergers.
This article is structured as follows: In Sec. <ref>, we recap the governing equations of General Relativistic Radiation Hydrodynamics (GRRHD) and M1 transport. In Sec. <ref>, we discuss the numerical methods used to integrate M1 transport equations, paying particular attention to the stiff source terms and the advection of radiation in the trapped regime. In Sec. <ref>, we show the results of the tests we performed to validate the code in different regimes. In Sec. <ref>, we present the application of our newly developed code to the merger of binary neutron stars with two different EoSs and mass ratios, with a description of ejecta geometry, neutrino luminosity, GW signal, nucleosynthesis yields, and kilonova light curves.
Throughout this article, we will use the Einstein notation for index summation with the (-,+,+,+) signature of the metric and (unless differently specified) geometric units, i.e., G=c=M_⊙=1. Also, the Boltzmann constant is κ_B = 1.
§ GOVERNING EQUATIONS
§.§ 3+1-Decomposition and spacetime evolution
The spacetime dynamics is considered by numerically solving Einstein's field equations in 3+1 formulation, for which the line element reads
ds^2 = -α^2dt^2 + (dx^i + β^i dt)(dx^j + β^j dt) γ_ij,
where α is the lapse function, β^i is the shift vector, and γ_ij is the 3-dimensional spatial metric (or 3-metric) induced on the 3-dimensional slices of the 4-dimensional spacetime, identified by t = constant.
The 3-metric is given by
γ_αβ = g_αβ + n_αn_β,
where g_αβ is the 4-dimensional spacetime metric and n^α is the timelike, normal vector field. By construction, n^α is future-directed, normal to each point of a given t = constant slice and normalized to n^μ n_μ = -1.
In the coordinate system given by the line element of Eq. (<ref>), the normal vector field has components
n^α = ( 1/α, -β^k/α), n_α = ( -α, 0 ).
In this framework, the BAM code <cit.> can solve for the Einstein field equations. In this work, we do so using the Z4c formulation with constraint damping terms <cit.> as implemented in <cit.>.
§.§ General Relativistic Hydrodynamics
As in the previous version of BAM that included a neutrino leakage scheme <cit.>, we solve general-relativistic radiation hydrodynamics equations arising from the conservation of stress-energy tensor of matter with source terms representing neutrino interactions. Furthermore, the conservation of baryon number and transport of electron fraction lead to:
∇_μ (ρ u^μ) = 0,
∇_μ T_ matter^μν = -S^ν,
∇_μ (ρ Y_ e u^μ) = m_ bℛ,
with ρ being the rest mass density, T^ matter_μν the stress energy tensor of matter, u^μ its four-velocity, m_ b the baryon mass, and Y_ e = n_ p/n_ b = (n_ e^- - n_ e^+)/n_ b the electron fraction. n_ e^-, n_ e^+, n_ p, and n_ b are the number densities of electrons, positrons, protons, and baryons, respectively. The source terms S^μ and ℛ represent the interaction of the fluid with neutrinos, i.e., neutrino cooling and heating, and the lepton number deposition rate, respectively.
We employ the usual decomposition of the fluid's 4-velocity as follows:
u^α = W(n^α + v^α), n^αv_α=0,
with W = -n^αu_α=1/√(1-v^iv_i), being the Lorentz factor.
Assuming matter to be an ideal fluid, its stress-energy tensor can be decomposed as
T_ matter^μν = ρ h u^μu^ν + pg^μν,
where h and p are the specific enthalpy and the pressure of the fluid, respectively. Equations (<ref>), (<ref>), and (<ref>) are expressed as conservative transport equations following the standard Valencia formulation <cit.> as already implemented in previous versions of BAM, e.g., <cit.>, which is based on the evolution of the conservative variables
D = √(γ) W ρ,
τ = √(γ) (W^2 h ρ - p) - D,
𝒮_i = √(γ) W^2 h ρ v_i,
D_Y = √(γ) W ρ Y_e.
As an upgrade, in comparison to our previous implementation, we modified the source terms S^μ and ℛ to adapt them to the new neutrino scheme, as we will discuss in the following.
§.§ Multipolar formulation for radiation transport
In this article, we implement a first-order multipolar radiation transport scheme following the formulation of Ref. <cit.>. The multipolar formulation was originally developed to reduce the dimensionality of the general-relativistic Boltzmann equation for the neutrino distribution function in the phase space
df(x^μ,p^μ)/dl = S_ coll(x^μ,p^μ,f),
with f being the distribution of neutrinos, l being the proper length traveled by neutrinos in a fiducial observer frame, and S_ coll being a collisional term that takes into account the interaction of neutrinos with matter, i.e., emission, absorption, and scattering. The derivative d/dl is along the trajectory in the phase space of the neutrinos, so it will have a component in physical spacetime and one in momentum space. Since neutrinos are assumed to travel on light-like geodesics, their momentum has to satisfy the constraint p^αp_α=0. This reduces the dimensionality of the problem by one and allows us to describe the 4-momentum via the variables Ω and ν, representing the space direction of particles on a solid angle and their frequency in the fiducial observer frame. To make the evaluation of collisional sources easier, we chose the fluid frame as the fiducial frame. In the following text, ν will always be the frequency of neutrinos as measured in the fluid frame.
At this point, we still have to handle a 6+1 dimensional problem that, if we want to ensure a sufficient resolution and accuracy for proper modeling, would computationally be too expensive.
Hence, we need to work out a partial differential equation on the physical 3D space that can capture the main features of radiation even without fully solving for f in the momentum space.
In this regard, Thorne <cit.> showed that it is convenient to decompose the intensity of radiation I=ν^3 f and the source S_ coll in multipoles of the radiation momentum p^α
M^α_1 ... α_k (x^β) := ∫_0 ^∞dν ν^3 ∫dΩf(ν,Ω,x^μ)/ν^kp^α_1...p^α_k ,
S^α_1 ... α_k (x^β) := ∫_0 ^∞dν ν^3 ∫dΩS_ coll(ν,Ω,x^μ,f)/ν^kp^α_1...p^α_k .
Plugging this ansatz into the Boltzmann Equation, Eq. (<ref>), one can derive the following evolution equations for every radiation moment M^A_k:
∇_β M^A_k β - (k-1)M^A_k βγ∇_γu_β = S^A_k,
where A_k is a multi-index of order k; cf. Ref. <cit.>.
In this work, we will focus on the second-order multipole M^αβ, which is known to be equal to the stress-energy tensor of radiation. It can be decomposed employing the laboratory frame or employing the fluid frame by choosing two different decompositions of the radiation momentum p^α:
fluid frame: p^α = ν (u^α + ℓ^α),
laboratory frame: p^α = ν' (n^α + l^α),
where ν and ν' are neutrino frequency in the fluid and lab frame, respectively[Note that ν appearing in the integrals of Eqs. (<ref>) and (<ref>) is always the frequency in the fluid's frame.], with the constraints u^αℓ_α = n^αl_α = 0 and l^α l_α = ℓ^αℓ_α = 1.
In such a way, we can express the radiation stress-energy tensor in the fluid frame as
T_ rad^αβ = M^αβ = Ju^αu^β + u^αH^β + H^αu^β + 𝒦^αβ,
with u^αH_α=𝒦^αβu_α=0 and
J := ∫_0^∞dν ν^3 ∫dΩ f(ν,Ω, x^μ),
H^α := ∫_0^∞dν ν^3 ∫dΩ f(ν,Ω, x^μ) ℓ^α,
𝒦^αβ := ∫_0^∞dν ν^3 ∫dΩ f(ν,Ω, x^μ) ℓ^αℓ^β,
representing, respectively, the energy, the momentum, and the stress tensor measured in the fluid frame. We will use these variables to express the source terms, since the interaction rates of radiation with matter are usually evaluated in the fluid frame. In particular, following Shibata et al. <cit.>, we write the first source multipole as
S^α = η u^α -κ_a J u^α -(κ_a + κ_s)H^α,
with κ_a being the absorption opacity, κ_s being the scattering opacity, and η being the emissivity.
In our work, we incorporate neutrino emission, absorption, and elastic scattering into the source term, but we neglect inelastic scattering.
In general, the emissivity η and the opacities κ_a, κ_s depend both on the fluid properties, namely on the density of the matter ρ, the temperature T, and the electron fraction Y_e, but also on the neutrino spectrum.
Unfortunately, the latter information is not available in our formalism since we only evolve averaged quantities. This represents one of the weaknesses of the employed scheme. Hence, to enable dynamical simulations, we have to employ additional assumptions that we will outline in the following.
§.§ M1 evolution equations
Ref. <cit.> showed that fluid-frame variables are not suitable for obtaining a well-posed system of partial differential equations in conservative form. For such a purpose, we need to perform a decomposition in the laboratory frame as
T_ rad^αβ = M^αβ = E n^α n^β + F^αn^β + F^βn^α + P^αβ,
with n^αF_α = P^αβn_α = F^t = P^tα = 0. Here, E, F^i, and P^ij represent, respectively, the energy density, momentum density, and stress tensor as measured in the lab frame, and they are defined in an analogous way to their fluid-frame equivalents.
We can work out laboratory frame variables starting from the fluid frame ones and vice versa performing different projections of T_rad^αβ. For our work, we will use:
J = W^2 E - 2 W F^iu_i + P^iju_i u_j,
H^α = (EW - F^iu_i)h^α_ βn^β + Wh^α_βF^β - h^α_iu_jP^ij,
where we have defined the 3-metric in the fluid frame as
h^αβ = g^αβ + u^αu^β.
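As a minimal illustration of the two projections above (not part of BAM), the following Python sketch computes J and H^α from the lab-frame variables, assuming flat spacetime in Cartesian coordinates (α=1, β^i=0, γ_ij=δ_ij); the function name is ours, and the pressure tensor P^ij is assumed to have been closed already.

import numpy as np

def fluid_frame_moments_flat(E, F, P, v):
    # Fluid-frame energy J and momentum H^alpha from lab-frame (E, F^i, P^ij),
    # assuming flat spacetime: n^alpha = (1,0,0,0), u^alpha = W(1, v^i).
    eta = np.diag([-1.0, 1.0, 1.0, 1.0])                 # Minkowski metric
    W = 1.0 / np.sqrt(1.0 - np.dot(v, v))                # Lorentz factor
    u_up = W * np.concatenate(([1.0], v))                # u^alpha
    u_dn = eta @ u_up                                    # u_alpha
    # J = W^2 E - 2 W F^i u_i + P^ij u_i u_j
    J = W**2 * E - 2.0 * W * np.dot(F, u_dn[1:]) + u_dn[1:] @ P @ u_dn[1:]
    # H^alpha = (E W - F^i u_i) h^alpha_beta n^beta + W h^alpha_beta F^beta - h^alpha_beta P^{beta j} u_j
    n_up = np.array([1.0, 0.0, 0.0, 0.0])
    F4 = np.concatenate(([0.0], F))                      # F^t = 0
    h = np.eye(4) + np.outer(u_up, u_dn)                 # h^alpha_beta = delta^alpha_beta + u^alpha u_beta
    Pu = np.concatenate(([0.0], P @ u_dn[1:]))           # P^{alpha j} u_j, with P^{t j} = 0
    H = (E * W - np.dot(F, u_dn[1:])) * (h @ n_up) + W * (h @ F4) - h @ Pu
    return J, H

For fully trapped radiation (F^i = 4/3 J W^2 v^i with the thick closure), this projection returns H^α ≈ 0, which provides a simple sanity check.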
Ref. <cit.> showed that by decomposing M^αβ as in Eq. (<ref>) and plugging it into Eq. (<ref>) with k=1, we can get the following conservative evolution equations for the energy and momentum of neutrinos:
∂_t Ẽ + ∂_j (αF̃^j - β^j Ẽ)
= α (P̃^ij K_ij - F̃^j∂_j ln(α) - S̃^αn_α),
∂_t F̃_i + ∂_j(αP̃^j_i - β^jF̃_i)
= (-Ẽ∂_i α + F̃_k∂_iβ^k + α/2P̃^jk∂_i γ_jk + αS̃^αγ_iα),
where K_ij is the extrinsic curvature of the spatial hypersurface, and we defined the densitized variables Ẽ=√(γ)E, F̃^i=√(γ)F^i, P̃^ij=√(γ)P^ij and S̃^α = √(γ)S^α. Here, we can clearly see that the source terms of this equation can be divided into two categories, the gravitational ones, proportional to the first derivatives of the metric and the gauge, and the collisional ones, proportional to S̃^α. While the former is responsible for effects like neutrino path bending and gravitational blueshift/redshift, the latter describes neutrino emission, absorption, and scattering by the fluid.
§.§ Closure relation
Since we do not have an evolution equation for P_ij, we can only estimate it from E and F^i. We follow the prescription discussed in <cit.>:
P^ij = 1/2(3χ(ζ)-1)P^ij_ thin + 3/2(1-χ(ζ))P^ij_ thick,
with P_ thin and P_ thick being the closures in the optically thin and thick regimes, respectively, and the quantity χ is called Eddington factor. The Eddington factor was introduced to model the transition from a trapped radiation regime (χ=1/3) to a free streaming radiation regime (χ=1). In this work, we use the so-called Minerbo closure <cit.>
χ(ζ) = 1/3 + ζ^2 (6 - 2ζ + 6ζ^2)/15,
where
ζ^2 = H^αH_α/J^2,
is the closure parameter. We expect ζ→ 1 for free streaming radiation and ζ→ 0 for trapped radiation.
The free streaming closure can be expressed as <cit.>:
P^ij_ thin = E F^i F^j/F^2.
The computation of P^ij_ thick is more elaborated since the thick closure must be defined in such a way to be isotropic in the fluid frame, i.e, we want
𝒦^αβ_ thick = 1/3J h^αβ.
Refs. <cit.> showed that 𝒦^αβ_ thick of Eq. (<ref>) leads to
P^ij_ thick = 4/3 J_ thick W^2 v^i v^j + 2 W v^(iγ^j)_αH^α_ thick + 1/3J_ thickγ^ij,
with
J_ thick = 3/(2W^2+1) [ E(2W^2-1) - 2W^2F^iv_i ],
γ ^i_α H_ thick^α = F^i/W - 4/3J_ thickWv^i + W[F^i v_i -E + J_ thick]v^i.
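To make the interpolation between the two regimes explicit, a flat-space Python sketch of the closure above could look as follows; the function names are ours, and the closure factor ζ is assumed to be given (its computation is discussed later in the numerical-scheme section).

import numpy as np

def minerbo_chi(zeta):
    # Minerbo closure: chi(zeta) = 1/3 + zeta^2 (6 - 2 zeta + 6 zeta^2) / 15
    return 1.0/3.0 + zeta**2 * (6.0 - 2.0*zeta + 6.0*zeta**2) / 15.0

def pressure_tensor(E, F, v, zeta):
    # Blend of thin and thick closures in flat space (alpha=1, beta^i=0, gamma_ij=delta_ij).
    W = 1.0 / np.sqrt(1.0 - np.dot(v, v))
    F2 = np.dot(F, F)
    P_thin = E * np.outer(F, F) / F2 if F2 > 0.0 else np.zeros((3, 3))
    Fv = np.dot(F, v)
    J_th = 3.0 / (2.0*W**2 + 1.0) * (E*(2.0*W**2 - 1.0) - 2.0*W**2*Fv)
    H_th = F / W - 4.0/3.0 * J_th * W * v + W * (Fv - E + J_th) * v
    P_thick = (4.0/3.0 * J_th * W**2 * np.outer(v, v)
               + W * (np.outer(v, H_th) + np.outer(H_th, v))
               + J_th / 3.0 * np.eye(3))
    chi = minerbo_chi(zeta)
    return 0.5*(3.0*chi - 1.0)*P_thin + 1.5*(1.0 - chi)*P_thick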
The system composed of Eqs. (<ref>) and (<ref>) is proven to be strongly hyperbolic using the closure (<ref>) as long as the causality constraint |F| ≤ E is satisfied. Equation (<ref>) also guarantees that the characteristic velocities of the system (<ref>, <ref>) are not superluminal.
Similar to other implementations of this scheme, e.g., <cit.>, we divide neutrinos into three species: ν_e, ν̅_e, and ν_x, with this last species collecting all heavy neutrinos and respective anti-neutrinos together. In this way, we are solving three M1 systems (<ref>, <ref>) coupled to each other only through the fluid.
§.§ Neutrino number density
The previously described scheme still lacks any information about the neutrino energy spectrum, which is important for an accurate estimate of the fluid's neutrino opacities κ_a and κ_s (since the cross sections of the involved processes depend strongly on the neutrino energy). The simplest way to improve the scheme in this respect is to also evolve the neutrino number density, so that the average neutrino energy ⟨ϵ_ν_i⟩ can be obtained at every point.
In our implementation, we set up the neutrino number evolution following <cit.> and <cit.>,
i.e., through the transport equation
∇_α (n f^α) = η_n - κ_n n,
where n is the neutrino number density in the fluid frame, η_n and κ_n are the neutrino number emissivity and opacity, respectively, and nf^α is the 4-dimensional number flux. Following Ref. <cit.>, we choose
f^α = u^α + H^α/J,
in such a way that the projection of nf^α along u^α gives the neutrino number density in the fluid frame:
n = -nf^α u_α.
Expressed in slice adapted coordinates, Eq. (<ref>) reads
∂_t (α√(γ) n f^0) + ∂_i(α√(γ)nf^i) = α√(γ)(η_n - κ_n n),
which is a transport equation for the conservative variable
N := α√(γ)nf^0.
Finally, we can find f^0 and f^i using the definition of slice adapted coordinates:
α f^0 = -f^αn_α = W - H^αn_α/J,
f^i = Wv^i + γ^i_αH^α/J - β^i f^0.
We solve Eq. (<ref>) together with Eq. (<ref>) and Eq. (<ref>) to get a complete and closed system of hyperbolic transport equations in conservative form.
Note that in this formulation, the average energy of neutrinos in the fluid frame can be simply obtained by:
⟨ϵ_ν⟩ = J/n.
§.§ Coupling to hydrodynamics
To model the exchange of energy and momentum between neutrinos and the fluid, we modify the conservation of the matter's stress-energy tensor into
∇_βT_ matter^βα = - ∑_ν_i S^α_ν_i,
where the sum runs over all three neutrino species. This means
∂_t τ = standard hydro rhs + ∑_ν_iα n^αS̃_α,ν_i,
∂_t 𝒮_i = standard hydro rhs - ∑_ν_iαγ_i^αS̃_α,ν_i,
where τ and 𝒮_i are the conservative internal energy and momentum in the standard Valencia formulation of GRHD.
We also take the variation of the electron fraction of the fluid into account and solve the transport equation
∇_α(ρ Y_e u^α) = m_b ℛ,
with a source term given by interaction with neutrinos
ℛ = -∑_ν_isign(ν_i) (η_n,ν_i - κ_n,ν_i n_ν_i),
where
sign(ν_i)=
1, if ν_i = ν_e,
-1, if ν_i = ν̅_e,
0, if ν_i = ν_x,
is a function that accounts for different signs of contributions given by different neutrinos species.
§.§ Opacities and emissivities
Within our gray scheme, we evolve energy-integrated variables and lose information about the neutrino spectrum. This means opacities contained in Eqs. (<ref>) and (<ref>) represent effective frequency-averaged quantities.
In the case of neutrinos in thermal equilibrium with the fluid, we can define the equilibrium opacities as
κ^ eq_a,s = ∫_0^+∞κ_a,s(ϵ) I^ eq(ϵ, T, μ) dϵ/∫_0^+∞ I^ eq(ϵ, T, μ) dϵ,
κ^ eq_n = ∫_0^+∞κ_a(ϵ) n^ eq(ϵ, T, μ) dϵ/∫_0^+∞ n^ eq(ϵ, T, μ) dϵ,
where ϵ is the neutrino energy, I^ eq and n^ eq are the spectral energy density and number density at equilibrium, respectively. T is the fluid's temperature and μ is the neutrino chemical potential at equilibrium. We assume I^ eq∼ϵ^3 f_ FD(ϵ, T, μ) and n^ eq∼ϵ^2 f_ FD(ϵ, T, μ) with f_ FD being the ultrarelativistic Fermi-Dirac distribution function.
The fluid's temperature T is one of the primitive variables provided by the hydrodynamic sector in our new BAM implementation <cit.> while the chemical potential at equilibrium μ is obtained by the nuclear EoS table. The latter actually provides the chemical potential for e^-, n, and p. Based on these, we can compute the potentials for neutrinos assuming β-equilibrium, i.e.
μ_ν_e = μ_e^- + μ_p - μ_n, μ_ν̅_e = - μ_ν_e, μ_ν_x = 0.
The frequency-dependent opacities κ_a,s(ϵ) are obtained from the open source code <cit.> available at <http://www.nulib.org>. For a given EoS, they are evaluated as functions of the fluid's rest mass density ρ, temperature T_f, and electron fraction Y_e and given in the form of a 4D table.
For every value of ϵ in the opacity table, we perform a 3D interpolation with respect to the other three variables (ρ, T, Y_e) to get κ_a,s(ϵ). Finally, we use those values to discretize and evaluate the integrals in Eqs. (<ref>-<ref>) to obtain the desired opacities.
For our work, we use 400 points for ρ, 180 for T, 60 for Y_e, and 24 for ϵ.
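Schematically, the energy average over the tabulated energy bins can be written as in the following sketch; the array names are hypothetical, and the 3D interpolation of κ(ϵ) in (ρ, T, Y_e) is assumed to have been performed already.

import numpy as np
from scipy.special import expit

def fermi_dirac(eps, T, mu):
    # ultrarelativistic Fermi-Dirac occupation, written via expit for numerical stability
    return expit((mu - eps) / T)

def gray_equilibrium_opacities(eps_bins, kappa_eps, T, mu):
    # eps_bins: tabulated neutrino energies; kappa_eps: kappa(eps) interpolated at (rho, T, Y_e)
    f = fermi_dirac(eps_bins, T, mu)
    w_energy = eps_bins**3 * f            # I_eq ~ eps^3 f_FD
    w_number = eps_bins**2 * f            # n_eq ~ eps^2 f_FD
    kappa_eq = np.trapz(kappa_eps * w_energy, eps_bins) / np.trapz(w_energy, eps_bins)
    kappa_eq_n = np.trapz(kappa_eps * w_number, eps_bins) / np.trapz(w_number, eps_bins)
    return kappa_eq, kappa_eq_n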
Table <ref> lists all reactions taken into account for the calculation of the spectral opacities and the emissivities, with related references on the calculation method. We note that, in principle, more reactions could be included.
As also reported in Ref. <cit.>, the tables give an unphysically high opacity in regions with ρ<10^11 g/cm^3 and T ≲ 0.35 MeV. This is because blocking factors are applied to the absorption opacities only for ρ>10^11 g/cm^3. Unfortunately, the application of blocking factors in lower-density regions leads to numerical issues for 1 MeV ≲ T ≲ 30 MeV.
Therefore, we modified the original code to extend the domain of application of the absorption blocking factors to the regions where T<0.35 MeV and Y_e ≶ 0.4 (> for ν_e, < for ν̅_e), independently of ρ, in addition to the region ρ>10^11 g/cm^3. This ensures that we obtain a smooth table that is free of the unphysical absorption opacities that were previously affecting the low-density and low-temperature regions.
So far, we assumed neutrinos to be in equilibrium with the fluid. However, this is, in general, not the case. Since the cross sections of neutrinos scale with ϵ^2, the assumption of neutrinos at equilibrium with the fluid would lead to an underestimate of opacities when hot neutrinos out of equilibrium cross a region of cooler fluid. To take this energy dependence into account, we apply the correction
κ_a,s,n = κ_a,s,n^eq( T^ν_ eff/T)^2,
which is also used in most other grey M1 implementations <cit.>. Here, T^ν_ eff is the effective temperature of the neutrinos.
To obtain T^ν_ eff, we assume the neutrino spectrum to be Planckian with temperature T^ν_ eff and reduced chemical potential η_ν = μ/T_f. We can then evaluate the average neutrino energy as
⟨ϵ_ν⟩ = F_3(η_ν)/F_2(η_ν) T^ν_ eff,
with F_k being the Fermi integral of order k. Since we know ⟨ϵ_ν⟩ = J/n, we can solve Eq. (<ref>) for
T^ν_ eff = F_2(η_ν)/F_3(η_ν)J/n,
and plug T^ν_ eff into Eq. (<ref>) to obtain the corrected opacity.
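A compact sketch of this correction, with the Fermi integrals evaluated by simple quadrature, could look as follows (the function names are ours):

import numpy as np
from scipy.special import expit

def fermi_integral(k, eta, x_max=300.0, n=6000):
    # F_k(eta) = int_0^inf x^k / (exp(x - eta) + 1) dx, simple trapezoidal quadrature
    x = np.linspace(1.0e-6, x_max, n)
    return np.trapz(x**k * expit(eta - x), x)

def corrected_opacity(kappa_eq, J, n_nu, T_fluid, eta_nu):
    # T_eff from <eps> = J/n = F_3(eta)/F_2(eta) T_eff, then kappa = kappa_eq (T_eff/T)^2
    T_eff = fermi_integral(2, eta_nu) / fermi_integral(3, eta_nu) * (J / n_nu)
    return kappa_eq * (T_eff / T_fluid) ** 2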
EoS tables only provide the chemical potentials of neutrinos in thermal equilibrium with the fluid. When neutrinos decouple from the latter, we expect them to approach a distribution with zero chemical potential, i.e., the distribution describing a fixed number of particles. As in <cit.>, to qualitatively account for this transition, we evaluate the reduced chemical potentials of Eq. (<ref>) in the following way:
η_ν =μ/T(1 - e^-τ),
where τ is the optical depth provided by the NLS of <cit.>.
Finally, once κ_a,s,n are set, it remains to set η and η_n. For that, we assume the Kirchhoff law
η = κ^eq_a 4π/(hc)^3 F_3(η_ν) T^4,
η_n = κ^eq_n 4π/(hc)^3 F_2(η_ν) T^3.
Such a choice ensures that neutrinos thermalize with the fluid and reach the expected thermal equilibrium state when trapped.
§ NUMERICAL SCHEME
We implemented the above multipolar formalism for radiation transport as a new module in the BAM code <cit.>, and will provide implementation details below.
§.§ Closure factor
Equation (<ref>) cannot be evaluated directly since H^α and J are functions of P^ij (and so of ζ); cf. Eqs. (<ref>) and (<ref>).
Hence, the closure factor must be found by solving the following implicit equation
[ ζ^2 J^2(ζ) - H_α(ζ)H^α(ζ) ] / E^2 = 0
using a root finder algorithm.
In our implementation, we solve Eq. (<ref>) for ζ using a Dekker algorithm <cit.>, which improves the convergence speed compared to the bisection scheme.
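The structure of the root solve can be sketched as below; Brent's method (scipy.optimize.brentq) is used here only as a stand-in for the Dekker solver of the actual implementation, and the callables returning J(ζ) and H_αH^α(ζ) are assumed to wrap the closure relations above.

import numpy as np
from scipy.optimize import brentq

def solve_closure_factor(J_of_zeta, H2_of_zeta, E, zeta_min=0.0, zeta_max=1.0):
    # root of [zeta^2 J(zeta)^2 - H_alpha H^alpha(zeta)] / E^2 = 0 in zeta
    def f(zeta):
        return (zeta**2 * J_of_zeta(zeta)**2 - H2_of_zeta(zeta)) / E**2
    fa, fb = f(zeta_min), f(zeta_max)
    if fa * fb > 0.0:
        # no sign change (e.g. purely trapped or purely free streaming): pick the closer limit
        return zeta_min if abs(fa) < abs(fb) else zeta_max
    return brentq(f, zeta_min, zeta_max, xtol=1.0e-10)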
§.§ Fluxes
To evaluate the numerical fluxes at cell interfaces, we follow Ref. <cit.>. Given a field variable u and its flux ℱ(u), we employ a linear combination of a low-order diffusive flux ℱ_ LO and a second-order non-diffusive flux ℱ_ HO so that:
ℱ_i+1/2 = ℱ_ HO(u_i+1/2) - A [ ℱ_ HO(u_i+1/2) - ℱ_ LO(u_i+1/2) ] ,
where A = min(1, 1/κΔ x) and κ = (κ_s,i + κ_s,i+1 + κ_a,i + κ_a,i+1)/2.
This ansatz leads to ℱ_i+1/2 = ℱ_ LO in the free streaming regime and to ℱ_i+1/2≃ℱ_ HO in the scattering/absorption regime.
The low-order diffusive flux is computed using fluxes at the cell center, and a local Lax-Friedrichs (LLF) Riemann solver <cit.>:
ℱ_ LO(u_i+1/2) = ℱ(u_i) + ℱ(u_i+1)/2 - λ_ amaxu_i+1 - u_i/2,
with
λ_ amax = max _a ∈{i, i+1} b ∈ [1,2] { |λ^b_a| },
where λ^b are the characteristic velocities of the system. This choice ensures the monotonicity preservation of the solution in case of shocks and leads to better stability in the free streaming regime due to an increased numerical dissipation.
However, in some cases, the latter can also be a disadvantage: e.g., for radiation in an optically thick medium, it would introduce an unphysical diffusion, leading to a wrong estimate of the neutrino diffusion rate. To avoid this effect, we employ the following non-diffusive scheme in optically thick regions:
ℱ_ HO(u_i+1/2) = 1/2 [ ℱ(u_i) + ℱ(u_i+1) ].
Our choice of ℱ_ HO cures the unphysical diffusion of ℱ_ LO, but in case of shocks, it can violate monotonicity preservation.
To make the scheme described by Eq. (<ref>) able to handle shocks in a thick regime without adding unphysical diffusion in smooth regions, we first compute ℱ in every point using Eq. (<ref>) and then set ℱ = ℱ_ LO if one of the following conditions is satisfied:
* Δ^n_i-1Δ^n_i < 0 or Δ^n_i Δ^n_i+1 < 0, i.e., if the solution at the current time step shows an extremum.
* Ẽ^n+1_i≤ 0 or Ẽ^n+1_i+1≤ 0, i.e., if the energy solution at the next time step would be overshoot to a negative value.
* Δ^n+1_i-1/Δ^n+1_i < 1/4 or Δ^n+1_i/Δ^n+1_i+1 < 1/4, i.e., if the solution at next time step would develop an extremum or if the change in the slope happens too quickly,
where
Δ_i^n = u^n_i+1 - u^n_i,
u^n+1_i = u_i^n - Δ t/Δ x ( ℱ^n_i - ℱ^n_i-1 ).
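For a single field in one dimension, the blended interface flux described above can be sketched as follows; the array names are ours, and the fallback to ℱ_ LO under the three conditions listed above is only indicated by a comment.

import numpy as np

def blended_interface_flux(u, flux_c, lam_max, kappa_a, kappa_s, dx):
    # u, flux_c, kappa_a, kappa_s: cell-centered arrays; lam_max: per-interface max |lambda|
    f_l, f_r = flux_c[:-1], flux_c[1:]
    u_l, u_r = u[:-1], u[1:]
    f_ho = 0.5 * (f_l + f_r)                          # non-diffusive centered flux
    f_lo = f_ho - 0.5 * lam_max * (u_r - u_l)         # local Lax-Friedrichs flux
    kappa = 0.5 * (kappa_a[:-1] + kappa_a[1:] + kappa_s[:-1] + kappa_s[1:])
    A = np.minimum(1.0, 1.0 / np.maximum(kappa * dx, 1.0e-30))
    f_blend = f_ho - A * (f_ho - f_lo)
    # in the full scheme, f_blend is reverted to f_lo wherever one of the three
    # conditions listed above (extrema, energy overshoot, rapid slope change) is met
    return f_blend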
Characteristic velocities are constructed as a linear combination of thin and thick velocities with the same coefficients used to compose the closure in Eq. (<ref>). We chose to employ the following velocities in a generic i^th-direction:
λ_ thin ^ 1,2 = -β^i ±α |F^i|/|F|,
λ_ thick ^1,2 = -β^i ±α√(γ^ii/3).
In our tests, this particular choice increases the stability of our scheme without affecting the accuracy. Thin velocities are the same as employed in <cit.> while the thick ones are taken from <cit.>. For a complete list and discussion of the characteristic velocity, we refer to <cit.>.
We note that, in principle, Eq. (<ref>) is second-order accurate in the diffusive region far away from shocks or solution's extrema and first-order accurate in the free streaming regime.
§.§ Implicit-explicit time step
Scattering and absorption opacities, even in the geometrized units handled by BAM, can reach values as large as ∼10^3/Δ t in very optically thick regions, e.g., in the neutron star interior. For such values, the collisional source terms S^α of Eqs. (<ref>, <ref>) become stiff, i.e., we have to treat these terms with an implicit scheme. Fluxes and gravitational sources are instead handled explicitly within a second-order scheme.
A full time step of our radiation evolution algorithm is given by:
q^* - q^n/Δ t = - ∂_i F^i(q^n) + G(q^n) + S_coll(q^*),
q^n+1 - q^n/Δ t = - ∂_i F^i(q^*) + G(q^*) + S_coll(q^n+1),
where q = (Ẽ, F̃_i, N), and F^i, G, and S_coll represent fluxes, gravitational-source terms, and collisional source terms of Eq. (<ref>), (<ref>), and (<ref>), respectively. This method is 2nd-order accurate in fluxes and gravitational terms but only 1st-order accurate in the implicit terms.
Hydrodynamics variables are kept constant during the radiation substep of Eq. (<ref>) and are updated after the second step using S_coll(q^n+1). Analogously, radiation variables are kept constant during the hydrodynamics and spacetime evolution substeps, which are performed ignoring radiation-fluid interactions.
The application of a partially implicit method requires the solution of a system in Ẽ^n+1, F̃^n+1_i, and N^n+1.
The last variable is decoupled from the rest of the system since its implicit time step can be written as
N^n+1 - N^n/Δ t = -∂_i ℱ^i_N(N^n) + α√(γ)η_n - κ_n N^n+1/f^0,
which can be solved straightforwardly for N^n+1 as:
N^n+1 = [ N^n - ∂_i ℱ^i_N(N^n) Δ t + α√(γ)η_n Δ t ] / [ 1 + (κ_n/f^0) Δ t ].
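The corresponding update is essentially a one-liner; a schematic version (with all quantities being grid arrays and the flux divergence already assembled) reads:

import numpy as np

def update_number_density(N, div_flux_N, alpha, sqrt_gamma, eta_n, kappa_n, f0, dt):
    # explicit advection and emission, implicit absorption: stable for stiff kappa_n
    rhs_explicit = -div_flux_N + alpha * sqrt_gamma * eta_n
    return (N + dt * rhs_explicit) / (1.0 + dt * kappa_n / f0)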
Solving for the neutrino momenta, unfortunately, requires more effort. We follow the linearized scheme of <cit.>. Plugging the expression of J and H^α in Eqs. (<ref>) and (<ref>), into the definition of S^α in Eq. (<ref>), and linearizing assuming ζ and F^i/|F| to be constant, we can write
S̃_α^n+1 = Ẽ^n+1 A_α + F̃^n+1_iB^i_α + √(γ)η u_α,
where A_α and B^i_α are tensor functions of κ_a, κ_s, ζ, F^i/|F|, and u^α only. In our implementation, we solve the implicit time step using the values of ζ and F^i/|F| at time step n. This is necessary for obtaining a linear system in (Ẽ^n+1,F̃_i^n+1). Plugging this expression into Eq. (<ref>), we get a system of four linear equations in four variables, whose analytical solution can be found in Appendix <ref>.
We point out that the scheme used for the collisional terms is not fully implicit, since A_α and B^i_α also depend on the neutrino variables, and the values of the fluid's opacities are not updated according to the radiation-fluid interaction at each substep.
However, the solution of a fully coupled implicit system would require a non-linear root finder and an update of fluid variables at each substep with a significantly higher computational cost. For this reason, we limit ourselves to this linearized implicit scheme, which we found to be enough to ensure the stability of the code.
Finally, it is worth mentioning that other M1 implementations <cit.> treat the nonlinear terms of S_coll implicitly. This is equivalent to treating the ratio f^i = F^i/|F| as a variable at time n+1 and solving for a four-dimensional nonlinear root-finding problem. This scheme is believed to be more accurate in describing the interaction of radiation with a fast-moving fluid since it handles better the terms proportional to v · f. However, in the next section, we will show that our linearized scheme properly captures the advection of trapped radiation by a moving fluid, which is a stringent test that must be satisfied by a radiation transport code oriented to the simulation of BNS mergers.
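Once the coefficients of the linearized source are assembled into a 4×4 matrix, the implicit step reduces to a small linear solve per grid point; schematically (M and b collect the contributions of A_α, B^i_α and the emission term, whose explicit form we do not reproduce here):

import numpy as np

def implicit_collision_update(q_star, M, b, dt):
    # solve (I - dt M) q^{n+1} = q_star + dt b for q = (E_tilde, F_tilde_x, F_tilde_y, F_tilde_z)
    lhs = np.eye(4) - dt * M
    return np.linalg.solve(lhs, q_star + dt * b)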
§.§ Neutrino right-hand side routine
In the following, we summarize the steps followed to evaluate the full right-hand side (RHS) of the neutrino sector from E, F^i, and N, i.e., Eqs. (<ref>, <ref>, <ref>):
* We check the causality constraint |F| ≤ E; if it is not satisfied, we rescale F^i such that |F| = E. This step is necessary to ensure the causality and hyperbolicity of the scheme.
* We evaluate the closure factor ζ solving Eq. (<ref>). We use ζ to evaluate the fluid frame energy J and the neutrino number density n.
* We set opacities using the scheme described in the previous section.
* We compute fluxes at the cell's interfaces using Eq. (<ref>).
* We check whether the reconstructed fluxes satisfy one of the conditions listed above. If they do, we recompute them using Eq. (<ref>).
* We add the fluxes divergence to the RHS.
* We evaluate the gravitational source terms of Eq. (<ref>) and (<ref>) and add them to RHS.
* We solve Eq. (<ref>) for E^n+1, F_i^n+1, and N^n+1 using the explicit part of RHS we have evaluated in the previous points.
All the steps listed above are repeated for the three neutrino species.
§ NUMERICAL TESTS
§.§ Geodesics
To test the fluxes and gravitational sources, we set up a test employing a Kerr-Schild spacetime with zero angular momentum, in which we shoot a beam of free-streaming neutrinos with Ẽ = |F̃| = 1 from the left to the right of our numerical domain.
In Fig. <ref>, the neutrino beam is injected in the simulation tangentially to the black hole (BH) horizon at a coordinate distance 5M to 5.5M from the singularity (which is located in the origin of the coordinate system). The red lines in the figure show light-like geodesics that neutrinos at the top and bottom of the beam are supposed to follow. The whole beam should then be contained between these two lines.
We observe that most of the neutrino energy remains confined between the two geodesics, with a small part dispersed outside, mostly because of the low-order reconstruction scheme employed to handle the free streaming region, which introduces numerical dispersion. This interpretation is strengthened by the fact that the dispersion occurs on both sides of the beam and decreases with increasing grid resolution. However, we expect that this is not an issue in BNS simulations since we do not expect sharp variations of the energy density in free streaming regions as in this test case.
§.§ Absorption
To test the collisional source terms that model the neutrino-fluid interaction in the pure absorption regime, we set up two tests on a flat spacetime, one with a static and one with a stationary moving fluid. In both tests, we shoot a beam of neutrinos similar to the previous case.
Since, in these conditions, neutrinos should be only absorbed and not scattered, we expect their regime to remain purely thin and the momentum vectors to remain parallel to each other.
In Fig. <ref>, we show a wide beam of neutrinos moving from left to right encountering a sphere of matter with κ_a=0.5 in the center and decreasing radially as a Gaussian. As expected, neutrino momenta remain parallel to each other, and as a consequence, the region behind the sphere receives a much smaller amount of radiation when compared to regions on the sides, projecting a very clear shadow on the right edge of the simulation domain.
In the second absorption test, shown in Fig. <ref>, we distribute matter on a vertical tube with homogeneous properties, κ_a=0.05 and v_y = 0.5. As expected, we observe that a part of the radiation is absorbed by the fluid, and another part passes through it without being scattered. In contrast to the previous test, the fluid is not at rest. Hence, the test is well suited to probe the conversion between the fluid and the laboratory frame, which were equal in our previous test, i.e., ζ=1 was trivially satisfied along the beam.
In this new test, we still expect to find ζ=1. However, since now H^αH_α≠ F_i F^i and E ≠ J, this is not trivial anymore.
Based on the success of the test, we can conclude that the root finder algorithm used to evaluate ζ is converging to the correct solution.
§.§ Advection
Advection of trapped radiation in a moving fluid is one of the most challenging situations that our code has to handle. To test such a scenario, we set up a test similar to the one shown in Sec. 4 of Ref. <cit.>, i.e., we evolve a one-dimensional Gaussian neutrino packet trapped in a homogeneous fluid moving at mildly relativistic velocity with stiff, pure-scattering opacity.
As initial conditions, we chose
Ẽ(t=0,x) = e^-x^2, J = 3E/(4W^2-1), F_i = 4/3JW^2 v_i.
As shown in <cit.>, this condition for F_α ensures that H^α=0, i.e., we model a fully thick regime. For the fluid, we choose κ_s=10^3 and |v|=v_x=0.5. We use a single uniform grid with Δ x = 0.05 and employ a Courant-Friedrichs-Lewy (CFL) factor of 0.25. We test two different flux reconstruction schemes to check whether they can capture the correct diffusion rate in the regime κ_s Δ x ≫ 1: a constant reconstruction (u_i+1/2=u_i) with an LLF Riemann solver (Eq. (<ref>)), and the composed flux of Eq. (<ref>) proposed in <cit.>.
Results are shown in Fig. <ref> together with the reference solution, which we assume to be the advected solution of the diffusion equation
Ẽ(x,t) = 1/√(1+4Dt)exp[-(x-v_xt)^2/1+4Dt],
with D=1/(3κ_s) being the diffusivity. We see that the lowest order reconstruction scheme [Eq. (<ref>)] fails in reproducing the correct diffusion rate because of its intrinsic numerical dispersion. The scheme used in <cit.>, instead, performs better except near the maximum, where it reduces again to the lower order one. Moreover, we observe no unphysical amplification of the packet, contrary to the test performed using the library <cit.> in <cit.>. In our implementation, we find that both the neutrino energy and the neutrino number are advected with the correct velocity.
To test the robustness of the scheme, we performed an additional test with identical fluid configuration but neutrinos' initial data given by a step function. Results at time t=4 are shown in Fig. <ref> for different resolutions together with the reference solution
Ẽ(x,t) = 1/2 [ 1 - erf (x-v_x t/2√(Dt) ) ],
with erf being the error function.
This test shows that the flux reconstruction, Eq. (<ref>), together with the linearized collisional sources, Eq. (<ref>), can handle shocks even in the presence of stiff source terms preserving the monotonicity of the solution and with a numerical dispersion that decreases with the increase of resolution.
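For reference, the two analytic profiles used in this comparison are straightforward to evaluate, e.g.:

import numpy as np
from scipy.special import erf

def gaussian_reference(x, t, v_x=0.5, kappa_s=1.0e3):
    # advected solution of the diffusion equation with D = 1/(3 kappa_s)
    D = 1.0 / (3.0 * kappa_s)
    return np.exp(-(x - v_x * t)**2 / (1.0 + 4.0 * D * t)) / np.sqrt(1.0 + 4.0 * D * t)

def step_reference(x, t, v_x=0.5, kappa_s=1.0e3):
    # advected error-function front for the step-function initial data (t > 0)
    D = 1.0 / (3.0 * kappa_s)
    return 0.5 * (1.0 - erf((x - v_x * t) / (2.0 * np.sqrt(D * t))))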
§.§ Uniform Sphere
The uniform sphere test is the closest configuration to an idealized star for which we have an analytical solution of the Boltzmann equations <cit.>. For this reason, several groups have shown such simulations to test their implementations, e.g., Refs. <cit.>. It consists of a sphere of radius r_s=1. In its interior, we set κ_a=η= constant and κ_s=0. We set up this test on a 3-dimensional Cartesian grid with Δ x = Δ y = Δ z = 0.05 imposing reflection symmetry with respect to x-y, x-z and y-z planes. Evolution is performed using a RK3 algorithm with a CFL factor of 0.25. We perform this test with two different opacities κ_1 = 5 and κ_2=10^10 to test different regimes; cf. <cit.>.
Figure <ref> shows both the numerical and analytical Ẽ as a function of the radius for both opacities. The numerical solution is taken at t=12 along the diagonal x=y=z. Overall, we find a good agreement between our numerical result and the analytical solution of the Boltzmann equation, comparable to the results obtained in other works. However, we point out that one cannot expect to converge to the exact solution since the M1 scheme is only an approximation to the Boltzmann equation and is only exact in the fully trapped or free streaming regimes (without crossing beams).
§.§ Single isolated hot star
We evolve a single isolated hot neutron star employing the SFHo EoS <cit.>. Initial data are constructed by solving the TOV equations with the assumption of constant entropy and beta-equilibrium as in <cit.>. For the integration of the TOV equation, we choose ρ_c = 8.65 × 10^14 g/cm^3 and an entropy per baryon s = 1k_B. This leads to a total baryonic mass M_ bar = 1.64 M_⊙, which corresponds to a gravitational mass of 1.52 M_⊙, a coordinate radius R=9.8 km, and a central temperature of 27.8 MeV. We evolve the system on a grid with a grid spacing of Δ x = Δ y = Δ z = 182 m using a CFL factor of 0.25. In this test, we evolve the hydrodynamics with the module of <cit.> using 4th order Runge-Kutta (RK4) integration algorithm and WENOZ <cit.> primitive reconstruction with LLF Riemann solver for the fluxes at cell interfaces.
Fig. <ref> shows the transition of neutrinos from trapped to the free streaming regime on the surface of the star. As expected, the neutrino energy density reaches its peak in the star's core due to the higher density and temperature of the fluid in this region. Moreover, we can observe that neutrinos inside the star have a zero average momentum since they constitute a particle gas in thermal equilibrium with the fluid, and the transport phenomena are negligible. When the optical depth τ drops below 2/3, interactions with the fluid start becoming subdominant, and neutrinos start traveling freely, developing an average momentum in the radial direction.
Another important consequence of the neutrino-baryon decoupling can be seen in Fig. <ref>, where we can observe all three species of neutrinos being thermalized with the fluid in the inner part of the star and decoupling near their respective photospheres at three different temperatures. After decoupling, the neutrino temperature remains constant due to the lack of interactions with the fluid. The average energy hierarchy is, as reported in the literature, ⟨ϵ_ν_e⟩ < ⟨ϵ_ν̅_e⟩ < ⟨ϵ_ν_x⟩ <cit.>.
§ BINARY NEUTRON STAR MERGERS
§.§ Configurations and Setup
We run 10 different BNS configurations employing two different EoSs (SFHo <cit.> and DD2 <cit.>) with the same total baryonic mass of 2.6 M_⊙, and two different mass ratios of q=M_1/M_2=1 and q=1.2, where M_i is the gravitational mass of the i-th star. All binary systems are considered to be irrotational, i.e., the stars are non-spinning. Further details about the setups are given in Table <ref>. We run the simulations with the SFHo EoS and neutrino transport at two different resolutions: R1, with 96 points per dimension in each of the two finest boxes covering the stars, corresponding to a grid spacing of Δ x_ min = 248 m in the finest level and Δ x_ max = 31.8 km in the coarsest one; and R2, with 128 points in each finest box, for Δ x_ min = 186 m and Δ x_ max = 23.8 km on the coarsest level. Initial data were produced using the pseudo-spectral code SGRID <cit.> under the assumption that matter is in beta-equilibrium with a constant initial temperature of T=0.1 MeV; cf. <cit.>.
The proper initial distance between the stars' centers is set to 38 km. This corresponds to about three orbits before the merger of the stars. Given that we will primarily focus on the post-merger evolution, we did not perform any eccentricity reduction procedure. Both spacetime and hydrodynamics variables are evolved using a method of lines with RK4 algorithm with a CFL factor of 0.25.
Time evolution is performed using a Berger-Oliger algorithm with eight refinement levels. The two finest refinement levels are composed of two moving boxes centered around the stars.
Spacetime is evolved employing the Z4c formulation <cit.>. It is discretized using a finite difference scheme with a fourth-order centered stencil for numerical derivatives. Lapse and shift are evolved using 1+log slicing <cit.> and gamma-driver conditions <cit.> respectively.
For hydrodynamic variables we use a finite volume scheme with WENOZ <cit.> reconstruction of primitives at cell interfaces and HLL Riemann solver <cit.> for computing numerical fluxes. We apply the flux corrections of the conservative adaptive mesh refinement <cit.> to the conservative hydrodynamics variables but not to the radiation fields.
§.§ Ejecta
We compute ejecta properties using a series of concentric spheres centered around the coordinate origin with radii varying from 300 km to 1000 km. On each sphere, the total flux of mass, energy, and momentum of outgoing, unbound matter is computed. On such extraction spheres, the matter is assumed to be unbound according to the geodesic criterion <cit.>, i.e., if
u_t < -1 and u_r>0.
From now on, we will always refer to the unbound mass as the one that satisfies this criterion unless stated otherwise. In contrast to previous BAM versions, the spheres with radius 450 km and 600 km also save the angular coordinates (θ,ϕ) of the matter flux together with u_t, ρ, T, and Y_e. This allows a more detailed analysis of the ejecta that includes its geometry and thermodynamical properties, e.g., the use of the Bernoulli criterion <cit.> for determining the unbound mass, i.e.,
h u_t < -1 and u_r>0.
Since u_t [or hu_t for Bernoulli] is assumed to be conserved and at infinity u_t=-W [or hu_t=-W], it is also possible to compute the asymptotic velocity v_∞ of each fluid element as v_∞ = √(1 - 1/u_t^2) [or √(1-1/(hu_t)^2)].
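In practice, the unbound-matter flag and the asymptotic velocity on the detection spheres can be computed as in the following sketch (the array names are ours):

import numpy as np

def ejecta_flags_and_vinf(u_t, u_r, h, criterion="geodesic"):
    # geodesic criterion: u_t < -1; Bernoulli criterion: h u_t < -1 (both with u_r > 0)
    e = h * u_t if criterion == "bernoulli" else u_t
    unbound = (e < -1.0) & (u_r > 0.0)
    e2 = np.maximum(e**2, 1.0)            # |e| >= 1 wherever matter is unbound
    v_inf = np.where(unbound, np.sqrt(1.0 - 1.0 / e2), 0.0)
    return unbound, v_inf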
§.§.§ Ejecta Mass
Mass ejection from BNS mergers within a dynamical timescale 𝒪(10 ms) has already been the subject of several detailed studies, e.g., <cit.>. There is a general consensus on dividing dynamical ejecta into two components: tidal tail and shocked ejecta. The former is composed of matter shed from the star's surface right before the merger due to tidal forces. Since this matter does not undergo any shock heating or weak interaction, it has a low Y_e comparable to the one of neutron stars' outer layers and low entropy ≲ 10 κ_B. Shocked ejecta, on the opposite, is launched by the high pressure developed in the shock formed at the star's surface during the plunge. It has significantly higher entropy and Y_e with respect to the tidal tails. It is produced later but with higher velocity, rapidly reaching the tidal tails and interacting with them <cit.>.
In Fig. <ref>, we show the mass of the unbound matter moving through the detection sphere at r≃450 km as a function of time for both geodesic and Bernoulli criteria. There is an important qualitative difference between simulations where neutrinos are neglected and the ones including neutrino transport. While in the former case, the ejecta mass saturates within 20 ms after the merger, in the latter one, we observe a non-negligible matter outflow continuing for the whole duration of the simulation, although with decreasing intensity. Such a phenomenon has been observed in other BNS simulations with M1 transport in <cit.>, where a very similar numerical implementation of M1 is used, and in much smaller amount also in <cit.>. We attribute it to the neutrinos emitted from the remnant. Through scattering/absorption processes in the upper parts of the disk, they can indeed accelerate material, making it gravitationally unbound. This hypothesis is consistent with what we see in Fig. <ref>, where we show the conserved mass density for bound and unbound matter on the xz-plane roughly 45 ms after the merger. We denote by D_u the conserved mass density of unbound matter, i.e., D_u=D where matter is unbound and D_u=0 otherwise. Most of the unbound matter is concentrated in the inner part of the upper edge of the disk, as we would expect from a neutrino wind mechanism powered by the remnant emission.
In particular, in <cit.>, equal mass simulations using the SFHo and DD2 EoSs are performed, and an early neutrino wind mechanism is also observed. However, such simulations only show results up to ≃ 10 ms after the merger.
For both EoSs, the ejecta mass is higher and more rapidly growing for asymmetric configurations. This is in agreement with the higher amount of tidal tails ejecta that asymmetric binaries are known to produce. In the same figure, the amount of ejecta according to the Bernoulli criterion is also shown.
Bernoulli-criterion ejecta corresponds to the geodesic one in the very initial phase of the matter outflow but predicts a significantly higher mass after the dynamical phase. More importantly, Bernoulli ejecta is not close to saturation at the end of the simulation time. These features are comparable with the results of other works, e.g.,<cit.>. This continuous matter outflow is attributed to the so-called spiral wave wind, i.e., the outward transport of angular momentum through the disk due to the shocks.
§.§.§ Electron fraction and velocity
Figure <ref> shows the average of Y_e and v_∞ (⟨ Y_e ⟩ and ⟨ v_∞⟩ respectively), of matter flowing through the detection sphere, located at r≃ 450 km, as a function of time. These quantities are defined as:
⟨ Y_e ⟩ (t) = ∫ dΩ F_D_u (t, Ω) Y_e(t,Ω)/∫ dΩ F_D_u (t, Ω),
⟨ v_∞⟩ (t) = ∫ dΩ F_D_u (t, Ω) v_∞(t,Ω)/∫ dΩ F_D_u (t, Ω),
where r is the radius of the extraction sphere and F_D_u = D_u (α v^r - β^r) is the local radial flux of unbound matter through the detection sphere.
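These averages are simple flux-weighted surface integrals over the stored (θ, ϕ) grid; a sketch with hypothetical array names:

import numpy as np

def flux_weighted_average(q, F_Du, dOmega):
    # <q>(t) = int dOmega F_Du q / int dOmega F_Du on the detection sphere
    den = np.sum(F_Du * dOmega)
    return np.sum(F_Du * q * dOmega) / den if den != 0.0 else 0.0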
All simulations show an overall monotonically increasing electron fraction since matter ejected later remains longer next to the remnant, having more time for protonizing due to neutrino absorption. In addition, most systems show a more or less pronounced plateau at about 5-15 ms after the merger with a visible dependence on the mass ratio. This is likely due to tidal tails containing material with an almost uniform and low ⟨ Y_e ⟩ of ≃ 0.1. Tidal tails are then reached and partially reprocessed by the faster and more proton-rich shocked ejecta, giving rise to the plateau we observe[Note that this plateau is absent for SFHo_q1_M1 due to the smaller amount of tidal ejecta for this equal mass, soft-EoS configuration.].
⟨ v_∞⟩ has a sharp velocity peak at early times followed by slow late-time ejecta. The fact that the initial peak does not show a bimodal shape is another indicator that tidal and shock ejecta have already merged at the extraction radius. Finally, we see that asymmetric binaries present a higher velocity peak of the early ejecta. This feature is consistent with Fig. <ref> and is responsible for the tails with v∼ 0.5c - 0.7c. The velocity histogram in the same figure shows no dependence on the EoS, with the mass ratio being the only feature determining the velocity profile.
§.§.§ Angular dependence
In the upper panel of Fig. <ref>, we show the normalized polar angle distribution of the ejecta defined as
m_ ej(θ) = r^2 ∫^T_0 ∫_0^2π F_D_u(t,θ,ϕ) dt dϕ,
with T being the final time of the simulation and r the radius of the detection sphere (in this case ≃ 450 km). According to this definition M_ ej = ∫_0^πsin(θ) m_ ej(θ) dθ. Then m_ ej(θ) is normalized by the total mass of the ejecta M_ ej^ tot.
The peak at θ≃ 0.5 due to the post-merger neutrino wind is immediately visible. At lower latitudes, the neutrino wind mechanism is indeed heavily suppressed by the disk, which is cold and optically thick and stops the neutrinos emitted by the remnant (see Fig. <ref>). For asymmetric binaries, a peak at low latitudes is also visible, caused by the tidal tail ejecta.
The effect of such a component on the electron fraction is visible in the lower panels of the same figure. It is responsible for the lower ⟨ Y_e ⟩ of the equatorial region, and, as expected, it is more evident for asymmetric binaries.
In the same panel, we can also observe that when the neutrino wind is included in the ejecta, the ⟨ Y_e ⟩ of the polar regions increases significantly, reaching up to 0.5, while regions with polar angles above one radian are unchanged by the phenomenon. This is due to the intense neutrino irradiation that this matter received, which increased its electron fraction. The fact that dynamical ejecta from equal mass binaries has an overall higher ⟨ Y_e ⟩ can be explained by the higher amount of shocked ejecta that such configurations are known to produce. Shock ejecta is indeed supposed to have a higher entropy and electron fraction with respect to tidal tail ejecta and is more isotropically distributed. The last characteristic can explain why symmetric binaries give a higher ⟨ Y_e ⟩ than their respective asymmetric counterparts at lower latitudes.
In Fig. <ref>, a histogram of the ejecta's ⟨ Y_e ⟩ is shown. Dynamical ejecta of equal mass binaries produces a fairly uniform distribution of mass with a drop for ⟨ Y_e ⟩≲ 0.1. In the unequal mass scenario, the situation changes. Here, we have indeed a clear peak at ⟨ Y_e ⟩≃ 0.1 produced by tidal tails. In both cases, the inclusion of neutrino wind leads to an increase of ejecta with 0.3 ≲⟨ Y_e ⟩≲ 0.6.
Another important feature of the ejecta that has been investigated in literature is the correlation between Y_e and entropy (s/k_B). In Fig. <ref>, we show a 2D histogram of the total ejecta in these two variables. Most of the ejecta mass lies within a main sequence with a positive monotonic correlation between entropy and Y_e. This is a consequence of the fact that fluid with a higher entropy is characterized by a more proton-rich thermodynamical equilibrium configuration. The exception to this rule is made by matter with Y_e ≲ 0.3 and entropy in a very wide range going up to s ∼ 100 k_B. This matter is present in every simulation and is believed to be a consequence of the interaction between tidal tails and shocked ejecta <cit.>. When the latter hits the former, it generates indeed a violent shock that increases the fluid's entropy. Since this happens at low density, when the neutrino-matter interaction timescale is bigger than the dynamical one, this does not leave time for the fluid to settle to an equilibrium configuration with higher Y_e.
The average ejecta properties of our simulations are summarized in Table <ref>. Here we see, as expected, a strong dependence of ⟨ Y_e ⟩ on the mass ratio, with asymmetric binaries producing a more neutron-rich and an overall more massive outcome. An imprint of the tidal deformability can also be observed, with the more deformable EoS (DD2) producing less massive but neutron-rich ejecta. For SFHo simulations, the dependence of the ejecta mass and ⟨ Y_e ⟩ on the mass ratio is consistent through all resolutions. We do not observe any significant dependence of the average asymptotic velocity on the mass ratio or tidal deformability.
§.§ Neutrino luminosity
We determine the neutrino luminosity as the total flux of neutrino energy Ẽ through the same series of spheres used for the analysis of the ejecta, i.e., L_ν = r^2 ∫ dΩ (αF̃^r - Ẽβ^r). Similarly to the ejecta detection, also here the two spheres, located at 450 km and 600 km, are able to save the flux angular direction together with its values of J and n, enabling a more detailed study that includes the geometry of neutrino luminosity and its average energy.
Looking at the total neutrino luminosity in the left panel of Fig. <ref>, we find that the ν̅_e emission is brighter in the early post-merger with respect to the other species. Its peak luminosity of ∼ 10^53 erg/s is consistent with results obtained by similar simulations <cit.>. The initial ν̅_e burst is a consequence of the fast protonization that the material undergoes right after the merger, when the beta equilibrium is broken, and the system evolves toward a new meta-stable configuration characterized by a higher entropy and Y_e. Approximately 10 ms after the merger, the ν̅_e luminosity starts decreasing and approaches the luminosity of ν_e a few tens of ms later. This signals that the system is approaching the weak equilibrium configuration within the late simulation time. Both ν_e and ν_x show similar behavior, with a peak at ∼ 10 ms and roughly half of the intensity of ν̅_e. In the early post-merger, we have, as reported in the literature, L_ν̅_e > L_ν_x > L_ν_e. The brightness oscillations that appear in this phase for every neutrino species are due to the remnant oscillations, which cause shocks propagating outward and perturbing the surface of the neutrinosphere. The last inequality is inverted after the luminosity peak: L_ν_x drops faster because of the remnant's cooling. ν_x interactions indeed include only thermal processes that are independent of Y_e, which makes the heavy-neutrino emission more sensitive to the temperature than the other species. The right panel of Fig. <ref> shows the average energy of neutrinos flowing through the detection sphere at r = 450 km as a function of time. As reported in the literature, ν_x have significantly higher energy with respect to the other two species in the early post-merger. This is an expected feature since heavy neutrinos interact less with matter and decouple at higher densities, where matter is usually also hotter. All the features described above have already been explored in more detail in, e.g., <cit.>.
Finally, in Fig. <ref>, we show the total luminosity for all four configurations at resolution R2. The first observation is that SFHo systems emit significantly more neutrinos than their DD2 counterparts due to the temperature difference visible in Fig. <ref>, with a difference of almost 50% at the brightness peak. Such an important difference could explain, or at least contribute to, the significant difference in the neutrino wind emission between the two EoSs.
§.§ Remnant properties
We begin the analysis of the remnant by looking at Fig. <ref>, showing the evolution of the density and temperature maxima. In the left panel, we can observe that the maximum density is not significantly affected by neutrino radiation, with differences rarely exceeding 5% during the post-merger oscillations and settling to smaller values after ≃ 15 ms. Considering the temperature evolution, we find that the maximum temperature for the simulations using SFHo is indeed lower for systems using M1 compared to simulations without evolving the neutrinos. In contrast, the setups employing the DD2 EoS show an almost unchanged maximum temperature.
The difference between M1 and neutrinoless simulations is more pronounced for SFHo EoS because of the higher amount of neutrino energy involved. We explain this result as an indirect effect of neutrino cooling affecting the remnant in the early post-merger.
Since all the binary simulations performed in this work produce a stable massive neutron star (MNS) surrounded by a disk, we adopt the usual convention of defining the disk of a MNS+disk system as the region where matter is gravitationally bound and ρ<10^13 g/cm^3, while the MNS is defined by ρ>10^13 g/cm^3 <cit.>. This allows us to provide an estimate of the mass of the disk and the MNS.
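A minimal sketch of how such a density-threshold decomposition could be evaluated on a Cartesian grid is given below; the variable names, the use of the conserved density D = √γ W ρ, and the geodesic criterion u_t > -1 for boundedness are assumptions, not a description of the actual analysis code.

```python
import numpy as np

RHO_SPLIT = 1e13  # g/cm^3: conventional density threshold separating MNS and disk

def split_mns_disk(rho, W, sqrt_gamma, dV, u_t=None):
    """Split the conserved rest-mass density D = sqrt(gamma)*W*rho into an
    MNS part (rho > RHO_SPLIT) and a disk part (rho <= RHO_SPLIT, bound matter).
    u_t (optional) selects bound matter through the geodesic criterion u_t > -1."""
    D = sqrt_gamma * W * rho
    mns = rho > RHO_SPLIT
    disk = ~mns
    if u_t is not None:
        disk &= (u_t > -1.0)
    return D[mns].sum() * dV, D[disk].sum() * dV

# Toy usage on a small random grid (arbitrary units)
rho = 10 ** np.random.uniform(8, 15, size=(64, 64, 64))
m_mns, m_disk = split_mns_disk(rho, W=np.ones_like(rho),
                               sqrt_gamma=np.ones_like(rho), dV=1.0)
```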
In Fig. <ref>, we show the masses of the disk and the MNS as a function of post-merger time. After an initial phase in which the disk grows rapidly, acquiring mass from the remnant, the disk mass stabilizes at ≃ 20 ms after the merger. Such disk accretion is usually sustained by viscous angular momentum transport and by shocks generated by the m=1 bar-mode oscillations of the central object <cit.>, and it is counteracted by the gravitational pull of the central object. The effect of neutrino transport on the disk's mass for SFHo simulations is negligible (see Table <ref>), and the results are robust also at lower resolutions. This is an expected result since the disk formation takes place at times when neutrino cooling is not the dominant source of energy loss.
§.§ Nucleosynthesis
The nucleosynthesis calculations are performed in postprocessing following the same approach as in <cit.>, employing the results from the nuclear reaction network of <cit.>.
In Fig. <ref>, we show the abundances as a function of the mass number A of the different isotopes synthesized by the r-process in the ejecta 32 years after the merger.
To compare the results for different simulations, we shift the abundances from all models such that they always match the solar one at A=195. The solar residual r-process abundances are taken from <cit.> (for a review of the solar system abundances, see <cit.>).
The normalization to A_ sol=195 is chosen because nucleosynthesis in neutron-rich ejecta from BNS mergers was shown to robustly reproduce the third r-process peak <cit.>. We also consider the normalizations to A_ sol=135 and A_ sol=152 commonly used in the literature <cit.>. The former leads to only a minor qualitative change, while the latter leads to an overall overestimation of the abundances at both the second and third r-process peaks.
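The normalization step itself can be illustrated with the short sketch below, which rescales a computed abundance pattern so that it matches the solar residuals at a chosen mass number; the array layout is an assumption. Changing A_norm to 135 or 152 reproduces the alternative normalizations discussed above.

```python
import numpy as np

def normalize_abundances(A, Y, A_solar, Y_solar, A_norm=195):
    """Rescale abundances Y(A) so that they equal the solar r-process
    residual abundance at mass number A_norm (third peak by default)."""
    y_sim = np.interp(A_norm, A, Y)              # simulated abundance at A_norm
    y_sun = np.interp(A_norm, A_solar, Y_solar)  # solar residual at A_norm
    return Y * (y_sun / y_sim)
```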
As the mass-averaged electron fraction of the dynamical ejecta from most models (except the SFHo q=1 model) is small (see Fig. <ref>), the r-process nucleosynthesis results in an underproduction of the lighter, 1st- and 2nd-peak elements. Additionally, the elements around the rare-earth peak are underproduced. This can also be attributed to the systematic uncertainties in the simplified method we employ to compute nucleosynthesis yields. The simulation with the SFHo EoS and mass ratio q=1 displays a flatter electron fraction distribution in its ejecta, and its relative abundances at the 2nd peak are consistent with solar.
The Bernoulli ejecta displays, on average, a higher electron fraction, since it is ejected on a longer timescale and therefore undergoes stronger neutrino irradiation. A higher Y_e leads to a larger amount of lighter elements being produced. However, the overall underproduction of 1st-peak r-process elements remains for all simulations but the SFHo q=1 model.
§.§ Gravitational waves
While neutrinos are not expected to play any role during the inspiral, they could, in principle, be relevant in the post-merger dynamics, e.g., through the additional cooling channel of the formed remnant, which might change the compactness of the remnant and, therefore, the post-merger GW frequency and the time until black-hole formation. We investigate this possibility in the following by comparing the GW signal produced by each simulation and its `neutrinoless' counterpart.
We compute the GW strain h on a series of concentric spheres using the Ψ_4 Newman-Penrose scalar <cit.>, following the method of <cit.>.
In Fig. <ref>, we show the GW strain h and its frequency for the dominant (2,2) mode of each simulation.
Overall, one can observe only minimal changes in the GW amplitude and frequency caused by neutrino cooling[We note that the spike at 6 ms for DD2_q12 is due to numerical inaccuracies when computing the instantaneous GW frequency for a GW signal with almost vanishing amplitude.].
Given the large challenge in measuring the post-merger GW signal from future detections <cit.> and the presumably large uncertainties regarding the extracted post-merger frequencies,
we expect that the differences visible here are not measurable, potentially not even with the next generation of detectors.
However, a more systematic study involving Bayesian parameter estimation is needed to verify this hypothesis.
§.§ Lightcurves
To compute the kilonova signal associated with the extracted ejecta profiles from the performed simulations, we use the 3D Monte Carlo radiative transfer code <cit.>. The code allows us to use the 3D simulation output of the unbound rest-mass density D_u and the electron fraction Y_e of the ejecta as input. The required input data represents a snapshot at a reference time t_0 and is subsequently evolved following a homologous expansion, i.e., the velocity v^i of each fluid cell remains constant.
In Appendix <ref>, we outline the exact procedure employed to obtain the input data.
For the generation of photon packets (assigned energy, frequency, and direction) at each time step, the code employs the heating-rate libraries from <cit.> and computes the thermalization efficiencies as in <cit.>. The photon packets are then propagated through the ejecta, taking into account interactions with matter via electron scattering and bound-bound absorption. The code uses wavelength- and time-dependent opacities from <cit.> as a function of the local densities, temperatures, and electron fraction within the ejecta. We perform the radiative transfer simulations with a total of N_ ph = 10^6 photon packets.
In contrast to previous works in which we used <cit.>, we are now able to use the electron fraction of the material directly and do not have to approximate it through the computation of the fluid's entropy. This is an important improvement since this quantity is fundamental in determining the kilonova luminosity and spectrum. Matter with low Y_e (like tidal tails) can indeed synthesize Lanthanides and Actinides, which have high absorption opacities in the blue (ultraviolet-optical) spectrum, making the EM signal redder. On the contrary, high Y_e material (like shocked ejecta and winds) synthesizes lighter elements that have a smaller opacity and are more transparent to high-frequency radiation, i.e., it will produce a bluer kilonova.
In Fig. <ref>, we show the bolometric luminosity for each simulation for five different observation angles: For the pole with Θ = 0^∘, and in the orbital plane with Θ = 90^∘ for Φ = 0^∘, Φ = 90^∘, Φ = 180^∘, and Φ = 270^∘.
In general, we find that the luminosity at the pole is higher than in the equatorial plane, because of the smaller opacities and the higher amount of mass. At the same time, light curves obtained for the four angles in the orbital plane are rather similar in the q=1 simulations. For the systems with unequal mass, the differences are more prominent, but they tend to decrease in time within a timescale of a few days.
This can be explained by the fact that the ejecta input for these systems is less axisymmetric than for the systems with equal masses (see ejecta maps in Appendix <ref>).
Furthermore, we show in Fig. <ref> the light curves for the four systems in different frequency bands, ranging from ultraviolet to optical and infrared. We focus on one Φ-angle only, i.e., Φ=0^∘. Still, we want to note here that the results for other Φ angles for the systems with unequal masses differ up to about ∼ 1 mag in the first two days after the merger.
We observe that the magnitude difference between polar angles is more pronounced in the ultraviolet and optical bands than in the infrared bands, particularly, in the J- and K-bands.
The light curves for the systems with the SFHo EoS are on average brighter, due to the larger ejecta mass, than systems employing the DD2 EoS (at the same mass ratio). Even more importantly, we observe that the ratio between the blue and the red component of the kilonova is strongly affected by both the EoS and the mass ratio, with the more deformable EoS (DD2) and asymmetric configurations giving a redder kilonova due to the larger amount of tidal-tail ejecta with respect to shocked ejecta.
Moreover, we find that in the orbital plane (Θ=90^∘) the infrared bands are generally more dominant. This is due to the neutron-rich matter of tidal tails located at low latitude, which absorbs most of the radiation at high frequencies. In contrast, for an observer at the pole (Θ = 0^∘), the ultraviolet and optical bands are brighter in the first two days. However, these diminish rapidly, and at later times the red and infrared bands dominate the kilonova signal here as well. Accordingly, a blue kilonova will be observed in the first days, shifting to the red spectra in the following days. These observations indicate again the need for quick follow-up observations of GW signals with upcoming UV-satellites, e.g., <cit.>.
§ CONCLUSIONS
In this article, we implemented a gray M1 multipolar radiation transport scheme following <cit.> in the BAM code. The main features of the implementation are summarized in Table <ref>.
We performed a series of standard tests: transport along lightlike geodesics in vacuum, absorption by static and moving fluid, advection by a moving fluid in the scattering-dominated regime, and emission by a thick uniform sphere.
The main difficulty was to properly account for the collisional sources implicitly and to suppress artificial dissipation in the trapped regime in order to capture the correct diffusion rate. We show that our implementation is able to correctly handle all these regimes employing linearized implicit sources of <cit.> and the flux reconstruction of <cit.>.
In addition, we also performed simulations of a single, isolated, hot neutron star. In this case, both the spacetime and the fluid are dynamically evolved. Opacities are motivated by nuclear physics theory and computed using the library. In this last test, we show that neutrinos correctly thermalize inside the star, where they form a gas in thermal equilibrium with the nuclear matter and decouple at the star's surface at different temperatures according to their species (with the hierarchy T^ν_e_eff < T^ν̄_e_eff < T^ν_x_eff).
Moreover, we showed that neutrinos correctly start developing a non-zero average momentum at the neutrinosphere τ=2/3. In the last part of the article, we simulated four different low-mass BNS configurations using two different EoSs and two mass ratios.
Ejecta from our simulations had the following properties: masses of the order of ∼ 10^-3 M_⊙ with ⟨ V_∞⟩ = 0.1c - 0.2c and ⟨ Y_e ⟩ = 0.2-0.4, the latter with a strong dependence on the mass ratio. In general, more asymmetric systems and systems with a stiffer EoS (DD2) produce lower ⟨ Y_e ⟩ due to the larger mass of tidal tail ejecta, with the lowest ⟨ Y_e ⟩ given by the asymmetric DD2 configuration.
We also illustrated that, on average, more asymmetric binaries produce more ejecta than their symmetric counterparts for both EoSs. The softer EoS (SFHo) ejects more than the stiffer one due to the more violent impact at merger.
Overall, the mechanisms we identified in our simulations are consistent with those reported in the literature for the dynamical ejecta.
Moreover, similar to <cit.>, we found a neutrino-wind ejecta component in the polar region during the whole duration of the simulation, albeit with decreasing matter flux. Such a component is significantly more important for softer EoSs, in our case SFHo, due to the higher outflow of neutrino energy. It can contribute up to 50% of the total ejecta mass and significantly increase ⟨ Y_e ⟩. This component could become even more dominant if the simulations were run for longer.
All our simulations produce a MNS remnant surrounded by a massive, neutrino-thick disk with baryonic mass M_ disk∼ 10^-1 M_⊙. The mass of the disk increases with the mass ratio for SFHo EoS while having the opposite behavior for the stiffer DD2. The results summarized so far are valid for all resolutions.
Finally, we used our new ejecta analysis tools to employ the NR-extracted ejecta properties as inputs for the nucleosynthesis and radiative-transfer codes, which we used to compute nucleosynthesis yields and kilonova light curves, respectively. The use of Y_e obtained directly from the NR simulations produces much more realistic results with respect to the previous assumption based on the fluid's entropy.
We plan to use the implementation described in this article as the standard for our future BNS simulations oriented to the study of ejecta properties and post-merger dynamics and of the associated kilonova light curves and nucleosynthesis yields.
§ ACKNOWLEDGEMENTS
We thank H. Andresen, S. Bernuzzi, B. Brügmann, F. Foucart, E. O'Connor, M. Shibata, and W. Tichy for helpful discussions.
FS and TD acknowledge funding from the EU Horizon under ERC Starting Grant, no. SMArt-101076369. TD and AN acknowledge support from the Deutsche
Forschungsgemeinschaft, DFG, project number DI 2553/7.
TD and VN acknowledge support through the Max Planck Society funding the Max Planck Fellow group
`Multi-messenger Astrophysics of Compact Binaries'. HG acknowledges funding by FAPESP grant number 2019/26287-0. MU acknowledges support through the UP Reconnect Program from the Alumni Researcher Program of the University of Potsdam.
The simulations were performed on the national supercomputer HPE Apollo Hawk at the High Performance Computing (HPC) Center Stuttgart (HLRS) under the grant number GWanalysis/44189, on the GCS Supercomputer SuperMUC_NG at the Leibniz Supercomputing Centre (LRZ) [project pn29ba], and on the HPC systems Lise/Emmy of the North German Supercomputing Alliance (HLRN) [project bbp00049].
§ LINEARIZED IMPLICIT TIMESTEP SOLUTION
The projections of tensors A^α and B^α_i of Eq. (<ref>) perpendicular to the spacelike hypersurface Σ_t can be written as:
n^αA_α = k_a A_(J) - (k_a+k_s) A_(H),
with
A_(J) = W[W^2 + a W^2 (v · f)^2 + b (W^2-1)/(2W^2+1) (3-2W^2)],
A_(H) = W[-1 + W^2 + a W^2 (v · f)^2 + b (W^2-1)/(2W^2+1) (3-2W^2)],
and
n^αB^i_α = k_a B^i_(J) - (k_a+k_s) B^i_(H),
where we define
B^i_(J) = W[-2W + b (W^2-1)/(2W^2+1) 4W^2] v^i,
B^i_(H) = W[1 - 2W^2 + b (W^2-1)/(2W^2+1) 4W^2] v^i.
While for the parallel component, we have
γ^α_i A_α = k_a A_i,(J) - (k_a+k_s) A_i,(H),
with
A_i,(J) = -W[W^2 + a W^2 (v · f)^2 + b (W^2-1)/(2W^2+1) (3-2W^2)] v_i,
A_i,(H) = -[W^3 + a W^3 (v · f)^2 + b W W^2/(2W^2+1) (3-2W^2)] v_i - a W (v · f) f_i,
and
γ^α_i B_α^j = k_a B^j_i,(J) - (k_a + k_s) B^j_i,(H),
with
B^j_i,(J) = W[2W^2 - b (W^2-1)/(2W^2+1) 4W^2] v_i v^j,
B^j_i,(H) = [2W^3 - b W (W^2-1)/(2W^2+1) 4W^2 - b W/(2W^2+1) (2W^2-1)] v_i v^j + (1 - b v^2) W δ_i^j,
where a = (3χ-1)/2 and b = 1- a are the thin and thick closure coefficients respectively and f^i=F^i/|F|.
Using these projections we can write Ẽ and F̃_̃ĩ at time n+1 as:
F̃_i^n+1 = (M^-1)_i^j S_j,
Ẽ^n+1 = 1/(1 + αΔ t n^α A_α) [ Ẽ^n + Δ t (-∂_i ℱ^i_E + G_E) + αΔ t (η√(γ) W - n^α B^i_α F̃_i^n+1) ],
with
M_i^j = δ_i^j - αΔ t γ_i^α B_α^j + [α^2 Δ t^2/(1 + αΔ t n^α A_α)] A_α γ^α_i n^β B_β^j,
and
S_i = F̃_i^n + Δ t ( -∂_j ℱ^j_F_i + G_F_i) + Δ t [ α√(γ) η W v_i + [α A_α γ_i^α/(1 + αΔ t n^α A_α)] ( Ẽ^n + Δ t (-∂_i ℱ^i_E + G_E) + αΔ t √(γ) W η ) ],
where G_E and G_F_i represent the gravitational sources of Eq. (<ref>) and Eq. (<ref>), respectively, computed using Ẽ^n and F̃_i^n.
Since M_i^j and S_i only depend on variables at time n, F̃_i^n+1 must be first computed and then plugged into the expression of Ẽ^n+1 to complete the solution.
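A schematic sketch of this two-step solution, applied cell by cell and with all closure-dependent quantities assumed to be precomputed at time level n, could look as follows (function and variable names are placeholders, not the actual BAM implementation):

```python
import numpy as np

def implicit_radiation_update(M, S, nA, nB, explicit_E, eta_W_sqrtg, alpha, dt):
    """Linearized implicit update: first F_i^{n+1} = (M^{-1})_i^j S_j,
    then E^{n+1} from the freshly computed flux.

    M : (..., 3, 3)  matrix M_i^j built from time-level-n variables
    S : (..., 3)     source vector S_i
    nA, nB : n^alpha A_alpha (scalar field) and n^alpha B^i_alpha (..., 3)
    explicit_E : E^n + dt * (-div F_E + G_E), the explicit part of the E update
    eta_W_sqrtg : sqrt(gamma) * eta * W, the emission source
    """
    F_new = np.linalg.solve(M, S[..., None])[..., 0]      # batched 3x3 solves
    E_new = (explicit_E
             + alpha * dt * (eta_W_sqrtg - np.einsum('...i,...i->...', nB, F_new))
             ) / (1.0 + alpha * dt * nA)
    return E_new, F_new
```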
§ HAMILTONIAN CONSTRAINT VIOLATION
In Fig. <ref>, we show the L_2 norm of the Hamiltonian constraint as a function of time. The latter follows the same qualitative evolution as in Ref. <cit.>. When the initial data are interpolated from sgrid, the Hamiltonian constraint is of the order of 10^-8. The evolution with the Z4c formulation reduces this value to the order of 10^-10 due to its constraint-damping properties. At merger time, the value increases due to the formation of shocks in the hydrodynamic variables, reaching a peak shortly after. After the peak, the value decreases and stabilizes between 10^-9 and 10^-10. Simulations including neutrino transport systematically show a larger violation of the Hamiltonian constraint after the merger. One of the reasons might be that, as is common in the literature, the neutrino stress-energy tensor is not included in the matter term of the spacetime evolution equations. This leads to a mathematical violation of the General Relativity constraints proportional to the neutrino stress-energy tensor. However, we can observe that the value of the Hamiltonian constraint always remains lower than its initial value.
§ EJECTA DATA FOR
Given the limited length of our simulations and the issue of covering both early-time and postmerger ejecta with individual snapshots, we employ 3D snapshots together with information from the detection sphere at r ≃ 450 km. The detailed procedure is as follows:
* We find the latest 3D snapshot in which all the ejecta is contained within the simulation domain. We mark the time of this snapshot as t_ cut. From it, we cut out the matter still contained within the detection sphere. This component includes most of the ejecta mass, including the tidal tails and the shocked component.
* We rescale the ejecta from the previous step assuming homologous expansion, in the same way the radiative-transfer code does, i.e., assuming every fluid element moves with a constant velocity v^i = x^i/(t-t_ merger). This is equivalent to defining a scale factor α(t) = (t - t_ merger)/ (t_ cut - t_ merger) and rescaling coordinates and mass density as x^i →α(T) x^i, ρ→ρ / α^3(T), where T is the final time of the simulation. After this step, the radius of the inner cut (initially corresponding to the detection sphere) has moved outwards, leaving a gap between the ejecta and the detection sphere that we fill using data from the sphere itself.
* From the sphere we select data with t ∈ [t_ cut, T]. Assuming homologous expansion as for the 3D data, we can map the time into a radius via R(t) = r (T-t_ merger)/(t-t_ merger) = r α(T)/α(t), where r is the fixed coordinate radius of the detection sphere. At the same time, we rescale the mass density by ρ(t,θ,ϕ) →ρ(t,θ,ϕ) (α(t)/α(T))^3. After this procedure, we obtain ρ(R,θ,ϕ), which we interpolate onto the Cartesian grid where the ejecta from the 3D data is defined. This way, we fill the gap between the ejecta and the detection sphere left by the previous rescaling step (a schematic sketch of these rescaling relations is given after this list).
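The sketch below illustrates the homologous rescaling relations used in the last two steps; times, radii, and array shapes are placeholders, and the actual mapping script may differ.

```python
import numpy as np

def alpha_scale(t, t_merger, t_cut):
    """Scale factor alpha(t) = (t - t_merger) / (t_cut - t_merger)."""
    return (t - t_merger) / (t_cut - t_merger)

def rescale_snapshot(x, rho, t_cut, T, t_merger):
    """Expand the 3D snapshot taken at t_cut to the final time T:
    coordinates grow by alpha(T) and the density drops by alpha(T)^3."""
    a = alpha_scale(T, t_merger, t_cut)
    return x * a, rho / a**3

def sphere_to_radius(t, rho_sphere, r_det, T, t_merger):
    """Map detection-sphere data at fixed radius r_det and time t to the radius
    R(t) = r_det * (T - t_merger)/(t - t_merger) at the final time T, rescaling
    the density by (alpha(t)/alpha(T))^3."""
    R = r_det * (T - t_merger) / (t - t_merger)
    return R, rho_sphere * ((t - t_merger) / (T - t_merger))**3
```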
It is important to point out that the ejecta at the detection sphere is not fully homologous, and assuming a constant velocity with v^i = x^i/(t-t_ merger) might introduce biases. This is due to the different velocities of components ejected at different times, with shocked ejecta that is faster than the tidal tails although it is ejected later. Although deviations from homologous expansion are shown to be present even at 𝒪(100 ms) after the merger <cit.>, it has been shown that their influence on the light-curve computation is negligible, i.e., within the range of Monte Carlo noise, if the ejecta is extracted at t > 80 ms after the merger <cit.>. (In <cit.>, only the dynamical ejecta was included, and GRHD simulations were performed without the evolution of the electron fraction. The inclusion of other ejecta components or neutrino radiation probably leads to a delay in reaching the homologous phase.)
For this reason, we let the ejecta evolve as long as possible outside of the detection sphere before assuming homologous expansion and starting the procedure described above. To alleviate the issue further, an even longer evolution would be required to produce accurate light curves.
The resulting input data for the radiative transfer simulations are shown in Fig. <ref> and Fig. <ref>.
In Fig. <ref>, we present maps in the v_y-v_z plane of the matter density ρ, electron fraction Y_e, and temperature T used in and computed at 1 day after the merger for all four BNS systems using the M1 scheme. In addition, we show in Fig. <ref> the distribution of density and electron fraction in the v_x-v_y plane to show how the configurations deviate from axisymmetry.
|
http://arxiv.org/abs/2307.06277v1 | 20230712162008 | Stochastic Light Field Holography | ["Florian Schiffers", "Praneeth Chakravarthula", "Nathan Matsuda", "Grace Kuo", "Ethan Tseng", "Douglas Lanman", "Felix Heide", "Oliver Cossairt"] | cs.CV | ["cs.CV", "cs.GR", "eess.IV", "physics.optics"] |
The Visual Turing Test is the ultimate goal to evaluate the realism of holographic displays.
Previous studies have focused on addressing challenges such as limited étendue and image quality over a large focal volume, but they have not investigated the effect of pupil sampling on the viewing experience in full 3D holograms.
In this work, we tackle this problem with a novel hologram generation algorithm motivated by matching the projection operators of incoherent (Light Field) and coherent (Wigner Function) light transport.
To this end, we supervise hologram computation using synthesized photographs, which are rendered on-the-fly using Light Field refocusing from stochastically sampled pupil states during optimization.
The proposed method produces holograms with correct parallax and focus cues, which are important for passing the Visual Turing Test.
We validate that our approach compares favorably to state-of-the-art CGH algorithms that use Light Field and Focal Stack supervision.
Our experiments demonstrate that our algorithm improves the viewing experience when evaluated under a large variety of different pupil states.
Computational Display, Holography, Light Field, Wigner Distributions, Near-Eye Display, VR/AR
§ INTRODUCTION
IN 1972, the Cartier jewelry store on 5th Avenue in New York City displayed an analog hologram of a hand holding jewelry in its storefront window.
This hologram was so realistic that an elderly woman passing by attempted to attack the virtual arm floating in mid-air <cit.>.
Large-format analog holograms are known for their incredible realism and ability to give the impression of a complete 3D picture frozen in time.
For static objects, full-color holography has been shown to provide realism on par with the visual inspection of the actual object.
As such, holography is often seen as the most likely approach to pass the Visual Turing Test <cit.>.
However, the dynamic display of high-quality digital holographic 3D imagery has proven challenging as a result of the limited system étendue.
Etendue is determined by the Space-Bandwidth-Product (SBP) of the modulator, which is equal to the product of the maximum diffraction angle supported by the modulator and its modulation area.
Most existing holographic systems rely on phase-only spatial light modulators (SLM), where this angle is limited by the pixel size – today's fabrication techniques achieve 8μm pixel pitch for phase SLMs with no more than 4^∘ of diffraction while limiting the eyebox to the size of the device.
As a result, the available SBP mandates a trade-off between FOV and eyebox size that impacts the number of achievable angular views.
The SBP required for holographic near-eye displays with a large eye box (roughly 10 mm) and a large FoV (roughly 100 degrees) is still so high that pupil steering using eye tracking might be necessary unless étendue-expansion approaches <cit.> make significant progress.
Problem Statement.
Even with novel designs, achieving and utilizing a large enough SBP to fully realize the potential of holographic displays remains a challenge.
Specifically, a key question is how to compute holograms such that they provide an optimal image for any possible viewpoint.
This problem is amplified for large SBP systems with large eyebox size as the pupil can move significantly within.
Pupil movement, including changes in location, size, and defocus, can significantly affect image quality on holographic displays <cit.>.
Accurately accounting for pupil movement is, as such, a critical challenge in the development of photo-realistic holographic displays, as it can greatly affect the realism and immersion of the viewing experience (see Fig. <ref>). Only recently, researchers have investigated this issue for 2D holography <cit.> and showed only very preliminary 3D results using a two-plane representation with two objects. Hence, the computation of pupil-aware 3D holograms is an open problem that we address in this paper.
Proposed Solution.
In this work, we address the challenge of computing high-quality holograms that accurately reproduce vision cues during pupil movement.
To do this, we propose a new Stochastic Light Field Holography (SLFH) algorithm that generates holograms using a loss function motivated by matching the photographic projection operators of incoherent and coherent light.
Our approach ensures optimal performance for a random assortment of pupil states and, unlike previous work, produces 3D imagery while also avoiding overfitting to specific viewing conditions.
While previous methods use a loss function based on a set of static, prerendered images, we supervise our loss function by synthesizing novel views using Light Field refocusing <cit.> from arbitrary pupil states.
By sampling pupil states stochastically, we ensure, unlike existing work, that the optimized hologram provides correct parallax, defocus, and depth-of-field cues for all possible pupil states.
To enable a fair comparison with multi-plane approaches <cit.>, we introduce a novel Focal Stack supervision algorithm (LF2FS) that ensures parallax-consistent defocus, allowing for a direct comparison to our main contribution, SLFH.
Contributions. Our software codebase for stochastic pupil sampling will be made publicly available after publication of this manuscript. Our paper makes the following contributions:
* We are the first to study the image quality of 3D-CGH algorithms for all possible pupil states within the available eye-box volume.
* We are the first to introduce on-the-fly Light Field refocusing <cit.> into the holographic generation process.
We propose a new CGH-algorithm that reproduces parallax and accommodation for arbitrary pupil states by coupling the projection operators for ray-space (Light Field) and wave-optics (Wigner Distribution). We use this to implement both a novel Focal Stack supervision algorithm (LF2FS) and our proposed SLFH method.
* We validate the method in simulation and demonstrate that, while previous techniques overfit to specific pupil states used during training, our method produces the most favorable display for arbitrary viewing conditions (see Fig. <ref>).
* We assess the method with an experimental prototype and demonstrate higher quality parallax information with fewer artifacts than state-of-the-art Light Field supervision CGH optimization algorithms (see Figs. <ref>, <ref>, and Supplemental Material).
§ RELATED WORK
Holography for displays relies on diffraction and interference of light to generate images.
Based on the diffracted field, a hologram can be classified as a far-field Fourier hologram or a near-field Fresnel hologram.
Using phase-only SLMs requires computing phase-only holograms that are capable of producing the diffraction field that can closely mimic the target image. However, the underlying phase retrieval problem is generally ill-posed and non-convex.
Though introduced for Fourier phase retrieval, early methods such as error reduction using iterative optimization <cit.> and hybrid input-output (HIO) methods <cit.> are applicable for both Fourier and Fresnel holograms.
Researchers have also explored phase-retrieval methods using first-order nonlinear optimization <cit.>, alternative direction methods for phase retrieval <cit.>, nonconvex optimization <cit.>, and methods overcoming the nonconvex nature of the phase retrieval problem by lifting, i.e., relaxation, to a semidefinite <cit.> or linear program <cit.>. Several works <cit.> have explored optimization approaches using first-order gradient descent methods to solve for holograms with flexible loss functions.
Focal Stack and RGBD Algorithms.
Instead of computing the wave propagation for millions of points, a 3D object can be represented as a stack of intensity layers <cit.>.
Wave propagation methods such as the inverse Fresnel transform or angular spectrum propagation are typically used for propagating the waves from several layers of the 3D scene towards the SLM plane, where they are interfered with to produce a complex hologram <cit.>.
Although this approach can be implemented efficiently, it cannot support continuous focus cues and accurate occlusion due to discrete plane sampling.
Recent work <cit.> suggests that holograms based on smooth-phase profiles <cit.> cannot support accommodation, which is the ultimate promise of 3D holography.
An additional discussion on smooth vs random phase holograms is found in the supplementary.
A further approximation to the layer-based methods is to determine the focal depth of the user, i.e., the distance of the object on which the user fixates, via an eye tracker, and to adjust the focal plane of the 2D holographic projection to match the user's focal distance <cit.>.
While emulating a 3D scene by adaptively shifting a 2D holographic projection in space is computationally efficient, operating in a varifocal mode under-utilizes the capabilities of a holographic display.
Moreover, achieving natural focus cues and physically accurate occlusion effects still remains a challenge.
To compare against SOTA Focal Stack methods, we introduce a novel Focal Stack supervision algorithm (LF2FS) that generates Focal Stacks on the fly from Light Fields according to physically accurate pupil states.
Light Field Algorithms.
To support occlusion and depth-dependent effects, a Light Field can be encoded into a hologram partitioned spatially into elementary hologram patches, called “hogels” <cit.>, similar to elementary images in a Light Field.
These hogels produce local ray distributions that reconstruct multiple (Light Field) views <cit.>.
Such holograms which encode a Light Field are called “holographic stereograms”.
Conventional stereograms, where hogels are out of phase with each other, suffer from a lack of focus cues and limited depth of field <cit.>.
To keep the hogels of a holographic stereogram in phase throughout the hologram, researchers have introduced an additional phase factor to calculate what is called a phase-added stereogram (PAS) <cit.>.
However, akin to a microlens array-based Light Field display <cit.>, stereograms suffer from the fundamental spatio-angular resolution trade-off:
A larger hogel size leads to a decreased spatial resolution.
This fundamental limitation does not allow for high spatial resolution holographic stereogram projections.
Recent methods have attempted to overcome this trade-off <cit.> via Short-Time Fourier Transform (STFT) inversion.
However, these methods do not match the image quality achieved for 2D holograms <cit.> and suffer from artifacts around object discontinuities due to suboptimal inversion of the STFT.
More importantly, these algorithms do not incorporate pupil models for viewing the hologram, and therefore overfit to viewing conditions where the eyebox and pupil coincide.
In Sec. <ref>, we will further discuss the Wigner function and how it relates to holographic displays.
We want to stress that we are not the first to consider the Wigner-function for holographic displays <cit.>.
However, these methods use the Wigner-function to establish a closed-form solution similar to <cit.>, which is different from ours, which uses an iterative approach to recover the displayed phase-modulation.
Learning-Based Holography.
Neural networks and deep learning approaches have recently been proposed as tools for optical design and holographic phase retrieval.
Holographic microscopy has been tackled by solving phase retrieval problems using neural networks <cit.>.
In a similar fashion, neural networks have been investigated for learning holographic wave propagation from a large training dataset.
For example, Horisaki et al. <cit.> trained a U-net on a pair of SLM phase and intensity patterns, and predicted SLM phase patterns during inference.
Recently, Eybposh et al. <cit.> proposed an unsupervised training strategy and predicted the SLM phase patterns in real time that produced 2D and 3D holographic projections.
Peng et al. <cit.> and Chakravarthula et al. <cit.> have recently demonstrated camera-in-the-loop (CITL) calibration of hardware using neural networks and high-fidelity holographic images on prototype displays.
Shi et al. <cit.> have demonstrated high-resolution real-time holography with a light weight neural phase-retrieval network that may be suitable for inference on mobile hardware in the future.
Pupil and Parallax-aware Holography.
Very recently, researchers have investigated the effects of pupil sampling on holographic setups.
The work of Chakravarthula et al. <cit.> is the most closely related to the investigation in this paper.
Chakravarthula et al. propose a new cost function which enforces a uniform energy distribution over the eye-box by stochastically sampling different pupil positions. However, their approach is limited to 2D imagery and very preliminary multi-plane images, and, as a result, it does not account for parallax and focus-cue effects.
Methods that operate on RGB-D scenes <cit.> are capable of finding correct accommodation cues by using multi-plane or Focal Stack representations, however, are not able to account for correct parallax cues when the pupil is shifting. Motivated by this, recent work has proposed holographic generation algorithms using Light Field supervision <cit.>, making use of the Short Time Fourier Transform (STFT) and allowing a bidirectional transform between holograms and Light Fields as proposed by <cit.>. All of these existing methods have in common that they do not allow for pupil sampling and accurate parallax cues at the same time.
§ PHOTOGRAPHIC PROJECTION OPERATORS
In this section, we establish a connection between the coherent (wave-optics) and incoherent (ray-space) projection operators.
We first introduce Light Fields, their coherent equivalent using the Wigner-formalism, and their resemblance to STFT approaches used in current literature.
§.§ Eye Pupil Model
The primary contribution of our work is to introduce a simplified eye/pupil model into the hologram forward model for the optimization of photorealistic 3D imagery over a variety of viewing conditions. We approximate a near-eye holographic display system as a 4f relay where an eyepiece images an SLM illuminated by monochromatic light through a pupil, followed by an eye lens which images onto a detector (see Fig. <ref>). We represent 2D coordinates in the detector plane as 𝐫=(x,y), and in the pupil plane as 𝐪=(u,v). The eye is assumed to be focused at infinity in the rest state so that converting from pupil to angular coordinates is given by the relation 𝐪 = tan(θ)/f, where f is the focal length of the eye model or objective lens (f_2 in Fig. <ref>). For simplicity, we parameterize the pupil state p with a set of 4 parameters, that is
p = {𝐬, z, d },
corresponding to focus distance (z), 2D-pupil shift 𝐬=(x,y), and aperture diameter (d). We define the pupil aperture function as a circular function that focuses at infinity in the rest state
A(𝐪;p) =
1, if |𝐪-𝐬| < d
0, otherwise.
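For concreteness, a pupil state and its binary aperture could be represented as in the following sketch; the naming and the discretization of the pupil plane are illustrative assumptions, and the sketch interprets d as a diameter (using d/2 as the radius), whereas the equation above compares |𝐪-𝐬| against d directly.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class PupilState:
    shift: tuple   # (s_x, s_y): pupil-plane offset
    z: float       # refocus distance
    d: float       # aperture diameter

def circular_aperture(qx, qy, p: PupilState):
    """Binary aperture A(q; p) evaluated on a pupil-plane grid (qx, qy)."""
    r = np.hypot(qx - p.shift[0], qy - p.shift[1])
    return (r < p.d / 2).astype(np.float32)

# Example: 10 mm pupil shifted by 2 mm on a 20 mm x 20 mm pupil plane
q = np.linspace(-10e-3, 10e-3, 512)
QX, QY = np.meshgrid(q, q)
A = circular_aperture(QX, QY, PupilState(shift=(2e-3, 0.0), z=0.0, d=10e-3))
```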
§.§ Light Fields
Light Fields represent the flow of radiance in a scene in every direction and every point in space.
This extra information can be used to generate synthetic 2D images from different viewpoints and simulate optical effects like defocus and lens aberrations.
The photography operator is used in digital refocusing <cit.> to render images at different depths from the Light Field under a virtual aperture function.
Specifically, the operator 𝐏_𝐥𝐟 describes the transformation of a 4D Light Field L(𝐫,𝐪) into a synthetic photograph focused through an aperture function A with a depth defined by α. We parameterize the Light Field such that 𝐫 and 𝐪 represent the spatial coordinates of the Light Field on detector and pupil planes, respectively.
The photographic projection operator can then be expressed as the following projection from 4D to 2D
𝐏_lf[L; p](𝐫) = ∫ A(𝐪; p) · L(𝐪 - (𝐪-𝐫)/α, 𝐪) d𝐪,
Equation <ref> can be thought of as shearing the 4D space, multiplication with the aperture function and a subsequent projection down to 2D.
Note that the effects of changing focus distance z are incorporated into the defocus parameter α = f/(z+f).
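A discrete shift-and-add version of this photography operator (ignoring the global magnification of the image coordinates) can be sketched as follows; the light-field indexing convention lf[u, v, y, x] is an assumption.

```python
import numpy as np

def refocus(lf, alpha, aperture):
    """Render a synthetic photograph from a 4D light field lf[u, v, y, x]:
    each sub-aperture view is shifted proportionally to (1 - 1/alpha) times
    its pupil offset, masked by the sub-aperture mask, and accumulated."""
    U, V, H, W = lf.shape
    us = np.arange(U) - (U - 1) / 2.0
    vs = np.arange(V) - (V - 1) / 2.0
    out = np.zeros((H, W))
    n = 0
    for iu, u in enumerate(us):
        for iv, v in enumerate(vs):
            if aperture[iu, iv] == 0:
                continue
            dy = int(round((1.0 - 1.0 / alpha) * u))
            dx = int(round((1.0 - 1.0 / alpha) * v))
            out += np.roll(lf[iu, iv], shift=(dy, dx), axis=(0, 1))
            n += 1
    return out / max(n, 1)
```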
§.§ Wigner Function
The projection operator for Light Fields in Eq. <ref> works only for incoherent light as it cannot account for interference effects.
To overcome these limitations, <cit.> used the Wigner Distribution-Function (WDF) to establish the connection to Light Fields.
While it is computationally intractable to use Wigner functions efficiently for large holograms, it provides a powerful tool for understanding optical systems and their limitations.
To illustrate this, let us consider a monochromatic optical field u( 𝐫 ).
The correlation between two points on the field 𝐫 and 𝐫' is given by the mutual intensity function, which operates on a 2D field and then produces a 4D quantity as
J[u(𝐫)]( 𝐫, 𝐫')=⟨ u( 𝐫+𝐫'/2) u^*(𝐫-𝐫'/2)⟩,
where ⟨·⟩ denotes a time-average.
The WDF is then defined as the Fourier transform of the mutual intensity along 𝐫'
W[u(𝐫)]( 𝐫, 𝐪') =∫J( 𝐫, 𝐫') exp(-2 π i 𝐫'·𝐪') d𝐫' .
The spatial frequency coordinates 𝐪' of the WDF can be converted to spatial pupil coordinates 𝐪 via the relation
W(𝐫,𝐪) = W(𝐫,λ f 𝐪') .
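For intuition, a direct (and memory-hungry) discrete evaluation of the 1D Wigner distribution could be written as below; it uses the common approximation of sampling the correlation offset on integer lattice points and is meant only as an illustration of Eqs. above.

```python
import numpy as np

def wigner_1d(u):
    """Discrete Wigner distribution W(x, q') of a 1D complex field u(x).
    Rows index position x, columns index spatial frequency q'."""
    N = len(u)
    W = np.zeros((N, N))
    for ix in range(N):
        corr = np.zeros(N, dtype=complex)
        for k in range(-(N // 2), N // 2):
            ip, im = ix + k, ix - k
            if 0 <= ip < N and 0 <= im < N:
                corr[k % N] = u[ip] * np.conj(u[im])
        W[ix] = np.real(np.fft.fftshift(np.fft.fft(corr)))
    return W

# Example: Wigner distribution of a chirped field (quadratic phase)
x = np.linspace(-1, 1, 128)
W = wigner_1d(np.exp(1j * 40 * x**2))
```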
Wigner Projection Operator.
We next consider the WDF W(𝐫,𝐪) defined at the plane of a phase-only SLM with displayed phase pattern ϕ(𝐫), producing a diffracted field u(𝐫) = exp(jϕ(𝐫)).
Defining the WDF of the aperture function A(𝐪 ; p) to be W_A(𝐫,𝐪), the WDF that is imaged onto the retina of the eye (assuming unit magnification for simplicity) is
W_o[u(𝐫)](𝐫, 𝐪)=∫ W_A(𝐫, 𝐪_i ) W[u(𝐫)](𝐫, 𝐪-𝐪_i) d𝐪_i
The effect of a focus shift z in Wigner-space can be represented as a shear <cit.> given by
W_z(𝐫, 𝐪) = W_o ( 𝐫 - λ z 𝐪' , 𝐪) = W_o ( 𝐫 - z/f𝐪 , 𝐪) ,
which results in the Wigner projection operator
𝐏_WF[u(𝐫); p](𝐫) = ∬ W_A(𝐫, 𝐪-𝐪_𝐢) · W[u(𝐫)](𝐪 - (𝐪-𝐫)/α, 𝐪) d𝐪 d𝐪_𝐢 .
Relationship between Wigner and Light Field Projection.
Comparing Eq. <ref> with Eq. <ref>, we observe that the projection geometry is identical while the WDF projection allows us to incorporate interference effects from the eye pupil that naturally arise when viewing a holographic display illuminated with coherent light. Closer inspection reveals that the Wigner projection operator is a generalization of Light Field projection. When the retina detector pixel size Δ is much larger than the diffraction airy disc size Δ >> λ· f/d, then the effect of diffraction from the pupil aperture on the perceived image is negligible.
In this case, the WDF for the aperture is approximately independent of spatial coordinates 𝐫 so that W_A(𝐫, 𝐪) ≈ A(𝐪;p) · W(𝐫, 𝐪).
Hence, the projection operators for Eq. <ref> and Eq. <ref> are identical within a scale factor.
Wigner Projection with Wave Optics.
The similarity between photographic projection operators for Light Fields and WDFs provides an intuition for how to optimize holograms that produce photorealistic imagery over a variety of viewing conditions.
However, while the WDF is an elegant way to describe coherent light transport, it is impractical to compute on large fields <cit.>.
Instead, we implement the projection operator of Eq. <ref> using wavefront propagators via stochastic pupil-sampling as described in Sec <ref>.
§.§ Bidirectional Hologram Light Field Transform
As discussed in Sec. <ref>, a successful branch of LF-CGH algorithms employs STFT-inversion to move between LFs and hologram-domains <cit.>.
The Wigner projections (Sec. <ref>) have striking similarities with the STFT, as we will show in the following.
The STFT of a signal u(𝐫) is defined as the convolution with a window function w(𝐫) such that
STFT [u( 𝐫 )](𝐫 , 𝐪) =
∫ u(𝐫^') w( 𝐫^' - 𝐫) e^-j 2 π𝐪·𝐫^' d 𝐫^'
= ℱ^-1{ U(𝐪') · W(𝐪' - 𝐪) } ,
where U(𝐪') and W(𝐪') are Fourier transforms of the field u(𝐫) and window function w(𝐫), respectively. The relationship between the STFT of the field u(𝐫) and the target lightfield is then
L(𝐫, 𝐪) =| STFT[ u(𝐫) ]( 𝐫, 𝐪)|^2 .
Together, Eqns. <ref> and <ref> express the relationship between STFT-based CGH algorithms and the Wigner projection operator implemented using wave optics as detailed in Sec. <ref>. Implicitly, the STFT computes an image of the hologram as viewed with a pupil function W(𝐪'), which is the Fourier Transform of the STFT window function w(𝐫). Typically a Hanning window or similar function is used to suppress ringing, and a grid of pupil positions is computed by applying a 2D FFT to the windowed field u(𝐫^') w( 𝐫^' - 𝐫).
The method we propose in this work can be thought of as a generalization of STFT-based methods, with a few important differences. First, while the STFT method uses fixed pupils for hologram generation that provide photo-consistency with lightfield views, our method ensures that arbitrary pupil functions applied to the holographic display are consistent with Light Field projection using the same pupil. Second, our method relies on the relationship between spatial frequency coordinates and angular frequency coordinates of the field, which is necessary to produce accurate color reconstructions with multiple wavelengths. Lastly, the proposed method decouples the number of pupil positions from the forward model so that less memory can be used to compute larger holograms.
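The following 1D sketch illustrates the relation above: the squared STFT magnitude of a (random-phase) SLM field yields a space-angle representation analogous to a Light Field. The window length and hop size are arbitrary choices made for illustration.

```python
import numpy as np

def stft_lightfield_1d(u, window, hop=4):
    """Return |STFT(u)|^2 as a (position, angle) array for a 1D field u."""
    N, M = len(u), len(window)
    upad = np.pad(u, (M // 2, M // 2))
    rows = []
    for x in range(0, N, hop):
        seg = upad[x:x + M] * window
        rows.append(np.abs(np.fft.fftshift(np.fft.fft(seg)))**2)
    return np.array(rows)

# Example: random-phase hologram field observed through a Hann window
u = np.exp(1j * 2 * np.pi * np.random.rand(1024))
L = stft_lightfield_1d(u, np.hanning(64))
```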
§.§ Eyebox, Pupils and Speckle
This paper focuses on the problem of developing optimization algorithms for holograms that produce photorealistic 3D imagery over a variety of random pupil states. A necessary requirement for these algorithms is that they produce an eyebox with a relatively smooth distribution of intensity so that there is not a drastic change in perceived brightness as the pupil state changes during hologram viewing. In order for a hologram to produce a uniform eyebox, it must also produce a diffracted field with a relatively even distribution of angular frequencies, which means that the CGH will typically have highly randomized phase. This has implications on the type of CGH algorithm used, as well as the quality of imagery displayed since random phase holograms naturally introduce speckle.
Many recent studies on holography do not address the issue of eyebox uniformity, and it is often assumed that the eye box is solely determined by the system etendue, which defines the maximum diffraction angle given by the SLM-pixel pitch.
However, for most near-field hologram algorithms that have recently reported great image quality <cit.>, the light distribution in the eye-box is heavily concentrated around the DC term.
This concentration of spectral energy leads to a significant reduction in the "effective eye-box size".
This problem arises naturally in many existing holographic generation algorithms.
Double Phase (DPAC) encoding provides a direct method for encoding a complex field onto a phase only SLM, but only works well with smooth phase holograms that contain a Fourier spectrum highly concentrated around the optical axis (DC angular frequency).
Likewise, similar characteristics are observed when using holographic generation based on gradient-descent style algorithms that are initialized with smooth phase (see Fig. <ref>).
Speckles are interference effects that occur when a diffracted field is highly random compared to its effective aperture.
The speckle size on the retina of a pupil sampled hologram is inversely proportional to the pupil diameter d <cit.>.
In this work, we focus on CGHs which are viewed with pupil diameters much smaller than the eyebox (d ∈ [w_eyebox/12, w_eyebox]), which can introduce significant speckle noise in simulated and captured imagery relative to sampling the full eyebox.
To improve image quality and reduce the effect of speckle we introduce incoherent averaging in the form of temporal averaging of 8 frames and spatial averaging of 2 × 2 blocks of pixels.
For fair comparisons, we apply the same incoherent averaging to all algorithms used in this paper.
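The incoherent averaging step can be sketched as follows; the gamma-distributed toy frames merely emulate fully developed speckle statistics and are not part of the actual pipeline.

```python
import numpy as np

def incoherent_average(frames, block=2):
    """Average intensity frames over time, then bin block x block pixels."""
    avg = np.mean(frames, axis=0)                    # temporal (incoherent) average
    H, W = avg.shape
    H, W = H - H % block, W - W % block
    return avg[:H, :W].reshape(H // block, block, W // block, block).mean(axis=(1, 3))

# 8 toy speckle realizations of a uniform scene, averaged 8x temporally and 2x2 spatially
frames = np.random.gamma(shape=1.0, scale=1.0, size=(8, 256, 256))
smoothed = incoherent_average(frames, block=2)
```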
§ STOCHASTIC LIGHT FIELD HOLOGRAPHY
Building on the projection operators from the previous section, we now introduce the proposed method. We first introduce the forward model that describes the mapping from a displayed phase-modulation ϕ on the SLM to the complex wavefront u, assuming that the image is captured under an arbitrary pupil state. Then, we will introduce the proposed novel CGH-algorithm using Light Field supervision (see Fig. <ref>).
§.§ Enforcing Photo-Consistency
The proposed method enforces photo-consistency between the image formation of a known Light Field L(𝐫,𝐪) and an unknown Wigner distribution W[u(𝐫)](𝐫,𝐪), where u(𝐫) = exp(jϕ(𝐫)) is the complex field formed by the holographic display with phase-only SLM pattern ϕ(𝐫).
We implement projection operators for both Light Field P_lf[·] (see Eq. <ref>) and the Wigner Distribution P_wd[ ·] (see Eq. <ref>) such that they match the geometry of our holographic display.
The projections of the known Light Field produce a set of target photographs given the viewing parameters
T_i( 𝐫 ) = P_lf[L, p_i] .
We then generate the corresponding Wigner projections corresponding to observations of the diffracted optical field with the same viewing parameters
T̂_i(𝐫, ϕ(𝐫)) = P_wd[exp(jϕ(𝐫)), p_i] .
The optimization objective is then to find the SLM pattern ϕ(𝐫) that solves
ϕ^*(𝐫) = argmin_ϕ ∑_i ℒ( T_i(𝐫), T̂_i(𝐫, ϕ(𝐫)) )
given p_i = { x_i ∼ 𝒰[-r_max, r_max], y_i ∼ 𝒰[-r_max, r_max], z_i ∼ 𝒰[z_min, z_max], d_i ∼ 𝒰[r_min, r_max] },
where 𝒰 is the uniform distribution and ℒ is an arbitrary loss-function (L_2 in our case).
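In practice, one random pupil state per optimization step can be drawn as in the sketch below; the defocus and aperture ranges match the values quoted later in the Results section, while the pupil-shift bound r_max is an assumption.

```python
import numpy as np

def sample_pupil_state(r_max, z_min, z_max, d_min, d_max, rng=None):
    """Draw one pupil state p = (x, y, z, d) from the uniform ranges above."""
    rng = rng or np.random.default_rng()
    return {
        "x": rng.uniform(-r_max, r_max),
        "y": rng.uniform(-r_max, r_max),
        "z": rng.uniform(z_min, z_max),
        "d": rng.uniform(d_min, d_max),
    }

# Defocus range 0-15 mm and aperture range 2-20 mm, as used in our experiments
p = sample_pupil_state(r_max=10e-3, z_min=0.0, z_max=15e-3, d_min=2e-3, d_max=20e-3)
```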
§.§ Implementation of Projection Operators
Wigner-Projection using Wave Optics.
While the Wigner projection operator of Eq. <ref> is useful to build an intuition for how to optimize a hologram that is radiometrically consistent with a target lightfield, computing the full WDF is impractical because the WDF is a 4D function which results in quartic memory growth. Instead, we use a Fourier optics forward model to stochastically sample pupil positions, resulting in the simplified 2D projection operator that does not require storing a 4D WDF, that is
P_wd[u(𝐫), p](𝐫) = ∫ U(𝐪) K(𝐪, p) e^-j 2 π𝐪·𝐫 d𝐪.
Here, U is the frequency representation of the phase-only SLM-modulation
U( 𝐪)= ℱ{u(𝐫)}
and the aperture function K incorporates the effects of the pupil parameters p through the the aperture function A, and the propagation kernel ℋ as
K(𝐪, p) = A( (𝐪 - 𝐬)/(λ f), d/(λ f) ) ℋ(𝐪; z).
The angular spectrum propagation kernel ℋ defined in the frequency domain is given by
ℋ(𝐪; z) = e^i (2π/λ) √(1 - ||λ𝐪||^2) z, if ||𝐪|| < 1/λ,
0 otherwise.
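A simplified numerical version of this projection (single wavelength, unit-magnification 4f geometry, with frequency coordinates handled implicitly by the FFT) is sketched below; the exact coordinate scaling of the relay and the pupil-shift bookkeeping are omitted and the parameter values are placeholders.

```python
import numpy as np

def pupil_projection(phi, pupil_mask, z, wavelength, pitch):
    """Render the perceived intensity for one pupil state: Fourier-transform the
    phase-only SLM field, apply the (possibly shifted) pupil aperture and the
    angular-spectrum defocus kernel H(q; z), and inverse-transform."""
    u = np.exp(1j * phi)                                  # phase-only SLM field
    U = np.fft.fftshift(np.fft.fft2(u))                   # centred angular spectrum
    ny, nx = phi.shape
    fx = np.fft.fftshift(np.fft.fftfreq(nx, d=pitch))     # spatial frequencies [1/m]
    fy = np.fft.fftshift(np.fft.fftfreq(ny, d=pitch))
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0)) * z)
    H = np.where(arg > 0, H, 0.0)                         # drop evanescent waves
    out = np.fft.ifft2(np.fft.ifftshift(U * pupil_mask * H))
    return np.abs(out) ** 2

# Toy usage: random-phase hologram seen through a centred pupil with 1 mm defocus
phi = 2 * np.pi * np.random.rand(512, 512)
yy, xx = np.mgrid[-256:256, -256:256]
mask = (np.hypot(xx, yy) < 128).astype(float)             # pupil covering part of the eyebox
img = pupil_projection(phi, mask, z=1e-3, wavelength=520e-9, pitch=8e-6)
```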
Relationship to Focal Stack and STFT Supervision.
The propagation kernel expressed by Eqn. <ref> is a generalization of both Focal Stack <cit.> and STFT <cit.> supervision applied to hologram optimization. For focal stack supervision, the pupil parameters p=(𝐬,d,z) are fixed so that pupil shift 𝐬=0, the aperture diameter d is equal to the eyebox size, and the depth is discretized into N-layers over z ∈ [z_min,z_max]. As discussed in Sec. <ref>, STFT supervision closely resembles Eq. <ref>, but the kernel K(𝐪,p) is equal to the Fourier Transform of the STFT window W(𝐪), and pupil shift positions are computed over a grid of positions 𝐬∈ [d_min,d_max] that is determined by the window size and FFT algorithm. However, this analysis reveals that a correct implementation of STFT supervision would require the window W(𝐪) to also be used for generating projections of the Light Field following Eq. <ref>. As a result, Eq. <ref> is not strictly valid unless the window function is a jinc function, the 2D polar analog of the sinc function, whose width depends on the wavelength:
w(𝐫) = (d/(λ f)) jinc(d 𝐫/(λ f)),
so that its Fourier transform is a circular aperture:
W(𝐪) = A(𝐪, d/(λ f)) .
Focal stack and STFT supervision are, therefore, specific cases of supervising with random pupil positions, and, as a result, they overfit to specific pupil conditions, which we show does not generalize well to arbitrary viewing conditions, see in Figs. <ref> and <ref>.
Furthermore, because Focal Stack supervision is a special case of supervising with random pupil states, we introduce a new LF2FS algorithm that computes Focal Stacks on the fly for supervision by varying only the defocus parameter of pupil states and holding the rest constant.
§.§ Implementation
Experimental Setup.
We employ a conventional near-eye holography setup as used in <cit.>.
The only addition is a 2D translation stage and a motorized iris placed in the Fourier plane of the 4f system to measure parallax and depth-of-field effects.
For exact setup details, we refer to the supplementary.
Implementation details.
Optimization is implemented with ADAM <cit.> using automatic differentiation following Wirtinger Holography <cit.>; the standard L_2 loss is used in all experiments.
Further implementation details on our and the comparison method STFT <cit.> are found in the supplementary.
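A condensed sketch of the optimization loop is shown below (in PyTorch); target_fn, project_fn, and sample_pupil are placeholders for the Light Field refocusing operator, the wave-optics pupil projection, and the pupil sampler described above, and the step count and learning rate are illustrative, not the exact settings of our implementation.

```python
import torch

def optimize_slm_phase(target_fn, project_fn, sample_pupil, shape,
                       steps=2000, lr=0.05):
    """Stochastic Light Field Holography loop: per step, draw a random pupil
    state, render the light-field target and the simulated holographic view
    for that state, and apply an L2 gradient step on the SLM phase."""
    phi = torch.rand(shape, requires_grad=True)          # initial random phase
    opt = torch.optim.Adam([phi], lr=lr)
    for _ in range(steps):
        p = sample_pupil()                               # random pupil state
        target = target_fn(p)                            # refocused LF photograph
        field = torch.exp(1j * 2 * torch.pi * phi)       # phase-only SLM field
        rendered = project_fn(field, p)                  # simulated perceived image
        loss = torch.mean((rendered - target) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phi.detach()
```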
§ RESULTS
The proposed method aims to find an optimal hologram that is optimized for the best average image quality over a four-dimensional pupil state-space, which includes diameter, location, and focus.
For all the simulations and experimental results in the paper, the pupil parameters are chosen such that the defocus range is z_min=0 mm, z_max=15 mm and the aperture range is d_min=2 mm, d_max=20 mm.
This is slightly smaller than the smallest eye-box size (22mm) that is produced by the blue channel (440nm) under the 400mm focal length that was employed in the setup with an 8um SLM pitch.
Comparison to Smooth-Phase Holograms.
Our paper focuses on algorithms that produce random-phase holograms such that the maximal possible eyebox is filled.
Such holograms inherently exhibit speckle and are also harder to control due to physical effects such as field fringing <cit.>.
We expect future research to investigate how to combine the techniques of <cit.> with random-phase holograms.
Recent holographic algorithms <cit.> show best-in-class image quality but rely heavily on smooth phase to achieve speckle reduction, which leads to a localized eye-box.
As these holograms are unlikely to achieve real 3D holography <cit.>, we do not compare against them in our main manuscript.
However, we have added further justification and discussion of this in the supplementary materials, including a simulation using our implementation of the method from <cit.> that demonstrates how significant pupil variations are produced by methods that optimize for smooth-phase holograms.
Quantitative Validation (Simulation).
To obtain quantitative results, we optimize our method on 85 different Light Fields from the DeepFocus <cit.> dataset.
This dataset includes a variety of scenes with randomly sampled objects and textures.
Figure <ref> presents a comparison of our method with Focal Stack and STFT-supervision <cit.>.
Our LF2FS Focal Stack supervision method is similar to <cit.>; however, there is one significant difference:
We compute our focal stack directly from the Light Field to ensure physically correct defocus cues.
Layered approaches with inaccurate synthetic defocus create a mismatch between projection operators, which introduces incorrect depth cues.
Our findings validate that the proposed algorithm fares favorably for random pupil states other than those to which STFT supervision and LF2FS are overfitted. Our SLFH method produces the best worst-case performance over all pupil states, as well as the best average-case performance for random pupil states.
Furthermore, the variance is significantly reduced for our approaches.
This verifies that our algorithm works better for a large range of viewing conditions.
Experimental Results (Deep Focus).
Experimental results from our prototype experimental holography setup are shown for the Deep Focus dataset in Fig <ref>.
The images are captured with a pupil diameter of 8 mm at focus positions of 0 mm (back focus) and 12 mm (front focus).
In addition, we extracted epipolar images by capturing a 1D trajectory from left to right, again using an 8 mm pupil, for both front and back focus.
Additionally, we encourage the reader to view the videos found in the Supplementary Material, as the differences between the results are best visible in animation.
Our SLFH method produces fewer color artifacts than either STFT or Focal Stack supervision.
Experimental and Simulation Results (Robot Scene).
We further evaluate our method on a blender scene in both simulation and experiments.
For experimental results, we evaluated the hologram with a focal stack trajectory as shown in Fig. <ref>.
For this, we use five centered focus positions with a large aperture (80% of eyebox) and sample the volume in equidistant steps from 0mm to 12mm.
We further evaluate the hologram in simulation in Fig. <ref> for five randomly chosen pupil positions.
The results validate that our stochastic pupil sampling algorithm can optimize holograms with correct parallax information for large variety of pupil states.
Artifacts at the edge of the eye-box.
In the supplementary video, we show the parallax that is created by one hologram when we sample the eyebox from left to right.
There, one can see that STFT performs particularly poorly at the edges of the eyebox.
This is because, in order to make small propagation distances work, we had to introduce an additional bandpass filter for STFT/LF2FS, which allows some energy to diffract into no-care areas.
Without this adaptation, STFT cannot form meaningful images, as shown in Fig. <ref>.
Our proposed SLFH algorithm does not suffer from this, and the complete eyebox can be used during optimization.
Large SBP (16k x 16k) Holograms in Far-Field Configuration.
Current state-of-the-art displays are constrained by the etendue, which is determined by the product of the number of pixels and their pixel-pitch, also known as the Space Bandwidth Product (SBP).
As a result, there is a trade-off between parallax over a larger eye box and resolution/defocus.
This is a practical limitation for our prototype constrained by current technology, but it is not a fundamental limitation to our proposed algorithm.
With advances in technology, we can expect improvements in display hardware as well as in system design <cit.>.
As the system étendue increases, the optimization requires large matrix multiplications (FFTs), which can quickly exhaust available GPU memory.
This problem can be mitigated by optimizing for Far-Field Holograms where the angular spectrum corresponds to the field created by the SLM.
With a small change in the forward model and the same loss-function, we can directly supervise a large SBP hologram by stochastically sampling pupils over the Fourier-domain.
Our pupil-aware Far-Field approach is an extension of <cit.>, who discuss iterative algorithms in the Far-Field configuration with either a multi-layer or a light-field loss similar to STFT <cit.> with a fixed pupil grid.
These works limit their discussion to a fixed set of pupils, while we outline how to optimize Far-Field holograms with photometric consistency over the whole pupil space.
We show an example of such a simulation for a 16k x 16k hologram in Fig. <ref>.
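To make the far-field forward model concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes a square SLM phase pattern, models the angular spectrum as the Fourier transform of the SLM field, and selects a random circular sub-aperture (pupil) of that spectrum. All function names and the pupil parameterization are illustrative assumptions.

import numpy as np

def random_pupil_mask(n, min_radius=0.05, max_radius=0.3):
    # Illustrative circular pupil with a random center and radius in
    # normalized Fourier-plane coordinates.
    cx, cy = np.random.uniform(-0.4, 0.4, size=2)
    r = np.random.uniform(min_radius, max_radius)
    f = np.linspace(-0.5, 0.5, n)
    fx, fy = np.meshgrid(f, f)
    return ((fx - cx) ** 2 + (fy - cy) ** 2 <= r ** 2).astype(float)

def far_field_view(slm_phase, pupil_mask):
    # Far-field configuration: the angular spectrum is the Fourier transform of
    # the SLM field; the pupil selects a sub-aperture, and the perceived image
    # is the intensity of the filtered field.
    field = np.exp(1j * slm_phase)
    spectrum = np.fft.fftshift(np.fft.fft2(field, norm="ortho"))
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * pupil_mask), norm="ortho")
    return np.abs(filtered) ** 2

# One stochastic step would compare far_field_view(slm_phase, random_pupil_mask(n))
# against the corresponding light-field-rendered target view and use the
# photometric error (via automatic differentiation in practice) to update slm_phase.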
§ DISCUSSION
We propose investigating the issue of CGH-optimization from a machine-learning perspective, specifically with regard to overfitting.
Given that the capacity of a hologram is constrained by the available SBP, there is no single, optimal solution.
In the context of our proposed approach of stochastic light-field holography optimization, the objective is to identify a hologram that yields the highest average image quality across the entire possible space of pupil states.
Our findings indicate that, while overall correct vision cues are maintained across the entire state-space, performance is inferior when evaluated specifically at the pupil states for which other algorithms were optimized.
For instance, a focal-stack loss approach may achieve near-perfect reconstruction at the depths for which it was optimized, but fails to accurately reproduce parallax and depth-of-field effects.
Similarly, the STFT method demonstrates superior performance for pupil positions around its stand-off distance, but experiences a significant decline in image quality when evaluated at larger propagation distances.
Relation to Ptychography.
We would like to note that our method bears a striking resemblance to a different technique known from Scientific Imaging called X-Ray Ptychography <cit.>.
There, the forward model and optimization are quite similar as a complex wavefront is reconstructed from intensity-only projections sampled at different, overlapping pupil positions.
One can think of X-Ray Ptychography as a dual problem in imaging compared to the holographic display problem.
Furthermore, X-Ray Ptychography often deals with very high resolutions, which requires highly parallelized, distributed optimization frameworks <cit.> that significantly accelerate computation.
Such an increase in performance could potentially enable the development of human-sized holographic displays.
§ FUTURE WORK
Light-Field Representation.
In this study, we employed a traditional representation of a Light Field and applied the shift-and-add algorithm to synthesize new views.
However, the accuracy of the Light Field representation is constrained by its angular and spatial resolution.
Due to limitations in terms of memory, we encountered challenges such as aliasing during refocusing (details in Supplementary).
There are approaches in recent Graphics research to overcome these limitations <cit.>.
Among those are removing angular aliasing <cit.>, directly synthesizing refocused images using neural networks <cit.>, super-resolving the angular resolution using prior information <cit.>, or fast neural radiance fields <cit.> storing a highly compressed light field.
Speckle Reduction.
When compared to our simulation, the image quality of our experimental results is significantly lower.
This is due to unmodeled physical effects such as cross-talk in the LCOS-SLMs.
Although camera-in-the-loop methods have been proposed, physically accurate calibration with full-random phase holograms is a novel topic <cit.> and further research is needed.
Notably, speckle remains a fundamental challenge for holographic Light Field displays.
We use time averaging to reduce speckle artifacts, and implementing our algorithms at video rates with new high-speed SLMs is a promising direction for future work.
Likewise, exploring other methods of speckle reduction <cit.>, such as incoherent averaging using partially coherent illumination <cit.> or subjective speckle optimization <cit.>, is also promising.
Temporal Multiplexing.
Recent papers propose to use temporal multiplexing to either perform simultaneous color reproduction <cit.> or reduce the overall required frame rate <cit.> using joint optimization techniques.
Future work could investigate how such approaches can be applied to our SLFH algorithm to further reduce the number of temporal frames required to suppress speckle noise.
Etendue Expanders.
Etendue for holographic displays remains impractically small due to the limited pixel count of commercially available SLMs. A further promising direction for future work is to explore our stochastic pupil-sampling CGH algorithms together with etendue expanders <cit.> that can help meet AR/VR requirements for combined eyebox size and FoV <cit.>.
§ CONCLUSION
We introduce a new optimization method for generating light-field holograms using stochastic pupil-sampling within a Gradient-Descent type optimization.
Simulation results demonstrating the effectiveness of the proposed method in near-eye display and large etendue setups in far-field configurations were presented.
Experimental results validate that our algorithm is able to provide accurate depth cues such as parallax and defocus under a large variety of different pupil states. All our findings confirm that the proposed method produces the best worst-case performance for all pupil states, as well as the best average-case performance for random pupil states – corresponding to average viewing conditions.
Florian Schiffers is a Ph.D. candidate at Northwestern University and has undertaken the research of this paper during a research internship at Reality Labs Research, Meta.
He received dual M.Sc. degrees, in Physics and Advanced Optical Technologies, from FAU Erlangen in Germany in 2017. Prior to his ongoing doctoral study, Schiffers held a research assistant role at Siemens Healthineers.
Praneeth Chakravarthula
is a research scholar at Princeton University in the Princeton Computational Imaging Lab.
He obtained his Ph.D. from UNC Chapel Hill on novel near-eye displays for virtual and augmented reality.
His research interests lie at the intersection of optics, perception, and graphics.
Prior to joining UNC, he obtained his B.Tech and M.Tech degrees in Electrical Engineering from IIT Madras.
Nathan Matsuda
is a research scientist at Meta developing computational camera and display systems that support immersive visual experiences in VR.
Grace Kuo
is a research scientist at Reality Labs Research, Meta working on the joint design of optical hardware and algorithms for imaging and display systems. She earned her PhD from the Department of Electrical Engineering and Computer Sciences at UC Berkeley, advised by Dr. Laura Waller and Dr. Ren Ng.
Ethan Tseng
is currently a Ph.D. candidate in Computer Science at Princeton University, supervised by Prof. Felix Heide. He received his B.S. in Electrical and Computer Engineering from Carnegie Mellon University.
Douglas Lanman
is the Senior Director of Display Systems Research at Reality Labs Research, Meta, where he leads investigations into advanced display and imaging technologies.
His prior research focuses on head-mounted displays, glasses-free 3D displays, light field cameras, and active illumination for 3D reconstruction and interaction.
Douglas holds a B.S. in Applied Physics from Caltech, and M.S. and Ph.D. degrees in Electrical Engineering from Brown University.
Felix Heide is an assistant professor at Princeton University, where he leads the Computational Imaging Lab.
He obtained his BS and MS from the University of Siegen and his PhD from the University of British Columbia, where his doctoral dissertation won the Alain Fournier PhD Dissertation Award and the SIGGRAPH Outstanding Doctoral Dissertation Award.
He completed his postdoc at Stanford University and has been at Princeton since 2020.
Oliver S. Cossairt
is a research scientist at Reality Labs Research, Meta and an Associate Professor in the Electrical Engineering and Computer Science Department at Northwestern University. He holds an M.S. from the MIT Media Lab, and a Ph.D. in Computer Science from Columbia University. With a background as an Optical and Software Engineer at Actuality Systems, Oliver has received several accolades, including the NSF Graduate Research Fellowship, ICCP Best Paper and Honorable Mention, and an NSF CAREER Award.
|
http://arxiv.org/abs/2307.04385v1 | 20230710074314 | Growing Fast without Colliding: Polylogarithmic Time Step Construction of Geometric Shapes | [
"Nada Almalki",
"Siddharth Gupta",
"Othon Michail"
] | cs.DS | [
"cs.DS",
"cs.CG",
"cs.RO"
] |
Growing Fast without Colliding
Department of Computer Science, University of Liverpool, [email protected]
Department of Computer Science, University of Warwick, [email protected]
Department of Computer Science, University of Liverpool, [email protected]://orcid.org/0000-0002-6234-3960
N. Almalki, S. Gupta, and O. Michail
Almalki, Gupta, Michail
Theory of Computation → Computational Geometry; Theory of Computation → Design and analysis of algorithms
Growing Fast without Colliding: Polylogarithmic Time Step Construction of Geometric Shapes
Othon Michail
August 12, 2023
==========================================================================================
Building on two recent models of Almalki and Michail <cit.> and Gupta et al. <cit.>, we explore the constructive power of a set of geometric growth processes. The studied processes, by applying a sequence of centralized, parallel, and linear-strength growth operations, can construct shapes from smaller shapes or from a singleton exponentially fast. A technical challenge in growing shapes that fast is the need to avoid collisions caused, for example, when the shape breaks, stretches, or self-intersects. We distinguish two types of growth operations —one that avoids collisions by preserving cycles and one that achieves the same by breaking them— and two types of graph models. We study the following types of shape reachability questions in these models. Given a class of initial shapes ℐ and a class of final shapes ℱ, our objective is to determine whether any (some) shape S ∈ℱ can be reached from any shape S_0 ∈ℐ in a number of time steps which is (poly)logarithmic in the size of S. For the reachable classes, we additionally present the respective growth processes. In cycle-preserving growth, we study these problems in basic classes of shapes such as paths, spirals, and trees and reveal the importance of the number of turning points as a parameter. We give both positive and negative results. For cycle-breaking growth, we obtain a strong positive result —a general growth process that can grow any connected shape from a singleton fast.
§ INTRODUCTION
In recent years, the connection between algorithmic frameworks and the natural world has become increasingly evident and is opening up new research avenues. The principles and mechanisms underlying biological systems can often be modeled using computational approaches. This has led to the development of new computational frameworks and models inspired by biological systems. Examples are brain computation <cit.>, passively-dynamic systems <cit.>, and mobile robotics <cit.>. Recent research on programmable matter <cit.> is concerned with the algorithmic control of physical properties of programmable materials, such as their shape.
A set of recent models in the theory of DNA self-assembly and reconfigurable robotics have attempted to incorporate the concept of growth, which is a fundamental process in organisms. The processes that can be described in those models mimic the process of growth and development in biology. This, on one hand, enables the efficient algorithmic construction of complex shapes and structures and on the other might give insight into some of the algorithmic properties underlying biological systems.
Advances in geometric algorithms have led to significant progress in the theory of modular robotics and self-reconfigurable systems. The underlying systems consist of small, simple, and interchangeable components that can reconfigure themselves into various shapes and structures <cit.>. The efficient construction of geometric shapes is an important algorithmic objective in this context. This work, building on the models of Almalki and Michail <cit.> and Gupta et al. <cit.> further explores the algorithmic and structural properties of geometric growth processes.
§.§ Our Approach and Contribution
We explore the properties of a growth process that was proposed and largely left open in <cit.>.
It is the most general of the growth processes studied in <cit.> and the one in which there is no a priori restriction on the set of nodes that can grow in a given time step. Two different types of this process and its underlying growth operations can be identified: cycle-preserving growth and cycle-breaking growth. Intuitively, the former avoids collisions by preserving cycles, and the latter achieves the same by breaking them. For these two types of growth processes, the present study revolves around the following types of shape-reachability problems:
Given a class of initial shapes ℐ and a class of final shapes ℱ, determine whether any (some) shape S ∈ℱ can be reached from any shape S_0 ∈ℐ in a number of time steps which is (poly)logarithmic in the size of S. In case of a positive answer, we additionally want to provide the respective growth process.
All studied processes and constructions in this paper are centralized. We typically solve a given instance of the problem by designing a parameterized growth process —i.e., a centralized schedule of parallel growth operations— that works for all pairs of input-output shapes in the respective classes. Lower bounds for specific classes of shapes are established by proving that any growth process would fail to be efficient for all pairs of input-output shapes drawn from these classes. Distributed solutions fall beyond the scope of the present paper and form an interesting direction for future research. The main reason for adopting a centralized perspective is that both the centralized and distributed properties of such processes remain largely unexplored and the centralized is a more natural starting point. Centralized lower bounds immediately hold in the distributed case and centralized upper bounds can hint first —possibly inefficient— distributed solutions.
Collision avoidance is a core technical challenge in coming up with exponentially fast growth schedules. Note that if the requirement to avoid collisions —and a few other modeling assumptions related to collisions— were dropped, it would become straightforward to grow some classes of shapes that are otherwise hard to grow fast. For example, any spanning tree —and consequently any connected shape with such a spanning tree— having a bounded number of turning points on every root-to-leaf path could be grown as follows. We would first grow the tree of turning points by a parallel BFS, each time step t generating the turning points at turning-point-distance t from the root. This is linear in the maximum number of turning points on a path and possibly violates the requirement that no two nodes be collocated. We would then grow in parallel all segments between consecutive turning points to grow the tree to its final size. The latter can be done in time logarithmic in the length of the longest segment. Again, parallel growth could cause intersections between branches of the tree that we have now ignored. Overall, we would pay a logarithmic number of time steps. It will become evident that in the presence of collisions —and it is necessary to take collisions into account for practical implementations— more elaborate approaches are needed to get fast growth of shape classes as basic as paths and trees.
For cycle-preserving growth, in both the adjacency and connectivity graph models, we show that different graph classes can be constructed within (poly)log n time steps, n being the size of the final shape throughout.[It is important to note that we employ two distinct notions of time. The first refers to the time steps involved in the growth process, while the second refers to the running time of a centralized algorithm responsible for determining reachability between shapes and providing corresponding schedules. To maintain clarity, we will consistently differentiate between these two concepts, referring to the former as time steps and the latter as time.]
For path shapes characterized by a parameter k, which represents the number of turning points on the path, we prove that Ω (k log k) time steps are required to grow them from a singleton.
For cycle-breaking growth, our main contribution is a general algorithm that gives a growth schedule for any connected shape from a singleton. All schedules generated by the algorithm reach their final shape exponentially fast. We also study the weaker version of the shape-reachability problem and prove that any connected shape can be transformed into a tree within two time steps only.
In Section <ref>, we formally define the considered growth models and problems. In Section <ref>, we present our results for cycle-preserving growth in the adjacency graph model (Section <ref>) and the connectivity graph model (Section <ref>). In Section <ref>, we study the cycle-breaking type of growth processes. A weaker type of reachability is discussed in Section <ref>. In Section <ref>, we conclude and give further research directions opened by our work.
§ MODELS AND PRELIMINARIES
§.§ The Growth Models
The models studied in this paper build on the models of <cit.> and <cit.>. We consider a 2-dimensional square grid. Each grid point
is identified by its x and y coordinates, where x ≥ 0 indicates the column and y ≥ 0 indicates the row. A shape S is defined by a set of nodes and a set of connections between the nodes. Each node u occupies
a grid point (u_x,u_y) and is represented by a circle drawn on that point.
For a set of nodes V, two nodes u=(u_x, u_y) and v=(v_x, v_y) in the set are adjacent if u_x∈{v_x-1,v_x+1} and u_y=v_y or u_y∈{v_y-1,v_y+1} and u_x=v_x, that is if they are one orthogonal distance apart.
Nodes can only be connected —in which case we also call them neighbors— if they are adjacent. We consider two models of shape connectivity. One is based on the adjacency graph and the other on the connectivity graph, which can be any subgraph of the adjacency graph. For a shape S defined by the adjacency graph on a set of nodes V, we have S=(V,A) where A={uv | u,v∈ V and u,v are adjacent}.
For a shape S defined by a connectivity graph on a set of nodes V, we have S=(V, E) where E⊆ A. A shape S is connected if its graph is a connected graph. We restrict attention to connected shapes. We use n or |S| to denote |V|, i.e., the total number of nodes in a given shape S=(V,E).
Any connected shape S on the grid defines an (orthogonal) polygon that forms the external boundary of S. By the Jordan curve theorem <cit.>, the external boundary of S partitions the grid into an interior and an exterior of S. If a set of points H is a subset of the interior of S and shares no point with the external boundary of S, then we call H fully/strictly enclosed in the external boundary of S. Given a connected shape S, a hole of S is a maximal connected shape of unoccupied points H, strictly enclosed in the external boundary of S. A connected shape S with no holes is called compact. A row (column) of a shape S is the set of all nodes of S with the same y coordinate (x coordinate, resp.).
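As an illustration of the two graph models, the following minimal Python sketch (the function name is ours, not from the paper) computes the adjacency graph of a shape given as a set of occupied grid points; a connectivity graph would simply be any chosen subset of these edges.

def adjacency_edges(nodes):
    # nodes: set of (x, y) grid points occupied by the shape.
    # Returns the edge set A of the adjacency graph: all pairs of occupied
    # points at orthogonal distance one.
    node_set = set(nodes)
    edges = set()
    for (x, y) in node_set:
        for dx, dy in ((1, 0), (0, 1)):  # checking right/up only avoids duplicate pairs
            if (x + dx, y + dy) in node_set:
                edges.add(((x, y), (x + dx, y + dy)))
    return edges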
A growth operation (also called doubling in <cit.> and expansion in <cit.>) applied on a node u of a shape S, generates a new node in one of the points adjacent to u and possibly translates some part of the shape.
In general, applying one or more growth operations to a shape S either causes a collision or yields a new shape S'. There are two types of collisions: node collisions and cycle collisions.
Unless otherwise stated, we shall assume without loss of generality (abbreviated “w.l.o.g.” throughout) that there is an anchor node u_0∈ V that is stationary and other nodes move relative to it. This is sufficient because the constructed shapes are considered to be equivalent up to translations and their final absolute coordinates are not important for our purposes. To simplify the exposition, we first define growth operations for tree shapes and then generalize to any connected shape.
Let the shape be a tree T=(V,E). A single growth operation is applied on a node u∈ V toward a point (x,y) adjacent to u. If point (x,y) is occupied by a node v and uv∉ E then a collision occurs. The remaining cases are (i) (x,y) is empty, (ii) (x,y) is occupied by a node v and uv∈ E. We first define the effect in each of these cases when neighbor handover is not allowed. In case (i), the growth operation generates a node u' at the empty point (x,y) and connects it to u. In case (ii), assume w.l.o.g. that u is closer to u_0 in T than v.
Let T(v) denote the subtree of T rooted at node v. Then, the operation generates a node u' between u and v, connected to both, which translates T(v) by one unit away from u along the axis parallel to uv. After this, u' occupies (x,y) and uv has been replaced by {uu',u'v}. If neighbor handover is allowed, then any neighbor w of u perpendicular to uu' can be handed over to u'. This happens by a unit translation of T(w) or T(u) along the axis parallel to uu', depending on which of u,w, respectively, is closer to u_0 in T.
Let R be a set of operations to be applied in parallel to a connected shape S, each operation on a distinct pair of nodes or a node and an unoccupied point.
We assume that all operations in such a set of parallel operations R are applied concurrently, have the same constant execution speed, and their duration is equal to one time step.
Let T=(V,E) be a tree and u_0∈ V its anchor. We set u_0 to be the root of T. We want to determine the displacement of every v∈ V∖{u_0} due to the parallel application of the operations in R. As u_0 is stationary and each operation translates a subtree, only the operations on the unique u_0v path contribute to v's displacement.
In particular, any such operation contributes one of the unit vectors ⟨ -1,0⟩, ⟨ 0,-1⟩, ⟨ +1,0⟩, ⟨ 0,+1⟩ to the motion vector v⃗ of v.
Moreover, for any node v∈ V that doubles toward an empty point, we add a new node v' with a corresponding unit motion vector v⃗.
We can use the set of motion vectors to determine whether the trajectories of any two nodes will collide at any point. This type of collision is called a node collision (see Figure <ref>).
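The displacement computation can be illustrated by the following simplified Python sketch (an assumed helper of ours, not the paper's code): it sums, for every node, the unit vectors of the operations lying on its unique root-to-node path and flags a node collision when two nodes would end up on the same grid point. For brevity it only checks final positions rather than full trajectories, which is a simplification of the collision definition above.

def node_collision(positions, parent, ops):
    # positions: dict node -> (x, y); parent: dict node -> parent node
    #            (the anchor/root maps to None).
    # ops: dict mapping a tree edge (parent, child) to the unit vector (dx, dy)
    #      of a growth operation applied on that edge, if any.
    def motion(v):
        dx = dy = 0
        while parent[v] is not None:
            vec = ops.get((parent[v], v))
            if vec is not None:
                dx, dy = dx + vec[0], dy + vec[1]
            v = parent[v]
        return dx, dy

    final = {}
    for v, (x, y) in positions.items():
        mx, my = motion(v)
        target = (x + mx, y + my)
        if target in final:   # two nodes would occupy the same grid point
            return True
        final[target] = v
    return False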
Let now S be any connected shape with at least one cycle and any node u_0 be its anchor. Then,
a set of parallel operations R on S either causes a cycle collision or its effect is essentially equivalent to the application of R on any spanning tree of S rooted at u_0.
Let u, v be any two nodes on a cycle. If p_1 and p_2 are the two paths between u and v of the cycle, then v⃗_p_1=v⃗_p_2 must hold:
the displacement vectors along the paths p_1 and p_2 are equal.
Otherwise, we cannot maintain all nodes or edges of the cycle. Such a violation is called a cycle collision as shown in Figure <ref>. We call a set of operations that does not cause any node or cycle collisions collision free.
A growth process starts from an initial shape S_0 —often a singleton— and by applying a sequence of parallel growth operations of a given type, goes through a sequence of shapes until it reaches a target shape. The considered growth processes operate in discrete time steps. In each time step t≥ 1, a set of parallel growth operations —possibly a single operation— are applied on the current shape S_t-1 to give the next shape S_t. To simplify our algorithms and w.l.o.g. we require parallel operations to have the same cardinal direction.
This divides time steps into those with horizontal only and those with vertical only motion and implies that a node gets at most one growth operation per time step.
We consider two general types of growth processes, cycle-preserving growth and cycle-breaking growth. Intuitively, the former type avoids cycle collisions by maintaining all cycles affected by growth operations and the latter by breaking them.
A cycle-preserving growth process applies a collision free set of parallel growth operations R_t to shape-instance S_t-1, for all time steps t≥ 1.
A cycle-breaking growth process additionally removes a —possibly empty— subset of the edges of S_t-1 that does not disconnect the shape, before applying R_t to it. If neighbor handover is allowed, growth of a node u generating a new node u' in direction d can hand any neighbor w of u perpendicular to d over to u'. In the adjacency graph model, at the end of each time step t, edge uv is added for all adjacent nodes u,v that are not connected. In the connectivity graph model, no such edges are added.
For the models of Definition <ref>, the following properties hold:
* Under the connectivity graph model, the growth processes never increase the number of cycles.
* Under the connectivity graph model, if S_0 is a singleton, the processes can only construct tree shapes.
* Under both graph models, the cycle-preserving process never decreases the number of cycles.
* Under the connectivity graph model, the cycle-preserving process preserves the number of cycles.
Property (2) is a special case of (1). Property (4) follows by taking (1) and (3) together. So, it is sufficient to prove properties (1) and (3). We first prove these without neighbor handover. In that case, the cycle-preserving process cannot remove any edges and neither do the graph models, thus property (3) holds. Property (1) follows by observing that, without neighbor handover, the growth processes can only add leaves or increase the length of existing line segments and that the connectivity graph model does not modify any edges. We now show that these remain true when neighbor handover is allowed. Let u be a node on which a growth operation is applied, and u_N,u_E,u_S,u_W its up to 4 neighbors in the respective cardinal directions. Let w.l.o.g. u^'_E be the node generated by the operation in the east direction. The nodes that can be handed over from u to u^'_E are u_N and u_S. If we show that the number of cycles is invariant of handover for both types of processes, then propositions (1) and (3) will follow. It is sufficient to consider those cycles that before applying the operation were using edge u_Nu, uu_S or both. If only u_N is handed over to u^'_E then any cycle using u_Nuu_W is replaced by a cycle using u_Nu^'_Euu_W, any using u_Nuu_E by one using u_Nu^'_Eu_E, and any using u_Nuu_S by one using u_Nu^'_Euu_S. The case in which only u_S is handed over is symmetric. If both u_N and u_S are handed over to u^'_E then the only difference is that any cycle using u_Nuu_S is now replaced by one using u_Nu^'_Eu_S. It follows that there is a one-to-one correspondence between previous and new cycles due to neighbor handover, which gives the required invariant.
It is worth noting that the cycle-breaking growth process is independent of whether the shape is represented using the adjacency or connectivity graph model. In both models, cycle-breaking growth follows the same principles and achieves the same results. However, this is not the case for cycle-preserving growth, as it behaves differently depending on the chosen graph model.
The property of neighbor handover is specific to cycle-breaking growth process, where neighboring nodes are transferred during the growth process.
Furthermore, it is important to highlight that any positive results obtained for the cycle-preserving growth process also apply to the cycle-breaking growth process. However, the reverse is not necessarily true, as the behavior and characteristics of the two operations differ.
§.§ Problem Definitions
The following two reachability problems between classes of shapes are defined for all types of growth processes described in Definition <ref>.
Given a growth model, a class of initial shapes ℐ (possibly consisting only of a singleton), and a class of final shapes ℱ we want to determine if there exists a time bound t=O(log n) or t=(poly)log n for which the following holds.
* Strong Reachability: Any shape in ℱ can be grown in the given model within t time steps from any shape in ℐ.
* Reachability: Starting from any shape in ℐ some shape in ℱ can be grown in the given model within t time steps.
For the reachable or strongly reachable classes, we additionally want to give the respective growth processes.
Some of our results concern shapes drawn from special graph classes, such as paths, spirals, and staircases, which we now define.
A node u_i of a path P=⟨ u_1, u_2, …, u_n ⟩ is called a turning point or turn if either i ∈{1,n} or u_i-1u_i is perpendicular to u_iu_i+1. For uniformity of our arguments, we add the endpoints of P to the set of turning points.
The direction d(u_i) of an internal turning point u_i, where 1≤ i < n, is left if the orientation changes from d(u_i) to d(u_i+1) in a counterclockwise manner, and right if it changes in a clockwise manner.
A staircase is a path whose turning points, when ordered from one endpoint to the other, alternate between two clockwise- or counterclockwise- consecutive cardinal directions.
A spiral S is a path whose turning points, when ordered from one endpoint to the other, follow a continuous and unidirectional sequence of consecutive cardinal directions in either a clockwise or counterclockwise manner.
A fast line growth process begins with a singleton initial shape and by successively doubling all nodes grows a straight path of length n in O(log n) time steps. Fast line growth is used as a sub-process in most of our constructions in order to efficiently grow line segments of a shape. We use the term segment to refer to a line segment of a shape. A fast rectangle growth process —defined similarly— grows any compact rectangular shape of n nodes in O(log n) time steps (see <cit.> for these basic processes).
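A minimal sketch of the time-step count of fast line growth (an illustrative helper of ours, not from the paper): every node of the current segment generates a new node in the same direction, so the length at most doubles each time step.

def fast_line_growth_steps(n):
    # Number of time steps needed to grow a straight path of n nodes from a
    # singleton when the length (at most) doubles every time step.
    length, steps = 1, 0
    while length < n:
        length = min(2 * length, n)  # in the last step only part of the nodes grow
        steps += 1
    return steps                     # equals ceil(log2(n)) for n >= 1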
§ CYCLE-PRESERVING GROWTH PROCESSES
This section presents our results for cycle-preserving growth in the adjacency graph model (Section <ref>) and the connectivity graph model (Section <ref>).
§.§ Cycle-Preserving in the Adjacency Graph Model
We begin with positive results for cycle-preserving growth in the adjacency graph model. Due to cycle-preserving growth being a special case of cycle-breaking growth, positive results for the former immediately hold for the latter.
Assume that a spanning tree T of a shape S has at most k turning points in every root-to-leaf path. Then, if we had access to cycle-breaking growth instead, we could use breadth-first search to grow S in O(klog n) time steps. Starting from the root, all root-to-leaf paths can be grown in parallel. Every such path consists of at most k-1 line segments, each of length at most n, which —by using fast line growth— can be sequentially grown within O(klog n) time steps.
BFS cannot be directly applied by cycle-preserving growth in the adjacency graph model. This is due to the additional cycles that the graph model creates between adjacent segments, making the growth of a segment depend on the growth of segments adjacent to it.
We now describe a variant of BFS that avoids this by treating adjacent segments differently.
* Consider any tree shape T rooted at u_0.
* The process proceeds in phases. In each phase i≥ 1, we will grow all segments at segment-distance i from the root.
We do this by first growing in parallel the horizontal subset of those segments in a horizontal sub-phase i_h, followed by the vertical ones in a vertical sub-phase i_v.
* Each segment L —either horizontal or vertical— of phase i is grown as follows:
* For any sub-segment s of L which is adjacent to a segment s_past grown in a previous phase, grow s by duplicating s_past. Do this in parallel for all these sub-segments. The remaining sub-segments are then grown in parallel using fast line growth.
* For any sub-segment s of L which is adjacent to a segment s_present that will be grown in the same phase i in parallel to s, we use two stages i_h_even and i_h_odd (i_v_even and i_v_odd for the vertical sub-phase). In i_h_even, we grow the even-row segments followed by the odd-row segments in i_h_odd. We then repeat for the vertical sub-phase.
See Figure <ref> for an illustration of this process.
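The phase scheduling of this BFS variant can be sketched in Python as follows (our own illustration; the segment-tree representation and field names are assumptions): phases are ordered by segment-distance from the root, each phase is split into a horizontal and a vertical sub-phase, and each sub-phase into an even and an odd stage.

from collections import deque

def bfs_variant_schedule(children, root, orientation, parity):
    # children: dict segment -> list of child segments in the segment tree of T.
    # orientation: dict segment -> 'h' or 'v'.
    # parity: dict segment -> 0 or 1 (parity of the row for horizontal segments,
    #         of the column for vertical ones).
    # Returns a list of groups; the segments inside a group are grown in parallel.
    depth, queue = {root: 0}, deque([root])
    while queue:
        s = queue.popleft()
        for c in children.get(s, []):
            depth[c] = depth[s] + 1
            queue.append(c)

    schedule = []
    for d in range(max(depth.values()) + 1):
        phase = [s for s, dist in depth.items() if dist == d]
        for o in ('h', 'v'):           # horizontal sub-phase, then vertical
            for p in (0, 1):           # even stage, then odd stage
                group = [s for s in phase if orientation[s] == o and parity[s] == p]
                if group:
                    schedule.append(group)
    return schedule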
If every root-to-leaf path of a tree T has at most k turns, then the BFS variant grows T in O(k log n) time steps.
To prove the statement we use induction on the number of phases.
For the base case, since L_1 is the only segment at this point, it covers all the paths within distance i=1 in T, so the statement holds.
For the inductive step, let us assume that after i phases, the BFS variant has grown the first i segments of every root-to-leaf path of T.
Then, in phase i+1, we grow the line segment L_i+1 of each such path. For each sub-segment s of L_i+1 that is adjacent to a sub-segment s_past grown in a previous phase, we can directly grow s by duplicating s_past in one time step.
For any sub-segment s of L_i+1 that is adjacent to a sub-segment s_present that will be grown in the same phase, we use two sub-phases; assuming w.l.o.g. that it is a horizontal line segment, these are i_h_even and i_h_odd. In i_h_even, we grow the even-row sub-segments, and in i_h_odd, we grow the odd-row sub-segments. This ensures that adjacent sub-segments in the same phase are grown sequentially without collisions.
By the induction hypothesis combined with the above, after i+1 phases we have grown the first i+1 segments of every root-to-leaf path. Since every root-to-leaf path of T has at most k turns, at most k+1 phases are needed, each costing O(log n) time steps; therefore, T can be grown in O(k log n) time steps.
Since all line segments L_1, L_2, …, L_k, L_k+1 are constructed using the BFS variant, the structure of the tree is maintained, and no line segments collide with other line segments during the parallel growth.
It follows that:
If a shape S has a spanning tree with at most k turns in every root-to-leaf path, then S can be grown in O(klog n) time steps.
Let S be any shape with a computed spanning tree T(S) that has at most k turns on every root-to-leaf path. According to Lemma <ref>, T(S) can be grown in O(k log |T(S)|) time steps. By growing T(S), we obtain all the vertices of S, and to create the final shape S, the adjacency graph model adds the edges between neighboring vertices of the constructed T(S) in a single time step.
The overall number of time steps for growing the shape S can therefore be expressed as O(k log |T(S)| + 1). Since |T(S)| is at most the size of S (|T(S)| ≤ |S|), this simplifies to O(k log |S| + 1). Hence, S can be grown in O(k log n) time steps.
The next proposition is making use of a fast procedure that fills all holes of a given shape S in order to obtain a compact extension of S.
Given any shape S with at least one hole, there is a sequence of growth operations
of length O(log n) that yields a compact shape S'.
Assume that S has d holes H_1, H_2, …, H_d. We show how a single hole H∈{H_1, H_2, …, H_d} can be filled up in logarithmic time. The statement will then follow by applying this in parallel to all holes.
Hole H is defined by its boundary B(H), which is a closed polygon of nodes and forms an internal boundary of S. The interior H of B(H) is by definition, empty. We show how the nodes of B(H) can be used to efficiently fill H up with nodes.
W.l.o.g., we show how to do this vertically. Let C_1,…, C_k be the consecutive columns containing the empty points of H, say from left to right. Every C∈{C_1,…,C_k} consists of one or more empty vertical segments s_1,…,s_l. Note that all segments defined by the holes of S are pairwise disjoint. Each of the two endpoints of s∈{s_1,…,s_l} is adjacent to a node of B(H). Let (x,y), (x,y+|s|-1) be the bottom-most, uppermost endpoint of s and u, v be its adjacent node from B(H) lying below, above, respectively. Starting from u, we apply the BFS variant introduced at the beginning of this section.
We perform a fast line growth process along [(x,y),(x,y+|s|-1)], which generates a path of length |s| connecting u to v within O(log |s|) time steps. By doing this in parallel for all segments of all holes of S, we can make S compact within O(log (max_s{|s|}))=O(log n) time steps.
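The hole-filling procedure can be illustrated with the following Python sketch (an assumed helper of ours, not the paper's code): it identifies all points belonging to holes by flood-filling the exterior of the shape; each maximal vertical run of hole points then corresponds to a segment s that is filled by fast line growth.

from collections import deque

def hole_points(occupied):
    # occupied: set of (x, y) grid points of a connected shape S.
    xs = [x for x, _ in occupied]
    ys = [y for _, y in occupied]
    x0, x1 = min(xs) - 1, max(xs) + 1
    y0, y1 = min(ys) - 1, max(ys) + 1
    # Flood-fill the empty points reachable from outside the bounding box.
    outside, queue = set(), deque([(x0, y0)])
    while queue:
        x, y = queue.popleft()
        if (x, y) in outside or (x, y) in occupied:
            continue
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            continue
        outside.add((x, y))
        queue.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    # Every empty point not reachable from the outside lies in a hole.
    return {(x, y)
            for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)
            if (x, y) not in occupied and (x, y) not in outside}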
By combining Corollary <ref> with Proposition <ref> we get:
Any compact shape S whose perimeter has a bounded number of turns can be constructed in O(log n) time steps.
Consider any shape S with a constant number c of turns on its perimeter. Let T(S) be a computed spanning tree of S's perimeter. It is important to note that T(S) is a single root-to-leaf path with at most c turns, as per the properties of a spanning tree of a cycle. Following Corollary <ref>, we can construct S's perimeter within logarithmic time steps using the computed spanning tree T(S). This step ensures that we have constructed the entire perimeter of S accurately and without collision.
Since we consider the adjacency graph model, the missing edge connecting the start and end of S's perimeter is added automatically. This ensures that the shape S is fully connected and maintains its original adjacency connections.
After constructing the perimeter of S and ensuring its full connectivity, we are left with a shape S that has one hole. To fill this hole and achieve a compact shape, we can apply Proposition <ref>. According to the proposition, the hole-filling process can be completed within at most log n time steps. We conclude that the total time complexity of constructing such a shape S with a constant number of turns in its perimeter is O(log |S|) + O(log n), which equals O(log n) time steps.
A family of shapes denoted by NICE was introduced by Almethen et al. <cit.>. A NICE shape consists of a horizontal line and various vertical lines that are perpendicular to the original horizontal line. This family of shapes can be constructed in logarithmic time steps using a growth operation from a single node.
All NICE shapes can be constructed in O(log n) time steps.
Assume a shape S_NICE∈ NICE of size n that contains, w.l.o.g., a central horizontal line L_h of length 1≤ |L_h| ≤ n and a number of vertical lines L_1, L_2, …, L_v of total length 1 ≤ |L_v| < n that are orthogonal to L_h.
To construct S_NICE, we begin by growing the horizontal line L_h using fast line growth, which starts from a single node and expands it in log|L_h| time steps. Next, we simultaneously grow all the vertical lines L_1, L_2, …, L_v in parallel. To achieve simultaneous growth of multiple vertical lines without collision, we can employ the BFS variant described earlier in this section. This ensures that the vertical lines do not collide with each other during the parallel growth.
Alternatively, if S_NICE has a vertical line with a length 1≤ |L_v| ≤ n, we can construct the vertical line first and then grow all the horizontal lines in parallel. It is important to note that since we are performing the cycle-preserving growth under the adjacency model, all the edges between the vertical segments will be added automatically during the growth operation without any form of collision.
The overall time complexity of constructing S_NICE using this method is determined by the time required to grow the longest line segment, which is at most log n time steps.
Any staircase shape S with a bounded number of steps can be constructed in O(log n) time steps.
A staircase is an alternating sequence of turning points and line segments connecting consecutive turning points. It can be uniquely defined by the coordinates of its turning points u_1, u_2, …, u_k. A bounded number of steps implies a bounded number of turning points. Thus, k is a constant. To construct such a staircase fast, we shall first construct the turning points sequentially and then grow in parallel the segments between them.
In the first phase, the k turning points are generated by a sequential —linear time step— process as follows. Starting from node u_1 which is the original singleton, u_i generates u_i+1 in time step i in the direction that respects their relative positions in S. This takes k-1 time steps to generate all turning points. The resulting staircase of turning points is equivalent to the one obtained by compressing all segments of S to unit length, which proves that this phase is collision free.
In the second phase, we grow —through a fast path growth process— all unit segments in parallel to their final length. Due to the geometry of the staircase, this phase is also collision free. It grows all segments within O(log (max_s{|s|})) time steps, where the maximum is over all segments s of the original staircase S. Thus, the whole process runs for (k-1)+O(log (max_s{|s|}))=O(log n) time steps.
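A small Python sketch of the resulting time-step count (illustrative only): the k turning points are generated sequentially, and then all segments are grown to their final lengths in parallel.

import math

def staircase_growth_steps(turning_points):
    # turning_points: list of (x, y) coordinates of the staircase's turning
    # points, ordered from one endpoint to the other (k = len(turning_points)).
    k = len(turning_points)
    seg_lengths = [abs(x2 - x1) + abs(y2 - y1)
                   for (x1, y1), (x2, y2) in zip(turning_points, turning_points[1:])]
    # Phase 1: k - 1 time steps to generate the turning points sequentially.
    # Phase 2: grow all unit segments in parallel, logarithmic in the longest one.
    phase2 = math.ceil(math.log2(max(seg_lengths))) if seg_lengths else 0
    return (k - 1) + phase2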
§.§ Cycle-Preserving in the Connectivity Graph Model
This section defines the class of shapes that can be constructed by cycle-preserving growth in the connectivity graph model.
If a shape S can be grown in k time steps from a singleton u_0 in cycle-preserving growth, then S has a spanning tree T rooted at u_0, such that any root-to-leaf path of T has at most k turns.
Consider any shape S that can be grown in k time steps from a single node u_0, and let us assume, as the inductive hypothesis, that the shape grown up to time step i has a spanning tree T_i with at most i turns on every root-to-leaf path.
Let us assume w.l.o.g that at the next time step i+1, there is a horizontal cycle-preserving growth operation o_i. Then, the number of turns k in T_i can only be increased after operation o_i by at most one if one of the following cases occurs:
* If a line segment L_i in T_i is split into two line segments.
* If an additional turning point k+1 appears by extending the leaf of the line segment L_i.
For the first case, since o_i is a cycle-preserving growth operation, it cannot split any horizontal or vertical segment L_i. As a result, we keep growing or translating the whole line segment L_i (i.e., cycle-preserving growth never increases the number of turns in the tree T_i in this way, because it preserves all edges when growing any line segment), as shown in Figure <ref>.
For the second case, since o_i is a horizontal cycle-preserving growth, we can add one new turning point k+1 only
if the segment L_i, leading to the leaf, is vertical. However, if the segment is horizontal, applying o_i will only expand its length, not the number of turning points k.
Also, a new turning point is produced if a new horizontal root-to-leaf path is created by generating a node off a node of a vertical segment. These new leaves can increase the maximum number of turns in T_i by at most 1. Therefore, T_i+1 is an extension of T_i with at most i+1 turns on every root-to-leaf path. Since every time step increases the maximum number of turns by at most one, a shape grown in k time steps has a spanning tree with at most k turns on every root-to-leaf path, and the statement holds.
If a shape S has a spanning tree T with O(log n) turns on every root-to-leaf path, then S can be constructed within O( log^2 n) time steps.
Consider a connected shape S with a spanning tree T that has at most k = O(log n) turns on every root-to-leaf path. To construct S, we use the BFS variant, growing line segments in parallel.
We start from the root u_0 of the spanning tree T and construct the line segments phase by phase, in parallel within each phase. Since each root-to-leaf path of T has at most k turns, every such path consists of at most k+1 line segments, and we can build all segments within at most k+1 phases. In the worst case, each phase costs O(log n) time steps, which gives at most O(k log n) time steps in total. With k = O(log n) turns, this amounts to O(log^2 n) time steps to build the shape S.
Any shape S with at most k turns can be compressed into a new shape S' with at most k turns and O(k^2) nodes on an O(k× k) grid.
Let S=(V, E) be a shape with O(k) turns, where V is the set of nodes and E is the set of edges. We will construct a compressed shape S'=(V', E') with at most k turns and k^2 nodes on an O(k × k) grid.
Since S has k turning points, at most k rows and at most k columns of S contain a turning point, and we divide S accordingly into at most k horizontal rows and k vertical columns, forming a grid of size O(k × k).
Then, we identify the turning points in S and mark them as special nodes. Let V_k be the set of special nodes representing the turning points k in S.
For each row in the grid, we keep only nodes v∈ V_k and remove any duplicated nodes that are not connected to a node in V_k. The remaining nodes (i.e., that are connected to a node in V_k) in each row are stored in the set V_r. Similarly, we do the same for each column and keep the remaining nodes in the set V_c.
Let V' be the union of V_k, V_r, and V_c. The set of nodes in the compressed shape S' is V'. Then, we construct the set of edges E' in S' by including all edges in E that connect nodes in V'.
By construction, the compressed shape S' has at most k turns because we only consider the special nodes representing the turning points in S. The size of S' is O(k^2) since the grid has size O(k × k). Therefore, any shape S with at most k turns can be compressed into a new shape S' with at most k turns and O(k^2) nodes.
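A simplified Python sketch of the compression (our own illustration): it keeps only the rows and columns that contain turning points and remaps the surviving nodes onto an O(k × k) grid, omitting the bookkeeping of connector nodes described in the proof above.

def compress(nodes, turning_points):
    # nodes: set of (x, y) points of the shape S; turning_points: subset of nodes.
    rows = sorted({y for (_, y) in turning_points})
    cols = sorted({x for (x, _) in turning_points})
    row_idx = {y: i for i, y in enumerate(rows)}
    col_idx = {x: i for i, x in enumerate(cols)}
    # Keep only nodes lying on a kept row and a kept column, preserving order.
    return {(col_idx[x], row_idx[y])
            for (x, y) in nodes
            if x in col_idx and y in row_idx}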
To build a spiral shape S with a total number of k turns where k= log n by using BFS, we need O(log n loglog n) time steps.
From the above Lemma <ref>, any spiral with k = log n turns can be compressed into a sequence of k segments, each of length m ≤ log n. Using breadth-first search, the spiral is constructed segment by segment along the path, each segment being expanded to its final length within O(log m) time steps before the next one is started. Therefore, O(k log m) time steps are needed, giving a total of O(log n loglog n) time steps.
A pipelined breadth-first search is a modified version of BFS that can be used to construct spiral shapes in the cycle-preserving growth process efficiently (i.e., within a logarithmic number of time steps). It consists of two main phases; a small step-counting sketch is given after the list:
* Constructing and waiting phase: During this phase, we build at most four turning points in an order that follows the geometry of a shape S.
* Growing phase: In this phase, the segments of the partially constructed structure from the constructing-and-waiting phase grow in parallel, apart from those that have already reached their final length.
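A minimal Python sketch of the resulting time-step count (illustrative; it assumes the spiral is given by the final lengths of its segments, ordered from the outermost layer inwards): in every time step, all generated segments that have not reached their final length double, while the next turning point is generated in the same step.

def pipelined_bfs_steps(final_lengths):
    # final_lengths: final lengths of the spiral's segments, outermost layer first.
    lengths, next_seg, steps = [], 0, 0
    while next_seg < len(final_lengths) or any(
            cur < tgt for cur, tgt in zip(lengths, final_lengths)):
        # All generated, unfinished segments double (growing phase).
        lengths = [min(2 * cur, final_lengths[i]) for i, cur in enumerate(lengths)]
        # The next turning point, i.e., a new unit segment, is generated
        # (constructing-and-waiting phase), pipelined with the growth above.
        if next_seg < len(final_lengths):
            lengths.append(1)
            next_seg += 1
        steps += 1
    return steps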
If a spiral shape S has a total number of k = log n turns, then S can be constructed within a logarithmic number of time steps by using pipelined BFS.
Consider a spiral shape S=(V, E) consisting of a set of layers, where each layer consists of two rows r∈ R and two columns c∈ C on the grid. Each layer S_i contains at most four turning points, namely the points at which a row turns into a column. We will use the pipelined BFS approach to construct such a shape within a logarithmic number of time steps.
After compressing S by using Lemma <ref>, we obtain S', which contains all k turning points and possibly some incompressible nodes that connect these turning points.
Then we build S' as follows:
In phase i=1, we start from a root u_0 and generate the compressed version of the first (external) layer (i.e., its turning points) S_1={v_1,v_2,v_3, v_4} of S node by node.
Then, in phase i=2, we grow every node in parallel in its position and expand every segment of this layer using the cycle-preserving growth.
Following that, we start generating the next (inner) layer, S_i+1 (i.e., the compressed version of the next spiral layer). We continue growing the already generated layers in parallel until the final (innermost) layer S_log n fits inside them.
Finally, because S has a total of log n layers, we spend log n time steps in total generating the layers. Therefore, the total number of time steps to construct the whole shape is log n + log n = O(log n): log n for generating the layers and log n for growing all segments in parallel until they reach their final length.
Let P be a path with k turning points. Let A be an algorithm that generates P from a singleton. Without loss of generality, we can assume that A starts from a turning point of the path P. We now give a few observations and lemmas concerning some properties of A. Recall that an edge, once generated, cannot be deleted in the cycle-preserving model. This immediately implies the following observation.
A node can grow in at most as many different directions as its degree. Moreover, once a node has as many neighbors in the path constructed by A as its degree in P, it can only grow along one of its incident edges in the path.
As there exists a unique subpath between any two vertices in a path, this fact, together with the above observation gives the following observation.
Let x and z be any two vertices of P such that there exists a straight subpath between them in the path constructed so far by A. Then, all the vertices on the subpath between x and z in P will lie on a straight subpath in the final path constructed by A.
We now give the following lemma concerning the order in which the turning points of P are generated by A.
Let P be a path between u and v with k turning points. Let ⟨ tp_1, tp_2, …, tp_k ⟩ be the order of turning points of P from u to v. Let A be any algorithm that generates P from a singleton starting from the turning point tp_i. Then, the sets {tp_i+1, tp_i+2, …, tp_k} and {tp_1, tp_2, …, tp_i-1} of turning points are generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩ and ⟨ tp_i-1, tp_i-2, …, tp_1 ⟩, respectively by A. Moreover, A respects the direction of P at every node while generating the next node from it.
Recall that, an edge, once generated, can not be deleted in the cycle-preserving model. This in turn means that a node can grow in at most its degree many different directions. Moreover, once a node has degree many neighbors, it can only grow along an incident edge.
We first prove that A respects the direction of P at every node while generating the next node from it. As the direction makes sense only when the node already has a neighbor, we prove the statement for the nodes which grow at time step 2 or later. Let A grows the node v at time step t ≥ 2. Assume for contradiction, A does not respect the direction of P at v while generating the next node u. Then, once u is generated, the degree of v is 2 in the path constructed by A so far. By Observation <ref>, we get that A can never create a neighbor of v in the desired direction, a contradiction. Thus, A always respects the direction of P at every node while generating the next node from it.
We now prove the property regarding the order of generation of turning points. We prove that the set {tp_i+1, tp_i+2, …, tp_k} is generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩. The proof for the set {tp_1, tp_2, …, tp_i-1} is similar. Let tp_j+1 be the first turning point that was not generated in the desired order, for j ≥ i. Moreover, let t_k be the turning point that was generated after t_j, for k > j. This implies that there exists a subpath P' from t_j to t_k of the path constructed so far by A which does not contain any other turning points, i.e. P' is drawn as a straight line. As t_j+1 lies between t_j and t_k in P, by Observation <ref>, we get that A can never create the two neighbors of t_j+1 in different directions. This contradicts the fact that t_j+1 is a turning point of P. Thus, we conclude that the set {tp_i+1, tp_i+2, …, tp_k} is generated in the order ⟨ tp_i+1, tp_i+2, …, tp_k ⟩.
Let P be an incompressible spiral path between u and v with k turning points (see Figure <ref>). Moreover, let u be the internal endpoint of P. We now give the following lemma about the lower bound on the number of steps taken by any algorithm that generates P from a single node starting from u.
Let P be an incompressible spiral path between u and v with k turning points. Moreover, let u be the internal endpoint of P. Let A be any algorithm that generates P from a singleton starting from u. Then, A requires Ω(klog k) time steps.
Let ⟨ tp_1 = u, tp_2, …, tp_k = v ⟩ be the order of turning points of P from u to v. By Lemma <ref>, we know that A generates the turning points in the order ⟨ tp_1 = u, tp_2, …, tp_k = v ⟩. Let GT_j be the time step when the turning point tp_j was generated by A, for any j ≥ 2. Let P(t) be the path constructed by A after time step t. Further, let a and b be two vertices of P. We denote by P[a,b] the path between a and b (including both a and b) of P. Moreover, we denote by |a-b|_P the number of edges in P[a,b]. Also, we denote by X(a,P) the x-coordinate of the vertex a in P. To prove the above lemma, we first prove the following lemma about the path constructed by A.
For any j ≥ 5, the path P(GT_j-1) generated by A till time step GT_j - 1 should be the same as the subpath P[tp_1, tp_j-1] of P between tp_1=u and tp_j-1.
We prove the statement by induction on j.
Base case (j=5). Recall that, as P is incompressible, |tp_2 - tp_1|_P = |tp_3 - tp_2|_P = 1. Thus GT_3 = 2 and P(GT_3) = P[tp_1, tp_3]. Assume for contradiction that the lemma is not true for j=5. This means that P(GT_5 - 1) is a subpath of P[tp_1,tp_4]. By Lemma <ref>, we know that GT_5 > GT_4 > GT_3. As P(GT_3) = P[tp_1, tp_3], we get that P[tp_1, tp_3] is a subpath of P(GT_5 - 1). Combining this fact with the fact that P(GT_5 - 1) is a subpath of P[tp_1,tp_4], we get that 1 ≤ |tp_4 - tp_3|_P(GT_5 - 1) < |tp_4 - tp_3|_P = 2. This implies that |tp_4 - tp_3|_P(GT_5-1) = 1. This further means that X(tp_4, P(GT_5 - 1)) = X(tp_1, P(GT_5 - 1)). By Lemma <ref>, we get that A respects the direction of P at every node. Therefore, when tp_5 is generated, it will collide with tp_1, a contradiction (e.g., see Figure <ref>). So, the lemma is true for j = 5.
Inductive hypothesis. Suppose that the lemma is true for j = t - 1 ≥ 5.
Inductive step. We need to prove that the lemma is true for j = t ≥ 6. Assume for contradiction that the lemma is not true for j. This means that P(GT_t - 1) is a subpath of P[tp_1,tp_t-1]. By Lemma <ref>, we know that GT_t > GT_t-1 > GT_t-2. By the inductive hypothesis, we know that P(GT_t-1 - 1) = P[tp_1, tp_t-2]. This implies that P[tp_1, tp_t-2] is a subpath of P(GT_t - 1). Combining this fact and the fact that P(GT_t - 1) is a subpath of P[tp_1,tp_t-1], we get that 1 ≤ |tp_t-1 - tp_t-2|_P(GT_t - 1) < |tp_t-1 - tp_t-2|_P = ⌊t-1/2⌋. This further implies that either t=6 and X(tp_5, P(GT_6 - 1)) = X(tp_1, P(GT_6 - 1)), or X(tp_t-5, P(GT_t - 1)) ≤ X(tp_t-1, P(GT_t - 1)) < X(tp_t-6, P(GT_t - 1)). By Lemma <ref>, we get that A respects the direction of P at every node. Therefore, when tp_t is generated, it will collide with a node on the subpath of P(GT_t-1 - 1) between tp_t-5 and tp_t-6, a contradiction (e.g., see Figure <ref>). So, the lemma is true for j=t.
We now give the proof of Lemma <ref> using Lemma <ref>.
Let ST_j be the time taken by A to create the path P[tp_1, tp_j] starting from tp_1, for any j ≥ 2. Then, by Lemma <ref>, we get that GT_j ≥ ST_j-1 + 1, for any j ≥ 5. Moreover, by Lemma <ref>, we know when tp_j is generated, the subpath from tp_1 to tp_j-1 is already generated by A. So, the difference between P(GT_j) and P[tp_1, tp_j] is the length of the subpath between tp_j-1 and tp_j in both the paths. As we know the subpath between tp_j-1 and tp_j is a straight line path in P, we can generate it in log(|tp_j - tp_j-1|_P). This implies that, ST_j = GT_j + log(|tp_j - tp_j-1|_P). Combining the two equations, we get that ST_j ≥ ST_j-1 + 1 + log(|tp_j - tp_j-1|_P). It is easy to observe that ST_4 = 4. Thus, by solving the recursive relation, we get that ST_j = Ω(klog k). This proves the lemma.
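The recursion can be evaluated numerically with the following sketch (illustrative; it assumes the segment between tp_{j-1} and tp_j of the incompressible spiral has length roughly ⌊(j-1)/2⌋, as in the construction above):

import math

def spiral_lower_bound(k):
    # Evaluates ST_j >= ST_{j-1} + 1 + log2(|tp_j - tp_{j-1}|_P), starting from ST_4 = 4.
    st = 4.0
    for j in range(5, k + 1):
        st += 1 + math.log2(max(1, (j - 1) // 2))
    return st  # grows as Theta(k log k)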
We now give the main theorem of this section.
Let A be an algorithm that generates a path from a singleton. Then, there exists a path for which A takes Ω(klog k) time steps.
We prove the theorem by giving a path on which any algorithm that generates a path from a singleton takes Ω(klog k) time steps. We construct an incompressible path P consisting of two spirals as shown in Figure <ref>. It is easy to observe that, due to Lemma <ref>, irrespective of the starting node A will generate one of the red or blue spirals from its internal endpoint u. Then, by a similar proof to that of Lemma <ref>, we can prove that A takes Ω(klog k) time steps.
§ CYCLE-BREAKING GROWTH PROCESSES
This growth process is characterized by its ability to break any edges within a shape while maintaining its global connectivity. It enriches the class of shapes that can be constructed, by breaking connections and transferring neighboring nodes using neighbor handover. The following proposition demonstrates growing any spanning tree of a rectangular shape in a logarithmic number of time steps, starting from a singleton initial shape (|S_I|=1).
For any rectangular shape S with all adjacencies, we can construct any spanning tree of S within O(log n) time steps.
In the first phase, we use the fast rectangle growth process defined in Section <ref> to construct a rectangular shape S of size n. This operation starts from a singleton and doubles the shape until it reaches the desired size n, which consumes at most log n time steps.
In the second phase, once the rectangular shape S is constructed, we break the required edges in parallel to form the final spanning tree within a constant number of time steps c.
Therefore, the total time to construct such a shape is at most log n + c, i.e., O(log n) time steps.
Any staircase shape S can be grown within logarithmic time.
Consider a staircase shape S with dimensions l× k.
First, we choose one dimension of the staircase (either length l or height k). For simplicity, let us assume we start from a single node u and grow the length of the shape S (i.e., dimension l) until it reaches the desired length. This can be done using the full doubling operation described in <cit.>.
After that, we identify the starting node of each step in S and perform the breaking operation (see Definition <ref>). Then, we grow all of these nodes vertically in parallel until each step of the staircase S reaches its actual height k. This involves splitting the segment built in the first phase into multiple smaller segments (each representing a step of the staircase S). The specific splitting pattern can be determined based on the desired configuration of the staircase S.
By following this approach, we first construct one dimension of the shape S, namely a line segment of length l, in logarithmic time. We then efficiently add the remaining steps and grow them vertically in parallel to reach the desired height k. As a result, the overall time complexity is logarithmic in the dimensions of the staircase.
The family of orthogonally convex shapes, as defined in Proposition 1 by Connor and Michail <cit.>, is the set of shapes whose perimeter consists of four staircases and whose interior is completely filled with nodes. Any shape in this family can be generated in logarithmic time steps by following these steps:
* Step 1: Consider any orthogonally convex shape S, where the exterior of S consists of four staircases WN, NE, ES and SW, and the interior of S is fully filled with nodes.
* Step 2: Start from any two consecutive quadrants of the shape's perimeter, such as (WN, NE), (NE, ES), (ES, SW) or (SW, WN).
* Step 3: Using Lemma <ref>, grow two consecutive quadrants (i.e., two consecutive staircases) of S in their final geometry. It is important to note that each quadrant of S's perimeter is constructed in accordance with its final position, ensuring that there will be no collisions between adjacent quadrants.
* Step 4: Since the orthogonally convex shape S is fully filled with nodes, we can proceed to double the nodes in the subpart generated in Step 3. This doubling process is performed in lines until the entire shape S is obtained.
Given any orthogonally convex shape S, the above algorithm can grow S from a singleton within O(log n) time steps.
In order to construct any shape S that belongs to the orthogonally convex family, we perform the proposed procedure above.
By starting from a single node u, we can use Lemma <ref> to grow any two consecutive quadrants of S's perimeter according to their final positions in S. Assume w.l.o.g. that the two consecutive staircases are WN and NE; this consumes a number of time steps logarithmic in the length of WN plus a number of time steps logarithmic in the length of NE.
Since the orthogonally convex shape S is fully filled with nodes, we perform the final step, in which every node of the constructed WN and NE part of S doubles in lines to form the final shape S. This step takes a number of time steps logarithmic in the longest line of the remaining part, ES and SW. Therefore, the construction of any shape S in the orthogonally convex family can be completed within O(log n) time steps, where n is the total number of nodes in S.
Below is an informal description of an algorithm that provides an O(log n) time-step growth schedule for any connected shape S. The algorithm achieves this by determining an elimination order of the nodes and generating a growth schedule by reversing this order; a simplified sketch in code follows the step list below.
The algorithm consists of two sets of phases: vertical phases and horizontal phases. Given a shape S with i rows and j columns do the following:
* Let L=l_1,l_2,…,l_i-1 be the set of vertical phases.
* For each phase l_t ∈ L, where t ranges from 1 to i-1, do the following:
* Count row pairs from the bottom-most row, starting with r=1, and denote the odd row as 2r-1 and the even row as 2r.
* For every node u in an odd row 2r-1 that has a neighbour v in an even row 2r, eliminate v by contracting the edge uv towards u. Then, register the eliminated or translated nodes (i.e., if there is no neighbour, a node moves down one row) in a sequence σ to maintain their order.
* At the end of phase l_t, add all edges between nodes and move on to the next vertical phase l_t+1, counting rows from the bottom-most row again and repeating the same process.
* After completing the set of vertical phases, a horizontal line is obtained with a length equal to the horizontal dimension of the original shape (i.e., the number of columns in S).
* Apply the horizontal set of phases and repeat steps (1-3), which results in eliminating the horizontal line by successive halving.
* After completing both the vertical and horizontal sets of phases, reverse the constructed schedule σ into σ^', and return the growth schedule σ^'.
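To make the elimination-and-reverse idea concrete, the following simplified sketch (in Python) computes such a schedule for a shape given as a set of (column, row) cells. It ignores edge bookkeeping and neighbour handover, and the coordinate re-indexing after each phase is an implementation convenience rather than part of the model; it is meant only as an illustration of the halving structure of the schedule.

def halve_axis(cells, sigma, axis):
    # One full set of phases along one axis (axis=1: vertical/rows, axis=0: horizontal/columns).
    while len({cell[axis] for cell in cells}) > 1:
        nxt = set()
        for cell in cells:
            c = list(cell)
            if c[axis] % 2 == 1:  # cell lies in an "even" (to-be-removed) row/column
                target = list(cell)
                target[axis] -= 1
                kind = 'eliminate' if tuple(target) in cells else 'translate'
                sigma.append((kind, cell, tuple(target)))
                c[axis] -= 1
            c[axis] //= 2          # re-index the surviving rows/columns for the next phase
            nxt.add(tuple(c))
        cells = nxt
    return cells

def growth_schedule(shape_cells):
    sigma = []
    cells = halve_axis(set(shape_cells), sigma, axis=1)  # vertical phases
    halve_axis(cells, sigma, axis=0)                     # horizontal phases
    return list(reversed(sigma))                         # the growth schedule sigma'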
Given any connected shape S with dimensions l × k, the above algorithm can construct S from a singleton within O(log l + log k) time steps.
After executing the algorithm on the connected shape S and obtaining the growth schedule σ', the growth process involves adding nodes and edges based on the reversed order of elimination or translation represented by σ'.
By applying the schedule σ', starting from a single node, we expand the shape horizontally into j-1 columns using the doubling operation according to the schedule. After completing the horizontal growth, we proceed to the vertical growth, doubling the constructed row vertically into i-1 rows.
To analyze the time complexity of growing the shape S using this approach, let n be the total number of nodes in S, which is at most the product of the dimensions l × k. In each time step, we perform the growth operation, first horizontally and then vertically. The dimension of the horizontal growth is bounded by the number of columns l, which takes O(log l) time steps. Similarly, the vertical growth is bounded by the number of rows k, which takes O(log k) time steps. Therefore, the overall time complexity for this process is O(log k + log l).
§.§ Growth-Distance to Trees
The primary feature of the cycle-breaking growth is that it increases distances in S by introducing new nodes and breaking certain edges. As a result, any connected shape S can be stretched and converted into a spanning tree T. Converting a shape S into a spanning tree T consists of the following steps (a minimal sketch in code follows the list):
* Step 1: Consider a given spanning tree T of shape S=(V, E).
* Step 2: At the first time step t_1, apply a cycle-breaking growth on every horizontal edge e ∈ E of S that is parallel to a non-tree edge of T (i.e., the decision to break such edges depends on the spanning tree T computed in Step 1).
* Step 3: At the second time step t_2, apply a cycle-breaking growth on every vertical edge e ∈ E of S that is parallel to a non-tree edge of T. In other words, repeat Step 2 but vertically.
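A minimal sketch of this two-step schedule is given below (in Python), where edges are assumed to be represented as pairs of (x, y) node coordinates; it merely partitions the non-tree edges of S by orientation, which determines whether the corresponding cycle-breaking growth is applied at t_1 or at t_2. The representation and helper names are assumptions introduced for illustration only.

def two_step_schedule(E, T):
    # E: set of edges of S; T: set of edges of the chosen spanning tree (T is a subset of E).
    non_tree = E - T
    horizontal = {(u, v) for (u, v) in non_tree if u[1] == v[1]}  # endpoints share a row
    vertical = non_tree - horizontal
    return horizontal, vertical  # handled at time steps t_1 and t_2, respectively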
Algorithm <ref> transforms any shape S into a tree T within two time steps.
To formally prove that we can convert any shape S into a tree T within two time steps, we need to demonstrate two main properties of the output tree T: connectivity and acyclicity.
For connectivity, the given shape S is initially assumed to be connected. The computation of the spanning tree T ensures that T spans all the nodes in S, meaning there is a single path between any pair of nodes in T.
Without loss of generality, let us assume that a horizontal edge uv is not part of the spanning tree T; we break it by introducing a new node x between u' and v' on the edge u'v' that is parallel to uv. This ensures that any path between u and v in S is now routed through node x and the newly introduced edges, that is, the new path between u and v is uu', u'x, xv', v'v. Hence, after applying the cycle-breaking growth in parallel, the resulting tree T remains connected, fulfilling the connectivity property.
To prove the acyclicity of T, we consider that the computation of the spanning tree T ensures that T is a tree structure, which by definition, does not contain cycles. After that, during the cycle-breaking growth, no new cycles are introduced. Breaking an edge and growing a parallel edge does not create a cycle, as the newly introduced edges only connect existing nodes in T.
The computational complexity of Algorithm <ref> can be analyzed as follows. In the first step, a cycle-breaking growth is concurrently applied to each horizontal edge in S that is parallel to a non-tree edge of T. Subsequently, in the second step, a cycle-breaking growth is performed on every vertical edge in S that is parallel to a non-tree edge of T. Thus, Algorithm <ref> transforms any given shape S into a tree T within two time steps.
§ CONCLUSION
In conclusion, this paper has investigated the geometric properties of cycle-preserving and cycle-breaking growth processes within a centralized geometric framework. We have explored several key questions, including the class of shapes that can be constructed through these growth operations, their differences, and the possibility of transforming shapes from one family to another. As a result, we characterized some classes of shapes that can be constructed within logarithmic time steps using these growth operations. Also, we presented efficient algorithms and approaches for achieving the desired shape construction or transformation.
The results of this study open up new avenues for research and applications in the field of shape manipulation and provide valuable insights into the possibilities and limitations of growth operations. Despite the significant progress made, several open problems are worth further investigation. One open problem is the decision problem of determining whether a growth process exists that can transform an initial shape S_I to a final shape S_F within a given time-bound t. This problem has implications for reachability and can be further studied in special cases such as single-step reachability and the singleton special case of S_I. Additionally, one can extend the decision problem into the function problem, where the objective is to return a growth schedule that transforms S_I into S_F within t time steps. Furthermore, an optimization problem arises in the context of shape growth. In this problem, given an initial shape S_I, a target shape S_F, and a time-bound t, the goal is to find the fastest growth process that transforms S_I into S_F within t time steps. The objective is to minimize the time steps required for the transformation, providing an optimal solution that achieves the desired shape in the shortest time possible.
Addressing these open problems will contribute to the development of efficient algorithms and techniques for geometric shape growth.
|
http://arxiv.org/abs/2307.04196v1 | 20230709150526 | Trans-Planckian Effect in $f(R)$ Cosmology | [
"S. Cheraghchi",
"F. Shojai",
"M. H. Abbassi"
] | gr-qc | [
"gr-qc"
] | |
http://arxiv.org/abs/2307.05879v1 | 20230712025215 | Effects of quantum fluctuations of the metric on a braneworld | [
"F. C. E. Lima",
"C. A. S. Almeida"
] | gr-qc | [
"gr-qc",
"hep-th"
] |
|
http://arxiv.org/abs/2307.04927v2 | 20230710222833 | Probabilistic Counterexample Guidance for Safer Reinforcement Learning (Extended Version) | [
"Xiaotong Ji",
"Antonio Filieri"
] | cs.LG | [
"cs.LG",
"cs.LO"
] |
Department of Computing
Imperial College London
London, SW7 2AZ, UK
{xiaotong.ji16, a.filieri}@imperial.ac.uk
Probabilistic Counterexample Guidance for Safer Reinforcement Learning (Extended Version)
Xiaotong Ji Antonio Filieri
=========================================================================================
Safe exploration aims at addressing the limitations of Reinforcement Learning (RL) in safety-critical scenarios, where failures during trial-and-error learning may incur high costs. Several methods exist to incorporate external knowledge or to use proximal sensor data to limit the exploration of unsafe states. However, reducing exploration risks in unknown environments, where an agent must discover safety threats during exploration, remains challenging.
In this paper, we target the problem of safe exploration by guiding the training with counterexamples of the safety requirement. Our method abstracts both continuous and discrete state-space systems into compact abstract models representing the safety-relevant knowledge acquired by the agent during exploration. We then exploit probabilistic counterexample generation to construct minimal simulation submodels eliciting safety requirement violations, where the agent can efficiently train offline to refine its policy towards minimising the risk of safety violations during the subsequent online exploration.
We demonstrate our method’s effectiveness in reducing safety violations during online exploration in preliminary experiments by an average of 40.3% compared with QL and DQN standard algorithms and 29.1% compared with previous related work, while achieving comparable cumulative rewards with respect to unrestricted exploration and alternative approaches.
§ INTRODUCTION
A critical limitation of applying Reinforcement Learning (RL) in real-world control systems is its lack of guarantees of avoiding unsafe behaviours. At its core, RL is a trial-and-error process, where the learning agent explores the decision space and receives rewards for the outcome of its decisions. However, in safety-critical scenarios, failing trials may result in high costs or unsafe situations and should be avoided as much as possible.
Several learning methods try to incorporate the advantages of model-driven and data-driven methods to encourage safety during learning <cit.>. One natural approach for encouraging safer learning is to analyse the kinematic model of the learning system with specific safety requirements and to design safe exploration <cit.> or safe optimisation <cit.> strategies that avoid unsafe states or minimise the expected occurrence of unsafe events during training. However, this approach is not applicable for most control systems with partially-known or unknown dynamics, where not enough information is available to characterise unsafe states or events a priori.
To increase the safety of learning in environments with (entirely or partially) unknown dynamics, we propose an online-offline learning scheme where online execution traces collected during exploration are used to construct an abstract representation of the visited state-action space. If during exploration the agent violates a safety requirement with unacceptable frequency, a probabilistic model checker is used to produce from the abstract representation minimal counterexample sub-models, i.e., a minimal subset of the abstract state-action space within which the agent is expected to violate its safety requirement with a probability larger than tolerable. These counterexamples are then used to synthesise small-size offline simulation environments within which the agent's policy can be conveniently reinforced to reduce the probability of reiterating safety violating behaviors during subsequent online exploration. As new evidence from online exploration is gathered the abstract representation is incrementally updated and additional offline phases can be enforced when necessary, until an acceptable safety exploration rate is achieved. Overall, our strategy aims at migrating most trial-and-error risks to the offline training phases, while discouraging the repeated exploration of risky behaviours during online learning. As new evidence is collected during online exploration, the abstract representation is incrementally updated and the current value the agent expect from each action is used to prioritise the synthesis of more relevant counterexample-guided simulations.
Our main conceptual contribution in this paper is the use of probabilistic counterexamples to automatically synthesise small-scale simulation submodels where the agent can refine its policy to reduce the risk of violating a safety requirement during learning.
In particular, we 1) propose a conservative geometric abstraction model representing safety-relevant experience collected by the agent at any time during online exploration, with theoretical convergence and accuracy guarantees, suitable for the representation of both discrete and continuous state spaces and finite action spaces, 2) adapt minimal label set probabilistic counterexample generation <cit.> to generate small-scale submodels for the synthesis of offline agent training environments aimed at reducing the likelihood of violating safety requirements during online exploration, and 3) provide a preliminary evaluation of our method enhancing Q-Learning <cit.> and DQN <cit.> agents on problems from the literature and the OpenAI Gym, demonstrating how it achieves comparable cumulative rewards while increasing the exploration safety rate by an average of 40.3% compared with QL/DQN, and of 29.1% compared with previous related work <cit.>.
§ BACKGROUND
§.§ Problem Framework
Markov Decision Process (MDP).
An MDP <cit.> is a tuple (S, A, s_0, P, R, L), where S is a set of states, A is a finite set of actions, s_0 is the initial state, P: S × A × S → [0, 1] is the probability of transitioning from a state s ∈ S to s'∈ S with action a ∈ A, R: S × A →ℝ is a reward function and L: S → 2^AP is a labelling function that assigns atomic propositions (AP) to each state.
A state in S is typically represented by a vector of finite length n_S ≥ 1. A state space is discrete if the elements of the vector are countable, where we assume S ⊆ ℤ^n_S, continuous if S ⊆ ℝ^n_S, or hybrid if some elements are discrete and others are continuous. When possible, we omit the cardinality n_S for readability.
Trace.
A finite trace (also called path or trajectory) through an MDP is a sequence σ = s_0, a_0, s_1, a_1, … s_i, a_i, … s_n, where s_0 is the initial state and P(s_i, a_i, s_i+1) > 0.
Policy.
A (deterministic) policy π: S → A selects in every state s the action a to be taken by the agent.
Q-learning (QL) <cit.> is a reinforcement learning algorithm where an agent aims at finding an optimal policy π^* for an MDP that maximises the expected cumulative reward. Given a learning rate α∈ (0, 1] and a discount factor γ∈ (0, 1], such that rewards received after n transitions are discounted by the factor γ^n, the agent learns a value function Q based on the following update rule:
Q_t(s, a) = (1 - α)Q_t-1(s, a) + α(R(s, a) + γmax_a' ∈ A Q_t-1(s', a'))
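As a concrete illustration, the update rule above corresponds to the following tabular sketch (in Python), which also includes the ϵ-greedy action selection discussed below; the hyper-parameter values and the state/action encoding are illustrative assumptions, not the configuration used in our experiments.

import random
from collections import defaultdict

Q = defaultdict(float)  # Q[(s, a)] -> estimated value; unseen pairs default to 0

def q_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    # Q_t(s,a) = (1 - alpha) * Q_{t-1}(s,a) + alpha * (r + gamma * max_a' Q_{t-1}(s',a'))
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_next)

def epsilon_greedy(s, actions, eps=0.1):
    if random.random() < eps:
        return random.choice(actions)                 # explore
    return max(actions, key=lambda a: Q[(s, a)])      # exploit the current estimate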
The optimal Q-function Q^* satisfies the Bellman optimality equation:
Q^*(s, a) = 𝔼[R(s, a) + γ max_a' ∈ A Q^*(s', a') | s, a]
For finite-state and finite-action spaces, QL converges to an optimal policy as long as every state-action pair is visited infinitely often <cit.>, but it is not suitable for learning in continuous state spaces. For continuous state spaces instead, the Deep Q-Learning method <cit.> parameterises Q-values with weights θ as a Q-network and the learning process is adapted to minimising a sequence of loss functions L_i at each iteration i (cf. Algorithm 1 in <cit.>):
L_i(θ_i) = 𝔼[(y_i - Q(s, a; θ_i))^2],
where y_i = 𝔼[R(s, a) + γ max_a' ∈ A Q(s', a'; θ_i-1) | s, a].
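A minimal sketch of this loss computation is given below (in Python/PyTorch), using a frozen target network as a stand-in for the previous parameters θ_i-1; the network architecture, batch format, and hyper-parameters are assumptions for illustration and do not reflect the DQN implementation used in our experiments.

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    # batch: tensors (s, a, r, s_next, done) with a of dtype long and done in {0., 1.}
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a; theta_i)
    with torch.no_grad():
        # y_i = r + gamma * max_a' Q(s', a'; theta_{i-1}); terminal states bootstrap to 0
        y = r + gamma * target_net(s_next).max(dim=1).values * (1.0 - done)
    return F.mse_loss(q_sa, y)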
During learning, the agent selects the next action among those available in the current state at random with probability ϵ_QL > 0, while with probability 1 - ϵ_QL it selects an action a maximising the current estimate Q(s, a).
Optimal Policy.
An optimal policy π^*, in the context of Q-learning and DQN, is given by π^*(s) = argmax_a ∈ A Q^*(s, a), ∀ s ∈ S.
§.§ Probabilistic Model Checking and Counterexamples
Probabilistic model checking is an automated verification method that, given a stochastic model – an MDP in our case – and a property expressed in a suitable probabilistic temporal logic, can verify whether the model complies with the property or not <cit.>.
In this work, we use Probabilistic Computational Temporal Logic (PCTL) <cit.> to specify probabilistic requirements for the safety of the agent. The syntax of PCTL is recursively defined as:
Φ := true |α|Φ∧Φ|Φ| P_⋈ pφ
φ := X Φ|Φ U Φ
A PCTL property is defined by a state formula Φ, whose satisfaction can be determined in each state of the model. true is a tautology satisfied in every state, α∈ AP is satisfied in any state whose labels include (α∈ L(s)), and ∧ and are the Boolean conjunction and negation operators. The modal operator P_⋈ pφ, with ⋈∈{<, ≤, ≥, >} and p ∈ [0,1], holds in a state s if the cumulative probability of all the paths originating in s and satisfying the path formula φ is ⋈ p under any possible policy. The Next operator X Φ is satisfied by any path originating in s such that the next state satisfies Φ. The Until operator Φ_1 U Φ_2 is satisfied by any path originating in s such that a state s^' satisfying Φ_2 is eventually encountered along the path, and all the states between s and s^' (if any) satisfy Φ_1. The formula true U Φ is commonly abbreviated as F Φ and satisfied by any path that eventually reaches a state satisfying Φ. A model M satisfies a PCTL property Φ if Φ holds in the initial state s_0 of M <cit.>.
PCTL allows specifying a variety of safety requirements. For simplicity, in this work, we focus on safety requirements specified as upper-bounds on the probability of eventually reaching a state labelled as :
Safety Requirement. Given a threshold λ ∈ (0,1], the safety requirement for a learning agent is formalised by the PCTL property P_≤λ [F unsafe], i.e., the maximum probability of reaching a state s ∈ S such that unsafe ∈ L(s) must be less than or equal to λ.
Counterexamples in Probabilistic Model Checking.
A counterexample is a minimal possible sub-model M_cex = (S_cex, A_cex, s_0, P_cex) derived from the model M, where S_cex and A_cex are subsets of S and A, respectively, containing violating behaviours of a PCTL property from the initial state s_0 of M.
When a model M does not satisfy a PCTL property, a counterexample can be computed as evidence of the violation <cit.>. In this work, we adapt the minimal critical label set counterexample generation method of <cit.>.
The computation of a minimal possible sub-model requires the solution of a mixed-integer linear optimisation problem that selects the smallest number of transitions from the state-action space of the original model that allow the construction of violations.
An extensive description of the counterexample generation algorithm, including a heuristic to bias the counterexample generation towards including actions that a tabular Q-learning agent is more likely to select is included in Appendix <ref>.
Generating multiple Counterexamples. For an MDP violating the safety requirement, there can exist, in general, multiple counterexamples (both with minimal or non-minimal sizes), each potentially highlighting different policies that lead to requirement violations <cit.>.
In this work, we use counterexamples to guide the generation of offline training environments where the agent learns to reduce the value of actions that may eventually lead to the violation of safety requirements. We therefore aim at generating multiple, diverse counterexamples (if they exist), while keeping each of them at a small size for faster training. Given a counterexample, a different one can be obtained by adding a blocking clause to the minimisation problem, i.e., forcing the optimiser to exclude one or more previously selected action pairs (by imposing the corresponding selector variables x_ℓ=0 in the optimisation problem of Appendix <ref>). Hence, we can systematically add (an increasing number of) blocking clauses to obtain multiple diverse counterexamples that jointly provide a more comprehensive representation of the different violating behaviors the agent explored at any time.
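As an illustration of this mechanism, the sketch below (in Python) assumes the minimisation problem has been encoded with gurobipy and that x maps each state-action label ℓ to its binary selector variable; it blocks a few labels of the previous counterexample and re-solves to obtain a different one. The variable and model names are assumptions, not the exact encoding of the minimal-label-set method.

import random
import gurobipy as gp

def next_counterexample(model, x, previous_selection, k=1):
    # Add blocking clauses: exclude k randomly chosen labels of the previous counterexample.
    for l in random.sample(previous_selection, k):
        model.addConstr(x[l] == 0, name=f"block_{l}")
    model.optimize()
    if model.status != gp.GRB.OPTIMAL:
        return None  # no further counterexample under the current blocking clauses
    return [l for l in x if x[l].X > 0.5]  # labels selected in the new counterexample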
§ COUNTEREXAMPLE-GUIDED REINFORCEMENT LEARNING
We assume that the agent does not have prior knowledge about the environment. In particular, it will only discover unsafe states upon visiting them during exploration. During the learning process the agent will iteratively interact with either the actual environment (online exploration) or with a counterexample-guided offline simulation.
The online phases aim at exploring the actual environment and improving the agent's policy expected reward, while acquiring information to build and continuously refine an abstract, compact representation of the control problem. The offline phases expose the agent to simulated, small-size, environments, within which the agent can revise its policy to penalise decisions that may lead to safety violations during the subsequent online phases.
In the remainder of the section, we first show how to construct and update an abstract finite MDP that compactly represents safety-relevant aspects of the (parts of the) environment explored by the agent at any time (sec. <ref>).
Then, in sec. <ref>, we introduce the main learning algorithm with the online-offline alternation scheme, and discuss the main challenges of the offline learning phases.
§.§ Safety-relevant State-space Abstraction
For simplicity, let us assume the agent has no information about the topology of the state space S at the beginning of the exploration. Each online interaction with the environment (episode) can be described by the trace of the states visited by the agent and the actions it took. Besides the reward associated with each state-action pair, we assume states in the trace can be assigned a set of labels. Labels represent properties of the state specific to the learning problem, e.g., that a goal has been reached. We assume a special label, unsafe, marks the occurrence of unsafe situations the agent should aim to avoid. W.l.o.g., we assume an episode terminates when the agent enters an unsafe state.
We will refer to the states of the online environment as concrete states.
In this section, we propose an abstraction procedure to construct a finite, abstract MDP that retains sufficient information about the explored concrete environment to enable the synthesis of abstract counterexamples to the safety requirement. Each counterexample will therefore be an abstract representative of a set of possible safety violating behaviors that can happen in the concrete environment. To maintain the size of the abstract MDP tractable – especially in the presence of continuous concrete state spaces – the abstraction will retain only (approximate) safety-relevant information.
We assume that any state not labeled as unsafe is safe to explore and that the unsafe label is time-invariant. Furthermore, the abstraction must preserve at any time a safety invariant: every explored unsafe concrete state should be mapped to an unsafe abstract state. The finite abstract state space must therefore separate safe and unsafe regions of the concrete space, with only safe concrete states possibly misclassified as unsafe but not vice versa.
Inspired by the idea of casting the learning of geometric concepts as a set cover problem in <cit.>, we frame the separation task as a minimal red-blue set cover problem to abstract the explored concrete state space as a finite set of disjoint boxes or polyhedra, each expressed as a set of logical constraints and defined as the intersection of a finite set of hyperplanes.
To formalise the construction of the abstract state-space, we first introduce the notion of coverage of a concrete state by a polyhedra predicate.
Coverage of a polyhedra predicate
Let S̅ ⊆ S = {s_0, s_1, …, s_n} be the set of all explored concrete states; a particular state s ∈ S̅ is covered by a (polyhedral) predicate C_i if s ∈ C_i, where C_i = {s | ωs + b ≤ 0}, in which ω represents the vector of slopes and b represents the vector of biases corresponding to the half-spaces enclosing the predicate.
The general affine form of the predicate C_i accounts for a variety of common numerical abstract domains, including, e.g., boxes (intervals), octagons, zonotopes, or polyhedra. In this work, we fix ω=1, i.e., restrict to hyper-boxes. We allow the user to specify a minimum size d > 0, which ensures that no dimension of the box will be reduced to a length smaller than d. This coarsening of the abstract domain struck a convenient trade-off between computational cost and accuracy of the abstraction in our preliminary experiments (see Appendix <ref> for additional discussion); the restriction can be lifted for applications requiring different trade-offs <cit.>.
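A minimal sketch of such a box predicate with the minimum side length d is shown below (in Python); the representation is an assumption used for illustration.

import numpy as np

class Box:
    def __init__(self, low, high, d=0.0):
        low, high = np.asarray(low, float), np.asarray(high, float)
        # enforce the minimum size d on every dimension
        too_small = (high - low) < d
        centre = (low + high) / 2.0
        self.low = np.where(too_small, centre - d / 2.0, low)
        self.high = np.where(too_small, centre + d / 2.0, high)

    def covers(self, s):
        s = np.asarray(s, float)
        return bool(np.all(self.low <= s) and np.all(s <= self.high))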
The identification of a finite set of predicates that allow separating the concrete state space preserving the safety invariant can thus be reduced to the following:
Minimal Red-Blue Set Cover Problem.
Let S̅ ⊆ S = {s_0, s_1, …, s_n} be the set of all explored concrete states and U = {u_0, u_1, …, u_m} be the set of explored states assigned the unsafe label; find the minimal set C = ∪_i C_i s.t. every element u ∈ U is covered by some predicate C_i, with an overall false positive rate fpr ≤ f ∈ (0,1] for safe concrete states (s ∈ S̅ ∖ U) covered by C.
In general, f cannot be zero, since the concrete state space, whether discrete or continuous, may not be perfectly partitioned by a finite set of polyhedra predicates, with smaller values of f possibly resulting in a larger number of predicates |C|.
To solve this optimisation problem, we employ a branch and bound method <cit.> to systematically enumerate possible combinations of predicates. The solution set guarantees all the unsafe concrete states are covered by a Boolean combination of predicates in C, while safe concrete states may also be covered by some predicate C_i, with a prescribed maximum tolerable rate f.
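For illustration, the sketch below (in Python) implements a greedy heuristic for this cover problem: every explored unsafe state is first wrapped in a minimum-size box, and pairs of boxes are then merged whenever the merged bounding box keeps the false-positive rate on safe states within the budget f. This is only a simplified stand-in for the branch-and-bound search; the data representation is an assumption.

import numpy as np

def bounding_box(b1, b2):
    return (np.minimum(b1[0], b2[0]), np.maximum(b1[1], b2[1]))

def fpr(boxes, safe_states):
    covered = sum(
        any(np.all(lo <= s) and np.all(s <= hi) for lo, hi in boxes)
        for s in safe_states)
    return covered / max(len(safe_states), 1)

def greedy_cover(unsafe_states, safe_states, d=0.01, f=0.05):
    # start from one minimum-size box per unsafe state (all unsafe states stay covered)
    boxes = [(s - d / 2.0, s + d / 2.0) for s in map(np.asarray, unsafe_states)]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                cand = boxes[:i] + boxes[i + 1:j] + boxes[j + 1:] \
                       + [bounding_box(boxes[i], boxes[j])]
                if fpr(cand, safe_states) <= f:
                    boxes, merged = cand, True
                    break
            if merged:
                break
    return boxes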
Safety-relevant Abstraction MDP.
A safety-relevant abstraction MDP M_a is a tuple (S_a, A_a, s_a0, R_a, P_a, L_a), where S_a is the abstract state space, which is the partition of the concrete state space S induced by the boundaries of C_i ∈ C from the solution of the minimal set cover above, A_a is the set of applicable actions, s_a0 is the initial abstract state, P_a: S_a × A_a × S_a → [0,1] is a probability transition function, R_a is the abstract reward function (which will be defined later), and L_a: S_a → {safe, unsafe} is a labelling function.
M_a is constructed from the concrete traces collected during online learning, with the satisfaction of the predicates C_i determining the abstraction of concrete states and the abstract transition function estimated from the frequencies observed in the concrete traces. The abstraction must preserve the safety invariant, therefore it may overapproximate explored unsafe regions, but not underapproximate them. Initially, the entire state space is assumed safe to explore. As new traces are collected during online exploration, the abstract model is incrementally updated: when a concrete state is found to be unsafe, the hyperbox containing its numerical vector representation is split to correct the classification (after the concrete state is wrapped in a hyperbox of minimal size d, if d > 0).
The incremental branch-and-bound refinement of the abstraction could lead to excessive fragmentation of the abstract state space, making it intractably large, particularly for the purpose of counterexample generation. To mitigate this issue, we merge adjacent states, i.e., abstract states sharing at least one separating hyperplane, into a single abstract state, adapting the general notion of probabilistic approximate ϵ-simulation <cit.> as in the following definition:
Adjacent ϵ-simulation:
Let S_l be the partitions of S_a induced by the equivalence relation s ∼ s' iff L_a(s)=L_a(s'). Then, for a given ϵ∈ [0,1], two adjacent states s ∈ S_a and s' ∈ S_a are ϵ-similar if (∃ s_l ∈ S_l) (s ∈ s_l, s' ∈ s_l) and (∀ s_l ∈ S_l) (|P_a(s, a, s_l) - P_a(s', a, s_l)| ≤ϵ, ∀ a ∈ A_a).
The ϵ-simulation in def. <ref> induces a hierarchical merging scheme. Let level l_0 contain the initial abstract states, which partition the explored concrete state space into a finite set of boxes – one per abstract state – which are labeled as either safe or unsafe. ϵ-similar adjacent states from level l_i are merged into a single state at level l_i+1, until no further merge is possible. Besides reducing the number of abstract states, and in turn the cost of generating counterexamples, this hierarchical merging scheme brings the indirect benefit of more aggressively merging abstract states corresponding to safe regions of the concrete state space, while preserving a finer-grained approximation of concrete state space regions in proximity of explored unsafe states, as discussed in Appendix <ref>.
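A sketch of the resulting merging step is given below (in Python); P_a is assumed to be available as a map from (state, action, target label class) to probability, and the adjacency test, labelling, and greedy grouping are simplified assumptions for one level of the merging hierarchy.

def eps_similar(s, s_prime, actions, classes, P_a, label, adjacent, eps):
    # Def. above: adjacent, same label, and transition probabilities into every
    # label class S_l differ by at most eps for every action.
    if not adjacent(s, s_prime) or label(s) != label(s_prime):
        return False
    return all(
        abs(P_a.get((s, a, c), 0.0) - P_a.get((s_prime, a, c), 0.0)) <= eps
        for a in actions for c in classes)

def merge_level(states, **kw):
    # One level of the hierarchical merging: greedily group eps-similar adjacent states.
    merged, used = [], set()
    for s in states:
        if s in used:
            continue
        group = [s]
        for t in states:
            if t not in used and t is not s and eps_similar(s, t, **kw):
                group.append(t)
                used.add(t)
        used.add(s)
        merged.append(tuple(group))
    return merged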
Counterexample-guided Simulation. If a probabilistic safety requirement can be violated, one or more counterexamples M_cex can be generated from the abstract model M_a, where each counterexample includes a (near-)minimal subset of the abstract state-action space. We then use each counterexample as a guide to build an offline, simulation environment where the agent can update its Q-values towards avoiding eventually reaching an unsafe state.
Starting from the initial concrete state s_0, the abstract state s_cex in M_cex corresponding to the current concrete state is computed. By construction, each counterexample selects one action a from state s_cex. The abstract transition is randomly simulated according to the transition function P_cex from (s_cex, a) and an abstract destination state s_cex^' is identified. Such abstract state is concretised by sampling from the past concrete traces a transition (s, a, s^') where s ∈ s_cex and s^'∈ s^'_cex. If s^'_cex is an unsafe state, a penalty (negative reward; the impact of its magnitude is further discussed in Appendix tab. <ref>) is given to the agent, which has the transitive effect of re-weighting also the Q-value of the actions that led the agent to the current state. The simulation traces can be used by both Q-learning and DQN agents. The simulation terminates when an unsafe state is reached (penalty) or when we fail to concretise an abstract transition, which may happen when concrete safe states are misclassified as unsafe in the abstraction, but there is no actual transition to unsafe states from them. In the latter case, the simulation trace is discarded (no reward). While every simulation within the counterexample is designed to eventually reach an unsafe state with probability 1 by construction <cit.>, to avoid excessive length of a simulation, it can be practical to set an arbitrary, large bound on the maximum number of steps per run as additional termination criterion.
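The sketch below (in Python) illustrates one such offline simulation episode; the interfaces of the counterexample sub-model (abstract, action_of, sample_successor, is_unsafe), of the stored traces, and of the agent update are assumptions introduced for illustration, and the penalty magnitude and step bound are arbitrary.

def simulate_episode(agent, s0, M_cex, concrete_traces, penalty=-10.0, max_steps=200):
    s = s0
    for _ in range(max_steps):
        s_cex = M_cex.abstract(s)                      # abstract state of the current concrete state
        a = M_cex.action_of(s_cex)                     # the single action M_cex selects in s_cex
        s_cex_next = M_cex.sample_successor(s_cex, a)  # sample the abstract transition from P_cex
        # replay a stored concrete transition consistent with the abstract step
        step = concrete_traces.sample(s_cex, a, s_cex_next)
        if step is None:
            return                                     # cannot concretise: discard the run
        s_conc, a_conc, s_next = step
        r = penalty if M_cex.is_unsafe(s_cex_next) else 0.0
        agent.update(s_conc, a_conc, r, s_next)        # QL/DQN update on the replayed step
        if M_cex.is_unsafe(s_cex_next):
            return                                     # unsafe state reached: episode ends with the penalty
        s = s_next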
Multiple counterexamples, and corresponding simulations, can be generated up to a maximum simulation budget allowed by the user, adding blocking clauses in random order as described previously. Each counterexample is typically of small-size, which results in short simulation traces, thus reducing the overall cost of each offline learning experience.
§.§ Online-Offline Learning with Counterexample Guidance
Algorithm <ref> summarises the main steps of our online-offline learning method with counterexample guidance. We initially assume no knowledge about the environment is given to the agent: both the abstract model M_a and the set of explored paths D are empty (line <ref>). If prior knowledge was available, either in the form of an initial abstraction or of previously explored paths, M_a and D can be initialised accordingly.
Online learning. The procedure (line <ref>) lets the agent operate in the concrete environment with either tabular Q-Learning in discrete state space or DQN in continuous state space. We augment the exploration with a sequential Bayesian hypothesis testing (line <ref>) that monitors the frequency of violations of the safety requirement, by incrementally updating after each online episode a Beta distribution that estimates the probability of violation <cit.>. If the odds of such probability exceeding λ is larger than a prescribed Bayes factor β, the online learning phase is interrupted. The updated Q-values/Q-network of the agent are stored and the set of explored traces D is updated (line <ref>).
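The sequential test can be sketched as follows (in Python), with a Beta posterior over the per-episode violation probability; the prior pseudo-counts, the Bayes factor, and the minimum sample size are illustrative values.

from scipy.stats import beta

class SafetyMonitor:
    def __init__(self, lam, bayes_factor=20.0, a0=1.0, b0=1.0, min_samples=50):
        self.lam, self.bayes_factor, self.min_samples = lam, bayes_factor, min_samples
        self.a0, self.b0 = a0, b0
        self.a, self.b = a0, b0              # Beta posterior pseudo-counts

    def observe(self, violated):
        if violated:
            self.a += 1
        else:
            self.b += 1

    def trigger_offline(self):
        n = (self.a - self.a0) + (self.b - self.b0)
        if n < self.min_samples:
            return False
        p_exceeds = 1.0 - beta.cdf(self.lam, self.a, self.b)  # P(violation probability > lambda)
        odds = p_exceeds / max(1.0 - p_exceeds, 1e-12)
        return odds > self.bayes_factor

    def reset(self):
        self.a, self.b = self.a0, self.b0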
Offline learning. If the online learning phases has been interrupted because by the Bayesian test (line <ref>), an offline learning phase is triggered to reinforce the avoidance of discovered unsafe behaviors in future online exploration.
First, the abstraction M_a is updated with the current set of online traces D (line <ref>) to support the generation of current counterexamples. While there is theoretically a finite number of counterexample submodels <cit.>, enumerating all of them for a large M_a could be computationally too expensive; instead, up to a maximum number N_cex of counterexamples is generated at each iteration. The addition of random blocking clauses (as described in sec. <ref>) will increase the diversity of the counterexamples within and across different offline learning phases.
The offline simulation traces synthesised from each M_cex (line <ref>) as described in the previous section are used by the agent to update its Q-values/Q-network (line <ref>), thus penalising the selection of eventually unsafe actions before the next online learning phase begins. Notice that the Bayesian hypothesis test is re-initialised before the next online learning phase (line <ref>) since the agent is expected to behave differently after an offline learning phase.
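Putting the pieces together, the overall alternation can be sketched as follows (in Python); every component name is an assumed interface around the elements introduced above, not the exact structure of Algorithm <ref>.

def learn(env, agent, monitor, abstraction, trace_store, n_episodes, n_cex=10, sims_per_cex=20):
    for _ in range(n_episodes):
        trace, violated = run_online_episode(env, agent)   # standard QL/DQN episode
        trace_store.add(trace)
        monitor.observe(violated)
        if monitor.trigger_offline():
            abstraction.update(trace_store)                 # refine the abstract MDP M_a
            for m_cex in abstraction.counterexamples(max_n=n_cex):
                for _ in range(sims_per_cex):
                    simulate_episode(agent, env.initial_state(), m_cex, trace_store)
            monitor.reset()                                 # restart the Bayesian test
    return agent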
Discussion.
The interleaving of offline and online learning phases aims at reducing the frequency of unsafe events during the exploration of the environment. This goal is pursued by synthesising simulation environments from counterexamples to the safety requirement computed from an abstraction of the state space explored online. Notably, the offline phases never preclude the exploration of an action during the following online phases, rather they reduce the likelihood of selecting actions that may eventually lead to reaching unsafe states by lowering their Q-values. Due to space limitations, we report here the two main results related to the convergence of our abstraction method and of the online-offline learning process, and defer a more extensive discussion to Appendix <ref>.
Counterexample guidance relies on the abstraction constructed from online exploration phases, which classifies every region of the explored state space as either safe or unsafe. While by construction the abstraction preserves the safety invariant (every explored concrete unsafe state is mapped to an abstract unsafe state), the quality of the offline guidance relies also on controlling the misclassification error of safe concrete regions, which may unduly penalise the exploration of safe states.
The maximum misclassification error of a concrete safe state into an abstract unsafe state can eventually be reduced below an arbitrary bound 0 < u̅≤ 1 with probability at least 1-δ (0 < δ <1) throughout the exploration.
Further empirical analysis of the convergence of the abstraction and the impact of the abstraction parameters is provided in Appendix <ref>.
Finally, the following proposition states that the introduction of counterexample guidance does not preclude the convergence of the overall learning process to a maximal reward policy that satisfies the safety requirement, if such policy exists.
If there exist maximal-reward policies that satisfy the safety requirement, then the online-offline learning process eventually converges to one of them.
Further discussion of the convergence properties of the offline-online learning process is included in Appendix <ref>, including an elaboration on the validity of the two propositions above.
In the next section, we instead report on our preliminary experimental evaluation of the performance of our counterexample-guided learning process.
§ EVALUATION
In this section, we present a preliminary experimental evaluation of the performance of our method from two perspectives: 1) the improvement in the exploration safety rate, and 2) the impact on the cumulative reward achieved by the agent. Finally, we briefly discuss the overhead of counterexample guidance and make some observations on the policies it synthesises. Additional experimental results and discussion, including on abstraction effectiveness and sensitivity to hyperparameters can be found in Appendix <ref>.
Environments.
We consider four environments: a discrete environment from the implementation of <cit.>, the slippery FrozenLake from OpenAI Gym <cit.>, a continuous variant of the first environment, where we change the state space from discrete to continuous while keeping the same layout as in <cit.>, and the MarsRover environment of <cit.> (in particular, the exploration of the melas chasma in the Coprates quadrangle <cit.>). In all the environments, the agent decides a direction of movement among the four cardinal directions. We define the objective of the agent as finding a policy with maximum Q-value while keeping the probability of unsafe behaviours during exploration below the tolerable threshold λ. Specifically, in FrozenLake, the agent aims to find a walkable path in an 8x8 grid environment with slippery actions while avoiding entering states labelled with H. With the slippery setting, the agent moves in the intended direction with a probability of only 1/3, and otherwise moves in one of the two perpendicular directions, each with probability 1/3. In the discrete environment from <cit.> and its continuous variant, the agent aims to reach a first set of goal-labelled states and then a second set of goal-labelled states in a fixed order, while avoiding entering any unsafe-labelled states along the path, with a 15% probability of moving in a random direction at every action. In the continuous variant, the distance covered in a move is also randomly sampled from a Gaussian 𝒩(2,0.5); thus, choosing the same direction from a state may reach different states. In the MarsRover environment, the agent aims to find one of the target states while avoiding the unsafe regions, which in this scenario cannot be perfectly abstracted by boxes or other affine abstract domains. Following <cit.>, the distance covered in each move is sampled uniformly in the range (0, 10), with the addition of further uniform noise from 𝒰(-0.1, 0.5).
Baselines.
We compare the learning performance of our method with classical Q-Learning and DQN <cit.> as the baseline for discrete and continuous MDPs, respectively. For discrete MDPs, we further compare our method with <cit.> (referred to as QL-LCRL in the following), using the same set of hyper-parameters as provided in their implementation and the associated tool paper <cit.>. Given an automaton corresponding to an LTL property, QL-LCRL guides the agent's exploration of an initially unknown MDP by reshaping the reward function on-the-fly to encourage the exploration of behaviours that satisfy such a property. In this application, QL-LCRL encourages safe exploration by discouraging reaching unsafe states.
Implementation and parameters. We implemented a standard tabular Q-learning in Python and used the DQN implementation from OpenAI Gym Baselines <cit.>. We parameterise the learning process with a learning rate α and a discount factor γ for the Q-value/Q-network updates, and the abstraction with ϵ for the adjacent ϵ-simulation. The agents move within Cartesian planes of sizes |S| and the minimisation of the abstract models reduces the state space to |S_a|, where the minimum size of each box is set to 1 and 0.01 for the discrete and the continuous environments, respectively. We set the safety specification parameter λ according to the intrinsic uncertainty in the respective environments. A summary of the parameters used for each environment is reported in the left side of tab. <ref> (additional parameters are discussed in Appendix tab. <ref> to ease reproducibility). We require at least 50 samples to be collected by the Bayesian hypothesis test before it can trigger offline training, to reduce false positive triggers.
Experimental Results.
Fig. <ref> shows the accumulated safety rates (bottom) and the rolling average of accumulated rewards, indicating the real-time learning performance of the agent. The line is the average across 10 runs, while the shaded region around it is the standard deviation. We do not provide any prior information to the agent.
The dashed vertical lines in the figure indicate the average episode number where an offline learning phase in QL/DQN-CEX (our method) is triggered, and the solid horizontal line indicates the target safety rate in corresponding safety specifications. The cumulative rewards of different methods converge to similar values, demonstrating that the performance under guidance (QL-LCRL and Q-CEX) achieves cumulative rewards comparable to the baseline QL and DQN method.
As expected, providing additional guidance to discourage actions that may lead to reaching unsafe states with QL/DQN-CEX or QL-LCRL improves the safety rate of the exploration, with QL/DQN-CEX achieving on average higher safety rates.
In , the safety rate of online learning in our method exceeds the threshold much faster than other methods. This is due to the rapid convergence of the abstraction thanks to the grid layout that can be accurately and efficiently abstracted (and minimised). In turn, no further offline phases were required after the first 1000 episodes.
In , more online exploration is required to support comprehensive counterexample guidance, partly due to the high uncertainty in the outcome of the actions. Another phenomenon due to high uncertainty is that the agent takes a longer time to reach a stable performance, therefore the offline learning phase is triggered more frequently compared with other environments and also more episodes were required to stably satisfy the safety requirement.
In the two continuous environments, while the baseline eventually achieves a marginally higher cumulative reward in MarsRover, DQN-CEX achieves a higher safety rate – the number of failures experienced by the agent is inversely proportional to the integral of the safety rate curve, which results in significantly fewer failure events. The frequency of offline training phases also decreases over time. This is not surprising, since a safer exploration can be less speculative and slower in discovering the optimal policy. QL-LCRL is not applicable to the continuous environments. Although LCRL <cit.> applies an NFQ-based method for continuous MDPs, NFQ-LCRL trains the agent in a completely offline manner based on randomly sampled data, thus it is not suitable for comparison with our method from the perspective of safe exploration.
Overhead. Counterexample generation requires solving a MILP problem on the abstract state space. On a Macbook Air with M1 CPU and 8Gb of memory, the average±stdev time to solve the optimization problems was 0.71±0.98s, 0.08±0.06s, 2.31±3.94s, and 3.87±4.07s for the four environments, respectively. We used Gurobi Optimiser v9.1.0 <cit.> off-the-shelf. The numbers of counterexamples generated for each offline learning phase were, on average: 11, 6, 19, and 21, respectively. Notice that both counterexample generation and the simulations can be parallelised. While the cost of solving the MILP may be significant, we notice that offline learning is triggered only when the recent online exploration resulted in an unacceptable failure rate. As shown in fig. <ref>, thanks to counterexample guidance, QL/DQN-CEX can achieve the required safety exploration rate much faster, which helps amortise the initial MILP solution cost. Finally, the optimisation problem might be relaxed to sacrifice the optimality of the solution (i.e., the size of the counterexamples) for computation time.
§ RELATED WORK
Safe Exploration.
<cit.> provide surveys and taxonomies of recent safe reinforcement learning methods with different emphases. Most existing safe exploration methods assume prior knowledge about the environment or the agent's dynamics <cit.>, known safety constraints <cit.>, or utilise expert guidance <cit.> to provide guarantees of safety constraint satisfaction during exploration. A different class of methods <cit.> utilises surrogate Gaussian process models to characterise the unknown dynamics and optimise the unknown function with a prior model structure. There are only a few methods <cit.> tackling safe exploration without a known model structure. <cit.> focuses on decomposing continuous, unknown MDPs into sub-tasks using an online RL framework under LTL specifications, with less emphasis on the safety rate but blocking actions that are unsafe according to the specification, and <cit.> trains a safety layer/critic used for filtering out probabilistically unsafe actions with an offline dataset. While motivated by the same idea of safer exploration without prior knowledge, our method can be initialised with no or any amount of previously explored paths, while converging to cumulative rewards comparable to or better than those of the baseline methods.
Offline RL.
Offline RL can be seen as a data-driven formulation of RL, where the agent collects transitions using the behaviour policy instead of interacting with the environment <cit.>. The biggest challenge in offline RL is the bootstrapping error: the Q-value is evaluated with little or no prior knowledge and propagated through the Bellman equation <cit.>. <cit.> regularise the learned policy towards the behaviour policy during optimisation to address this issue. <cit.> alternatively update the Q-values in more conservative ways to learn a lower bound of the Q function using uncertainty estimated with the sampled information. From the safe RL perspective, <cit.> optimises a risk-averse criterion using data previously collected by a safe policy offline, and <cit.> learns a constrained policy maximizing the long-term reward based on offline data, without interaction in the concrete environment, using a constrained penalised Q-Learning method. These methods share our motivation of combining offline learning with risk-averse RL, but focus on the continuous control setting instead. Besides utilising offline learning to reduce unsafe online exploration, we alternate online and offline learning to keep the abstract knowledge about the environment up to date and increase the risk aversion of the agent based on the most recent evidence it collected online.
§ CONCLUSION
We presented our investigation of a safer model-free reinforcement learning method using counterexample-guided offline training.
We proposed an abstraction strategy to represent the knowledge acquired during online exploration in a succinct, finite MDP model that can consistently and accurately describe safety-relevant dynamics of the explored environment. Counterexample generation methods from probabilistic model checking are then adapted to synthesise small-scale simulation environments capturing scenarios in which the decisions of the agent may lead to the violation of safety requirements. The agent can then train offline within these minimal submodels by replaying concrete transitions recorded during past online exploration consistent with the counterexample, using a reward scheme focused on reducing the likelihood of selecting actions that may eventually lead to visiting explored unsafe concrete states again. The Q-values penalised during the offline training phases implicitly reduce the risk of repeating unsafe behaviors during subsequent online exploration, while newly explored paths feed information back to the next offline learning phase.
The alternation of online exploration – and abstraction refinement – and counterexample guided learning can ultimately lead to higher safety rates during exploration, without significant reduction in the achieved cumulative reward, as demonstrated in our preliminary evaluation on problems from previous literature and the OpenAI Gym. While this paper focused on improving Q-Learning (and the related DQN algorithm), the fundamental framework is not specific to Q-Learning, and we plan to explore its impact on other learning algorithms in future work.
Data availability. An artifact including the prototype Python implementation used for the experiments has been accepted by QEST 2023 artifact evaluation. The implementation of our method is available at Github: <https://github.com/xtji/CEX-guided-RL>.
achiam2017constrained
Achiam, J., Held, D., Tamar, A., Abbeel, P.: Constrained policy optimization.
In: International Conference on Machine Learning. pp. 22–31. PMLR (2017)
alshiekh2018safe
Alshiekh, M., Bloem, R., Ehlers, R., Könighofer, B., Niekum, S., Topcu, U.:
Safe reinforcement learning via shielding. In: Thirty-Second AAAI Conference
on Artificial Intelligence (2018)
baier2008principles
Baier, C., Katoen, J.P.: Principles of model checking. MIT press (2008)
bellman1957markovian
Bellman, R.: A markovian decision process. Journal of mathematics and mechanics
pp. 679–684 (1957)
bharadhwaj2020conservative
Bharadhwaj, H., Kumar, A., Rhinehart, N., Levine, S., Shkurti, F., Garg, A.:
Conservative safety critics for exploration. arXiv preprint arXiv:2010.14497
(2020)
blumer1989learnability
Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.K.: Learnability and the
vapnik-chervonenkis dimension. Journal of the ACM (JACM) 36(4),
929–965 (1989)
brockman2016openai
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang,
J., Zaremba, W.: Openai gym. arXiv preprint arXiv:1606.01540 (2016)
brunke2022safe
Brunke, L., Greeff, M., Hall, A.W., Yuan, Z., Zhou, S., Panerati, J.,
Schoellig, A.P.: Safe learning in robotics: From learning-based control to
safe reinforcement learning. Annual Review of Control, Robotics, and
Autonomous Systems 5, 411–444 (2022)
bshouty1998noise
Bshouty, N.H., Goldman, S.A., Mathias, H.D., Suri, S., Tamaki, H.:
Noise-tolerant distribution-free learning of general geometric concepts.
Journal of the ACM (JACM) 45(5), 863–890 (1998)
buckman2020importance
Buckman, J., Gelada, C., Bellemare, M.G.: The importance of pessimism in
fixed-dataset policy optimization. arXiv preprint arXiv:2009.06799 (2020)
vcevska2019counterexample
Češka, M., Hensel, C., Junges, S., Katoen, J.P.:
Counterexample-driven synthesis for probabilistic program sketches. In:
International Symposium on Formal Methods. pp. 101–120. Springer (2019)
dalal2018safe
Dalal, G., Dvijotham, K., Vecerik, M., Hester, T., Paduraru, C., Tassa, Y.:
Safe exploration in continuous action spaces. arXiv preprint arXiv:1801.08757
(2018)
desharnais2008approximate
Desharnais, J., Laviolette, F., Tracol, M.: Approximate analysis of
probabilistic processes: Logic, simulation and games. In: 2008 Fifth
International Conference on Quantitative Evaluation of Systems. pp. 264–273.
IEEE (2008)
downey2021think
Downey, A.: Think Bayes. O'Reilly Media (2021),
<https://books.google.com/books?id=Vh4vEAAAQBAJ>
filieri2014statistical
Filieri, A., Păsăreanu, C.S., Visser, W., Geldenhuys, J.:
Statistical symbolic execution with informed sampling. In: Proceedings of the
22nd ACM SIGSOFT International Symposium on Foundations of Software
Engineering. pp. 437–448 (2014)
FultonPlatzer2018
Fulton, N., Platzer, A.: Safe reinforcement learning via formal methods: Toward
safe control through proof and learning. Proceedings of the AAAI Conference
on Artificial Intelligence 32(1) (Apr 2018)
garcia2012safe
Garcia, J., Fernández, F.: Safe exploration of state and action spaces in
reinforcement learning. Journal of Artificial Intelligence Research
45, 515–564 (2012)
garcia2015comprehensive
Garcıa, J., Fernández, F.: A comprehensive survey on safe reinforcement
learning. Journal of Machine Learning Research 16(1), 1437–1480
(2015)
gurobi
Gurobi Optimization, LLC: Gurobi Optimizer Reference Manual (2022),
<https://www.gurobi.com>
counterexampleGeneration2009
Han, T., Katoen, J.P., Berteun, D.: Counterexample generation in probabilistic
model checking. IEEE Transactions on Software Engineering 35(2),
241–257 (2009). 10.1109/TSE.2009.5
hansson1994logic
Hansson, H., Jonsson, B.: A logic for reasoning about time and reliability.
Formal aspects of computing 6(5), 512–535 (1994)
hasanbeig2018logically
Hasanbeig, M., Abate, A., Kroening, D.: Logically-constrained reinforcement
learning. arXiv preprint arXiv:1801.08099 (2018)
lcrl_tool
Hasanbeig, M., Kroening, D., Abate, A.: LCRL: Certified policy synthesis via
logically-constrained reinforcement learning - implementation,
<https://github.com/grockious/lcrl>
hasanbeig2020deep
Hasanbeig, M., Kroening, D., Abate, A.: Deep reinforcement learning with
temporal logics. In: Bertrand, N., Jansen, N. (eds.) Formal Modeling and
Analysis of Timed Systems. pp. 1–22. Springer, Cham (2020).
10.1007/978-3-030-57628-8
HasanbeigKA22
Hasanbeig, M., Kroening, D., Abate, A.: LCRL: certified policy synthesis via
logically-constrained reinforcement learning. In: Ábrahám, E.,
Paolieri, M. (eds.) Quantitative Evaluation of Systems - 19th International
Conference, QEST 2022, Warsaw, Poland, September 12-16, 2022, Proceedings.
Lecture Notes in Computer Science, vol. 13479, pp. 217–231. Springer (2022).
10.1007/978-3-031-16336-4_11,
<https://doi.org/10.1007/978-3-031-16336-4_11>
huang2018learning
Huang, J., Wu, F., Precup, D., Cai, Y.: Learning safe policies with expert
guidance. Advances in Neural Information Processing Systems 31
(2018)
jansen2018shielded
Jansen, N., Könighofer, B., Junges, S., Bloem, R.: Shielded decision-making
in mdps. arXiv preprint arXiv:1807.06096 (2018)
kim2020safe
Kim, Y., Allmendinger, R., López-Ibáñez, M.: Safe learning and
optimization techniques: Towards a survey of the state of the art. In:
International Workshop on the Foundations of Trustworthy AI Integrating
Learning, Optimization and Reasoning. pp. 123–139. Springer (2020)
kumar2019stabilizing
Kumar, A., Fu, J., Soh, M., Tucker, G., Levine, S.: Stabilizing off-policy
q-learning via bootstrapping error reduction. Advances in Neural Information
Processing Systems 32 (2019)
kumar2020conservative
Kumar, A., Zhou, A., Tucker, G., Levine, S.: Conservative q-learning for
offline reinforcement learning. Advances in Neural Information Processing
Systems 33, 1179–1191 (2020)
lawler1966branch
Lawler, E.L., Wood, D.E.: Branch-and-bound methods: A survey. Operations
research 14(4), 699–719 (1966)
levine2020offline
Levine, S., Kumar, A., Tucker, G., Fu, J.: Offline reinforcement learning:
Tutorial, review, and perspectives on open problems. arXiv preprint
arXiv:2005.01643 (2020)
liu2020robust
Liu, A., Shi, G., Chung, S.J., Anandkumar, A., Yue, Y.: Robust regression for
safe exploration in control. In: Learning for Dynamics and Control. pp.
608–619. PMLR (2020)
mason2017assured
Mason, G.R., Calinescu, R.C., Kudenko, D., Banks, A.: Assured reinforcement
learning with formally verified abstract policies. In: 9th International
Conference on Agents and Artificial Intelligence (ICAART). York (2017)
mcewen2014recurring
McEwen, A.S., Dundas, C.M., Mattson, S.S., Toigo, A.D., Ojha, L., Wray, J.J.,
Chojnacki, M., Byrne, S., Murchie, S.L., Thomas, N.: Recurring slope lineae
in equatorial regions of Mars. Nature geoscience 7(1), 53–58
(2014). 10.1038/ngeo2014
mnih2013playing
Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra,
D., Riedmiller, M.: Playing atari with deep reinforcement learning. arXiv
preprint arXiv:1312.5602 (2013)
mnih2015human
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A.A., Veness, J., Bellemare, M.G.,
Graves, A., Riedmiller, M., Fidjeland, A.K., Ostrovski, G., et al.:
Human-level control through deep reinforcement learning. nature
518(7540), 529–533 (2015)
moldovan2012safe
Moldovan, T.M., Abbeel, P.: Safe exploration in markov decision processes.
arXiv preprint arXiv:1205.4810 (2012)
openaibaselinesdqn
OpenAI: Stable baselines version 3 - dqn,
<https://stable-baselines3.readthedocs.io/en/master/modules/dqn.html>
pham2018optlayer
Pham, T.H., De Magistris, G., Tachibana, R.: Optlayer-practical constrained
optimization for deep reinforcement learning in the real world. In: 2018 IEEE
International Conference on Robotics and Automation (ICRA). pp. 6236–6243.
IEEE (2018)
prakash2019improving
Prakash, B., Khatwani, M., Waytowich, N., Mohsenin, T.: Improving safety in
reinforcement learning using model-based architectures and human
intervention. In: The Thirty-Second International Flairs Conference (2019)
sharma2013verification
Sharma, R., Gupta, S., Hariharan, B., Aiken, A., Nori, A.V.: Verification as
learning geometric concepts. In: International Static Analysis Symposium. pp.
388–411. Springer (2013)
siegel2020keep
Siegel, N.Y., Springenberg, J.T., Berkenkamp, F., Abdolmaleki, A., Neunert, M.,
Lampe, T., Hafner, R., Heess, N., Riedmiller, M.: Keep doing what worked:
Behavioral modelling priors for offline reinforcement learning. arXiv
preprint arXiv:2002.08396 (2020)
abstractDomains
Singh, G., Püschel, M., Vechev, M.: A practical construction for
decomposing numerical abstract domains. Proc. ACM Program. Lang.
2(POPL) (dec 2017). 10.1145/3158143,
<https://doi.org/10.1145/3158143>
stooke2020responsive
Stooke, A., Achiam, J., Abbeel, P.: Responsive safety in reinforcement learning
by pid lagrangian methods. In: International Conference on Machine Learning.
pp. 9133–9143. PMLR (2020)
sui2015safe
Sui, Y., Gotovos, A., Burdick, J., Krause, A.: Safe exploration for
optimization with gaussian processes. In: International conference on machine
learning. pp. 997–1005. PMLR (2015)
tessler2018reward
Tessler, C., Mankowitz, D.J., Mannor, S.: Reward constrained policy
optimization. arXiv preprint arXiv:1805.11074 (2018)
urpi2021risk
Urpí, N.A., Curi, S., Krause, A.: Risk-averse offline reinforcement
learning. arXiv preprint arXiv:2102.05371 (2021)
wachi2018safe
Wachi, A., Sui, Y., Yue, Y., Ono, M.: Safe exploration and optimization of
constrained mdps using gaussian processes. In: Proceedings of the AAAI
Conference on Artificial Intelligence. vol. 32 (2018)
watkins1992q
Watkins, C.J., Dayan, P.: Q-learning. Machine learning 8(3),
279–292 (1992)
wimmer2013high
Wimmer, R., Jansen, N., Vorpahl, A., Ábrahám, E., Katoen, J.P., Becker,
B.: High-level counterexamples for probabilistic automata. In: International
Conference on Quantitative Evaluation of Systems. pp. 39–54. Springer (2013)
wu2019behavior
Wu, Y., Tucker, G., Nachum, O.: Behavior regularized offline reinforcement
learning. arXiv preprint arXiv:1911.11361 (2019)
xu2022constraints
Xu, H., Zhan, X., Zhu, X.: Constraints penalized q-learning for safe offline
reinforcement learning. In: Proceedings of the AAAI Conference on Artificial
Intelligence. vol. 36, pp. 8753–8760 (2022)
zhou2018safety
Zhou, W., Li, W.: Safety-aware apprenticeship learning. In: International
Conference on Computer Aided Verification. pp. 662–680. Springer (2018)
§ APPENDIX
§.§ Minimal Counterexamples Generation
Counterexample of the Safety Specification: The solution of the following optimisation problem (adapted from <cit.>) is a minimal counterexample of the safety specification P_≤λ [F unsafe]:
minimise  -1/2 ω_0 p_s_0 + ∑_ℓ∈ L ω(ℓ) x_ℓ,  such that
p_s_0 > λ
∀ s ∈ T. p_s = 1
∀ s ∈ S ∖ T. ∑_a ∈ P(s) π_s, a≤ 1
∀ s ∈ S ∖ T. p_s≤∑_a ∈ P(s) π_s, a
∀ s ∈ S ∖ T, ∀ a ∈ A, ∀ℓ∈ L(s, a, s'). p_s, a, s'≤ x_ℓ
∀ s ∈ S ∖ T, ∀ a ∈ A, p_s, a, s'≤ P(s,a,s') · p_s'
∀ s ∈ S ∖ T, ∀ a ∈ A. p_s≤ (1 - π_s, a) + ∑_s' : P(s,a,s')>0 p_s, a, s'
∀ (s, a) ∈ P_T^Prob . π_s, a = ∑_ℓ∈ L(s, a, s') x_ℓ
∀ (s, a) ∈ P_T^Prob ∀ℓ∈ L(s, a, s'). r_s < r_s' + (1 - x_ℓ)
where S is the state space of the model, T is the set of states labelled as unsafe, p_s represents the probability of reaching any state in T from state s, π_s,a∈{0,1} indicates that action a is selected in state s, p_s,a,s' represents the probability contribution of the transition from s to s' via action a when this transition is selected to be part of the counterexample, ℓ∈ L(s,a,s') is the label identifying a transition from s to s' via action a (not to be confused with the function L labelling states of the model) such that x_ℓ=1 iff the transition is included in the counterexample and x_ℓ=0 otherwise, and P_T^Prob represents the set of problematic state-action pairs, i.e., those from which the minimal probability of reaching the target states is zero while the maximal probability is non-zero.
Intuitively, the optimisation problem aims to find a policy π that will make the agent violate the safety specification within the smallest counterexample sub-model, composed of all the transitions (s,a,s') from the original model whose corresponding x_ℓ is 1.
To this goal, eq. <ref> requires the probability of reaching an unsafe state from the initial state s_0 to be >λ (violation of the safety specification); eq. <ref> fixes p_s=1 for all the unsafe states; eq. <ref> imposes that the agent selects at most one action a for each state s; if no action is chosen in a state s, eq. <ref> ensures that p_s=0. Further constraints ensuring minimality and preventing deadlock loops are also defined in <cit.>. We further specialise the objective based on the approach in <cit.> by setting the weights ω(ℓ) of the selector variables x_ℓ to one minus the normalised Q-value corresponding to the state-action pair in ℓ, while ω_0 > max{ω(ℓ) | ∀ℓ∈ L_c ∧ ω(ℓ) > 0 }. With this weighting, we encourage the selection of labels with larger Q-value, i.e., corresponding to the violating behaviours most likely to be selected by the agent, while at the same time minimising the size of the sub-model. Eq. <ref> ensures that the contribution of the transition (s,a,s') is 0 if such transition is not included in the counterexample; otherwise, eq. <ref>, p_s,a,s' is bounded by the probability of the corresponding transition in the model – P(s,a,s') – times the probability p_s' of reaching the target from s'. Eq. <ref> ensures that if action a is selected in state s, the probability of reaching T from s is bounded by the sum of the probabilities of reaching T from its successors given a. Finally, two additional constraints are defined in <cit.> to prevent the agent from getting stuck in an infinite loop that would prevent it from reaching T.
Compared to the general solution in <cit.>, for the case of tabular Q-Learning we heuristically specialise the objective function by assigning to the selection of a state-action pair (s,a) a cost proportional to -Q̅(s,a), where Q̅(s,a) is the normalised average Q-value of a over the concrete states represented by an abstract state. Because the minimal-size counterexample is, in general, not unique, this additional cost prioritises the inclusion of actions with larger Q-value, i.e., actions most likely to be selected by the agent, while at the same time minimising the size of the sub-model.
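To make the encoding concrete, the sketch below shows how a simplified subset of the constraints above could be written with the Gurobi Python interface (Gurobi is the MILP solver listed in the references). The MDP encoding, the variable names and the omission of the deadlock- and minimality-related constraints are our own simplifications for illustration; this is not the implementation used in the paper.

import gurobipy as gp
from gurobipy import GRB

def minimal_counterexample(states, actions, P, T, s0, lam, w, w0):
    # states/actions: iterables; P: dict {(s, a, s'): probability}; T: unsafe states
    # lam: probability threshold; w: dict {(s, a, s'): weight}, e.g. one minus the
    # normalised Q-value; w0: objective weight for p_{s0}
    m = gp.Model("min_cex")
    p = m.addVars(states, lb=0.0, ub=1.0, name="p")              # reachability probabilities
    pi = m.addVars([(s, a) for s in states for a in actions],
                   vtype=GRB.BINARY, name="pi")                   # action selectors
    x = m.addVars(P.keys(), vtype=GRB.BINARY, name="x")           # transition selectors
    pc = m.addVars(P.keys(), lb=0.0, ub=1.0, name="pc")           # probability contributions

    m.addConstr(p[s0] >= lam + 1e-6)                              # p_{s0} > lambda (violation)
    for s in T:
        m.addConstr(p[s] == 1.0)
    for s in states:
        if s in T:
            continue
        m.addConstr(gp.quicksum(pi[s, a] for a in actions) <= 1)  # at most one action per state
        m.addConstr(p[s] <= gp.quicksum(pi[s, a] for a in actions))
        for a in actions:
            succ = [sp for (ss, aa, sp) in P if ss == s and aa == a]
            for sp in succ:
                m.addConstr(pc[s, a, sp] <= x[s, a, sp])          # zero unless the transition is selected
                m.addConstr(pc[s, a, sp] <= P[s, a, sp] * p[sp])
            if succ:
                m.addConstr(p[s] <= (1 - pi[s, a]) +
                            gp.quicksum(pc[s, a, sp] for sp in succ))

    m.setObjective(-0.5 * w0 * p[s0] +
                   gp.quicksum(w[k] * x[k] for k in P), GRB.MINIMIZE)
    m.optimize()
    return [k for k in P if x[k].X > 0.5]                         # transitions forming the counterexample

The selected transitions induce the counterexample sub-model from which concrete transitions in the agent's past experience near unsafe regions are drawn during the offline phase.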
§.§ On the Convergence of the Abstraction and Learning Processes
In this section we will discuss the main convergence aspects of the proposed abstraction method and of the alternating online-offline learning process.
Convergence of the abstraction.
The core of the proposed abstraction of safety-relevant aspects of the explored concrete state space revolves around solving the minimal red-blue set cover with polyhedral predicates (restricted to hyperboxes in this work) introduced in sec. <ref>. We discuss how an abstraction of arbitrary accuracy will almost surely eventually be constructed. We recall that the set cover problem only allows the over-approximation of unsafe regions: unsafe concrete points are always mapped to unsafe abstract states, while safe concrete points may be misclassified as unsafe with at most a prescribed false positive rate.
For simplicity, let us focus the discussion on learning for MDPs with discrete state space. Because we allow hyperboxes of minimum size d>0, a continuous space is implicitly discretised, with every sample from a continuous space included within a box of size d or larger.
Proposition 1. Every reachable concrete state will eventually be reached with probability 1 during online exploration; in particular, every reachable unsafe state will eventually be reached.
This follows from the fact that the online exploration of the environment is never strictly limited by the use of counterexample guidance. Rather, offline phases aim at reducing the relative Q-value of actions that may eventually lead to the violation of the safety requirement, thus reducing the likelihood of their selection during online learning. Because the agent is always allowed, with a controllable probability, to explore any action from the current state, every reachable state maintains a strictly positive probability of being reached.
Proposition 2. Every reachable unsafe state will eventually be mapped to an unsafe abstract state.
Proposition 2 follows from Proposition 1 and the preservation of the safety invariant, which ensures that unsafe concrete states are always mapped to unsafe abstract states. Finally, we consider the proposition restated from sec. <ref>.
Proposition <ref> relies on a PAC learnability argument. At any time during the exploration, the explored region of the state space is bounded by the most extreme states that have been visited.
During exploration, the agent can randomly select an action among those available in the current state with probability ϵ_QL > 0. This exploration probability may change over time, but it must remain strictly larger than zero so that every state-action pair can be selected infinitely often, as required for the convergence of Q-Learning (cf. sec. <ref>). Let the values of ϵ_QL during exploration be bounded from below by a strictly positive constant. Then, given this lower bound and the concrete MDP's transition relation, the probability of visiting any (reachable) concrete state is bounded from below by a value p_QL.
Recalling Theorem 2.1 in <cit.>, if a learning concept L has a finite Vapnik–Chervonenkis (VC) dimension and L is consistent with a uniform sample of size max(4/u̅ log(2/δ), 8·VC/u̅ log(13/u̅)), then the prediction error of the learning concept L can be limited to at most u̅ ∈ (0, 1] with a probability of at least 1-δ.
The expressiveness of L, and hence the VC dimension of the learning concept, is determined by the abstract domain. For the set coverage learning concept in def. <ref>, the VC dimension of a single predicate of the form C_i = { x | ω·x + b ≤ 0 } is finite. The actual VC dimension for a specific abstract domain and scenario can easily be verified; e.g., the VC dimension of an axis-parallel box is v=4.
According to Lemma 3.2.3 in <cit.>, the VC dimension of a union set C of convex polygons C_i is less than 2vs·log(3s) for all s ≥ 1, where s is the number of sets in C. Hence, the sample size required to limit the abstraction error to u ≤ u̅ for the set cover solution C is max(4/u̅ log(2/δ), 16vs·log(3s)/u̅ · log(13/u̅)). By conservatively assuming that each concrete state is sampled with probability at least p_QL, we can conclude that an abstraction with misclassification error less than a prescribed u̅ can eventually be learned with arbitrary probability 1 - δ.
These upper bounds are typically far above the actual number of sampled paths required in practical scenarios and mainly serve to ensure the asymptotic convergence of the abstraction process. In sec. <ref>, we demonstrate empirically on some of the experimental environments how the actual abstraction converges to a prescribed maximum misclassification error for different configuration hyperparameters.
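For concreteness, the sample-size bound quoted above can be evaluated numerically as follows. The value v = 4 for an axis-parallel box follows the text, while the use of the natural logarithm and the example numbers are our illustrative assumptions; the helper is not part of the paper's tooling.

import math

def cover_sample_bound(u_bar, delta, s, v=4):
    """Upper bound on the uniform sample size needed to learn an s-box cover
    with misclassification error <= u_bar and probability >= 1 - delta."""
    vc_union = 2 * v * s * math.log(3 * s)          # VC-dimension bound for the union of s boxes
    return max((4.0 / u_bar) * math.log(2.0 / delta),
               (8.0 * vc_union / u_bar) * math.log(13.0 / u_bar))

# e.g. a cover with 10 boxes, 5% error, 99% confidence:
print(int(cover_sample_bound(u_bar=0.05, delta=0.01, s=10)))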
Caveats and practical limitations.
Because our abstract domain uses boxes (with sides parallel to the axes of the state space domain), it is likely to misclassify safe regions whose boundaries cannot be covered exactly with a union of boxes. In general, a similar argument can be formulated for any finite-accuracy abstract domain. A corner case of this abstraction is that a small safe region lying between two unsafe ones that are closer, along some dimension, than the minimum box size d may remain misclassified even if the agent happens to sample a concrete point that could discriminate it. In turn, if the optimal policy requires passing through one such small misclassified region, counterexample guidance could reduce the likelihood of the agent exploring it – but never entirely prevent its exploration. In practice, the problem can be mitigated by choosing a smaller value of d, or by using an abstract domain with a more appropriate performance/accuracy trade-off for the problem at hand, e.g., octagons or polyhedra.
Convergence of the online-offline learning process.
The main aim of offline learning is to discourage the re-exploration of policies that lead to a violation of the safety requirement by penalising the Q-values of the involved actions. The penalisation of explored unsafe behaviours has the effect of encouraging the exploration of alternative actions in a state, because their Q-values “grow” relative to the Q-values of the penalised actions and are thus more likely to be selected next. An underlying assumption for the stability of the method is that there exists an optimal policy that satisfies the safety requirement. In the following, we discuss how introducing offline learning does not prevent convergence to such a policy (in fact, as demonstrated experimentally, it accelerates the convergence to it), including sufficient conditions for such convergence to occur. Finally, we discuss what may happen if all optimal policies violate the safety requirement. We start from the proposition restated from sec. <ref>.
The online learning phases are bound to eventually converge to an optimal policy as long as each state-action pair can be visited infinitely often. The offline phases can decrease the relative likelihood of actions involved in policies leading to a violation of the safety requirement, but never prevent their exploration altogether. As a result, when the online phases converge to an optimal policy which satisfies the safety requirement, and assuming the abstract model has converged as well, no further offline phases will be triggered (except for occasional false-positive triggers from the Bayesian hypothesis test). This happened in all our experiments: occasional offline phases triggered after the agent had converged to mainly exploring policies that satisfy the safety requirement could introduce transient fluctuations in the cumulative reward, but they did not affect convergence to maximum reward in the long run.
If there exists no maximal-reward policy satisfying the safety requirement, the introduction of offline learning may result in oscillations in the Q-values of the actions involved in the maximal-reward policy discovered by the agent. Such oscillations arise from the fact that the maximal-reward policy the online phase converged to is itself, by hypothesis, a counterexample to the safety requirement. In this situation, our method may still reduce the number of failures during exploration by reducing the frequency at which the agent explores the maximal-reward policy, but it will not prevent the agent from converging to such a maximal-reward policy in the long run. (Notice that if an alternative policy that achieves the same expected reward while satisfying the safety requirement existed, the introduction of offline learning would encourage its discovery, as discussed in Proposition <ref>; however, we are here assuming such a policy does not exist.) In this situation, the designer has to either relax the safety requirement or decide to privilege reward over safety, which can be achieved by, e.g., limiting the number of times the same counterexample can be used in an offline phase. This situation is mentioned for the sake of completeness, but it falls outside the scope of this paper, where it is assumed that a maximal-reward policy that satisfies the safety requirement exists and its learning is accelerated via counterexample guidance.
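The alternating process discussed in this section can be summarised by the following schematic loop. The environment interface, the safety-check callable and the counterexample extraction are abstract placeholders standing in for the components described in the text; the penalty value and the hyperparameters are arbitrary illustrative choices, not the settings used in our experiments.

import random
from collections import defaultdict

def online_offline_qlearning(env, episodes, check_violation, extract_counterexample,
                             eps=0.1, alpha=0.1, gamma=0.99, penalty=-1.0):
    """env must expose reset(), actions(s) and step(s, a) -> (s', reward, done).
    check_violation and extract_counterexample stand in for the Bayesian test
    and the counterexample-guided selection of unsafe state-action pairs."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:                                   # online phase (epsilon-greedy)
            if random.random() < eps:
                a = random.choice(env.actions(s))         # exploration is never fully blocked
            else:
                a = max(env.actions(s), key=lambda act: Q[s, act])
            s2, r, done = env.step(s, a)
            best_next = 0.0 if done else max(Q[s2, act] for act in env.actions(s2))
            Q[s, a] += alpha * (r + gamma * best_next - Q[s, a])
            s = s2
        if check_violation():                             # offline phase when the safety
            for (s, a) in extract_counterexample(Q):      # requirement appears violated:
                Q[s, a] += alpha * (penalty - Q[s, a])    # penalise the involved actions
    return Q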
§.§ Empirical Evaluation of the Abstraction and Sensitivity to Hyperparameters
In this section, we provide supplemental details about the abstraction evaluation and about the sensitivity of the abstraction quality and the learning outcomes to the hyperparameters. We first list the additional hyperparameters used in the baseline QL/DQN and in our method to ease reproducibility. Then, we illustrate the concept of incremental abstraction, starting with a concrete example that demonstrates how it functions and how it influences the reinforcement learning process. Next, we analyse the robustness of our learning results with respect to hyperparameter tuning. This analysis is twofold, focusing on both online- and offline-related hyperparameters, and examines how these parameters affect the quality of the abstraction model as well as the overall learning performance.
Additional Hyperparameters
All common hyper-parameters used with QL, DQN and Q-CEX are the same as listed in tab. <ref>. For DQN-specific additional hyper-parameters, we use the default values given in <cit.>. For QL-LCRL-specific additional hyper-parameters, we use the same values as in <cit.>.
Incremental Geometric Abstraction. When unsafe states that were previously abstracted as safe are discovered during online exploration, the abstract state space is updated incrementally using a branch-and-bound strategy to separate the new unsafe points. We demonstrate
this process taking the Frozenlake8x8 environment in fig. <ref> as an example. The initial abstract MDP used for counterexample generation, after the first 50 online episodes, is shown in fig. <ref>. As the number of explored points increases, more safety-relevant information is acquired, and we apply the branch-and-bound method incrementally to cover the newly discovered unsafe states and refine the safe states adjacent to the newly discovered unsafe regions, as shown in fig. <ref>. The ground-truth layout is included in fig. <ref> for reference, with the unsafe regions marked.
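The following sketch illustrates the refinement step on a single box: greedy bisection along the widest dimension stands in for the full branch-and-bound search with false-positive-rate control used in the paper, and all names are placeholders.

def refine(box, unsafe_pt, safe_pts, d):
    """box: list of (lo, hi) intervals; unsafe_pt: newly observed unsafe point inside box;
    safe_pts: previously explored safe points inside box; d: minimum box side.
    Returns (safe_boxes, unsafe_boxes) partitioning the original box."""
    if all(hi - lo <= d for lo, hi in box) or not any(inside(p, box) for p in safe_pts):
        return [], [box]                                  # stop: relabel the whole box unsafe
    dim = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, hi = box[dim]
    mid = 0.5 * (lo + hi)
    halves = [box[:dim] + [(lo, mid)] + box[dim + 1:],
              box[:dim] + [(mid, hi)] + box[dim + 1:]]
    safe, unsafe = [], []
    for half in halves:
        if inside(unsafe_pt, half):                       # keep splitting around the unsafe point
            s, u = refine(half, unsafe_pt, [p for p in safe_pts if inside(p, half)], d)
            safe += s
            unsafe += u
        else:
            safe.append(half)                             # the other half keeps its safe label
    return safe, unsafe

def inside(point, box):
    return all(lo <= c <= hi for c, (lo, hi) in zip(point, box))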
Robustness of CEX-guided RL against hyperparameter tuning. We assessed the robustness of our proposed method in terms of hyperparameter tuning, focusing on two main aspects: the quality of the resulting abstraction model and the learning performance, measured by accumulated safety rates and average rolling rewards.
To evaluate the quality of the simulation model, we present results for varying false positive rates (FPR) and values of ϵ for ϵ-bisimulation merging in the Frozenlake8x8 (fig. <ref>) and MarsRover (fig. <ref>) environments. A lower FPR results in a more precise abstraction, which includes fewer safe concrete states in the unsafe abstract states (in environments like MarsRover, safe concrete states cannot be completely excluded with boxes).
Smaller ϵ values lead to less merging of similar states in the abstract state space. We recall the hierarchical ϵ-simulation concept, demonstrating that its inherent structure provides a beneficial trade-off between computational complexity and accuracy in proximity to unsafe states. Because the merge operation is constrained to states with identical labels and follows a hierarchical fashion (i.e., states from lower levels merge into higher ones, with transition probabilities at the higher level normalised by the sum from merged lower-level states), a lesser degree of merge due to ϵ-simulation is anticipated in the vicinity of unsafe states.
This hierarchical process indirectly optimises the resolution in the state space where it is most crucial, specifically near unsafe states, while permitting a coarser merge in safer regions. Such non-uniform merging fosters a more efficient balance between the computational complexity of the counterexample generation optimisation problem, which depends on the number of states in the abstraction, and the effectiveness of the counterexamples in accurately selecting concrete transitions from the agent's past experience near unsafe concrete regions.
Intuitively, given a specific ϵ, the final minimised abstraction is expected to be more precise near states labeled as unsafe due to the merge operation's restriction to adjacent abstract states. In these areas, a limited number of steps and corresponding traversal of adjacent states suffices to differentiate the trajectory outcome under consideration. In contrast, a coarser abstraction is produced when more steps are required to determine the trajectory's outcome. This characteristic of the hierarchical ϵ-simulation provides a tailored abstraction mechanism that varies with the safety of the region, enhancing the effectiveness of the abstract model.
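As a rough illustration of the merge test (not the formal hierarchical, per-action ϵ-simulation used in the paper), two abstract states with the same label could be merged when their outgoing distributions are ϵ-close, pooling and renormalising their transition mass as described above; the total-variation test below is our simplified stand-in for the formal relation.

def tv_distance(d1, d2):
    keys = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(k, 0.0) - d2.get(k, 0.0)) for k in keys)

def try_merge(s1, s2, label, dist, eps):
    """label: dict state -> label; dist: dict state -> {successor: probability}.
    Returns the merged outgoing distribution, or None if the states cannot be merged."""
    if label[s1] != label[s2] or tv_distance(dist[s1], dist[s2]) > eps:
        return None                                       # only epsilon-close, same-label states merge
    merged = {}
    for d in (dist[s1], dist[s2]):
        for succ, prob in d.items():
            merged[succ] = merged.get(succ, 0.0) + prob
    total = sum(merged.values())
    return {succ: prob / total for succ, prob in merged.items()}   # renormalised, as in the text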
Regarding learning performance, we demonstrate in tab. <ref> that the average accumulated safety rate and the average rolling reward are robust under different hyperparameters, by varying the penalty and the number of offline episodes per simulation model during the offline learning phase, and the Bayes factor and the safety-check interval during online exploration.
To estimate the probability of safety requirement violation and trigger offline learning when necessary, we utilise the Bayesian hypothesis testing estimator <cit.> with the Bayes factor as a hyperparameter. This estimator can be configured with a minimum number of samples required before its decision is accepted, and it then determines whether the safety requirement is satisfied or violated based on the samples collected during the most recent online learning session. If the likelihood of violating the safety requirement significantly surpasses that of satisfying it, the offline learning phase is subsequently initiated.
Within the online learning session, we assess the Bayes factor of these two hypotheses, represented as P(H_0|S)/P(H_1|S), using the real-time experience dataset. The magnitude of the chosen Bayes factor threshold determines how frequently offline learning is triggered. This setting entails a trade-off between safety and computational cost: a higher triggering frequency leads to increased computational cost. A high-frequency setting may also induce over-conservative learning performance; this happens when excessive offline guidance inadvertently penalises exploration that entails a reasonable level of risk. In corner cases with very small Bayes factor thresholds, the agent may opt to explore only safe regions and fail to achieve the learning objective.
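A minimal sketch of such a trigger, assuming Bernoulli safety outcomes per episode and a uniform Beta prior over the violation probability (both our assumptions, not necessarily those of the cited estimator), could look as follows; the threshold and the decision rule are illustrative.

from scipy.stats import beta

def violation_posterior_odds(k, n, lam, a0=1.0, b0=1.0):
    """k: episodes violating the requirement out of the last n; lam: allowed violation
    probability. Returns P(H0 | S) / P(H1 | S) with H0: p > lam and H1: p <= lam."""
    post = beta(a0 + k, b0 + n - k)          # Beta posterior after n Bernoulli outcomes
    return (1.0 - post.cdf(lam)) / post.cdf(lam)

# trigger the offline phase when the odds exceed a chosen threshold, e.g.:
# if violation_posterior_odds(k, n, lam=0.2) > 10.0: run_offline_phase(...)   # hypothetical call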
|
http://arxiv.org/abs/2307.04027v1 | 20230708182951 | Slow-roll inflation and growth of perturbations in Kaniadakis Cosmology | [
"Gaetano Lambiase",
"Giuseppe Gaetano Luciano",
"Ahmad Sheykhi"
] | gr-qc | [
"gr-qc",
"hep-ph",
"hep-th"
] | |
http://arxiv.org/abs/2307.05079v1 | 20230711072520 | Decay Pattern of Pygmy States Observed in Neutron-Rich 26 Ne | [
"J. Gibelin",
"D. Beaumel",
"T. Motobayashi",
"Y. Blumenfeld",
"N. Aoi",
"H. Baba",
"Z. Elekes",
"S. Fortier",
"N. Frascaria",
"N. Fukuda",
"T. Gomi",
"K. Ishikawa",
"Y. Kondo",
"T. Kubo",
"V. Lima",
"T. Nakamura",
"A. Saito",
"Y. Satou",
"J. -A. Scarpaci",
"E. Takeshita",
"S. Takeuchi",
"T. Teranishi",
"Y. Togano",
"A. M. Vinodkumar",
"Y. Yanagisawa",
"K. Yoshida"
] | nucl-ex | [
"nucl-ex",
"nucl-th"
] |
[email protected] address: LPC Caen, Université de Caen, F-14050 Caen Cedex, France
Coulomb excitation of the exotic neutron-rich nucleus ^26Ne on a
^208Pb target was measured at 58 MeV/u in order to search for
low-lying E1 strength above the neutron emission threshold.
This radioactive beam experiment was carried out at the RIKEN
Accelerator Research Facility.
Using the invariant mass method in the ^25Ne+n channel, we
observe a sizable amount of E1 strength between 6 and 10 MeV
excitation energy. By performing a multipole decomposition of the
differential cross-section, a reduced dipole transition probability
of B(E1)=0.49±0.16 e^2fm^2 is deduced, corresponding to
4.9±1.6% of the Thomas-Reiche-Kuhn sum rule. For the first
time, the decay pattern of low-lying strength in a neutron-rich
nucleus is measured. The extracted decay pattern is not consistent
with several mean field theory descriptions of the pygmy states.
24.30.Gd, 24.30.Cz, 25.70.De
Decay Pattern of Pygmy States Observed in Neutron-Rich ^26Ne
K. Yoshida
August 12, 2023
=========================================================
The advent of beams of atomic nuclei with large neutron/proton ratios
has offered the possibility to investigate new phenomena associated
with the excess of neutrons. An often quoted property of such exotic
nuclei is the halo effect, an abnormal extension of matter
distribution observed for the first time in light neutron rich
isotopes in the mid 80's <cit.>. Beyond static
properties, the question of the occurrence of new dynamical modes
associated with the excess neutrons has been investigated both
theoretically and experimentally. Predictions in favour of such modes
have been given in the early 90's <cit.>. In
these calculations, the dipole response of neutron-rich (n-rich)
nuclei exhibits a small component at energies lower than the standard
Giant Dipole Resonance (GDR), often depicted as the oscillation of a
deeply bound core against a neutron halo or skin, giving rise to a
so-called pygmy resonance. Such modifications of the response function
of nuclei have direct implications for astrophysics. The strong
influence of an – even small – percentage of E1 strength located
above particle threshold on neutron capture reactions has been studied
in <cit.>. More recently, the link between pygmy dipole
strength, neutron skin thickness and symmetry energy in asymmetric
nuclear matter, which has a strong impact on several neutron-star
properties has been stressed <cit.>. Experimentally,
the presence of low-lying dipole strength exhausting a sizable amount
of the Thomas-Reiche-Kuhn (TRK) energy weighted sum rule (EWSR) in
n-rich nuclei is now established. It was first revealed in light
drip-line nuclei in breakup reactions using high-Z targets
<cit.>. Later on, the non-resonant nature of the dipole
strength found in some light n-rich nuclei such as ^11Be was
stated <cit.>. In heavier nuclei, low-lying dipole
strength has been recently observed in Coulomb breakup experiments at
high energy performed at GSI on Oxygen <cit.> and
Tin <cit.> isotopes. In the latter case, an amount of
nearly 5% of the TRK sum rule has been measured at around 10 MeV
excitation energy in ^130,132Sn nuclei, in agreement with several
mean field models. Interestingly, conflicting interpretations are
provided by the quoted models concerning the microscopic structure of
these states. Within the relativistic quasi-particle random phase
approximation (QRPA) calculations <cit.>, relatively
collective pygmy states are predicted, while non-relativistic QRPA
including phonon coupling involves essentially individual transitions
<cit.>. No conclusion can be brought on the
microscopic structure of these states in the absence of other
observables than the strength distribution. Both approaches
nevertheless agree that the excitations are driven by the excess
neutrons.
In the present work we investigate low-lying dipole strength in the
^26Ne isotope for which an important redistribution of dipole
strength as compared to the stable ^20Ne is predicted by Cao and Ma
<cit.>. In this calculation, almost 5% of the TRK sum rule is
exhausted by a structure centered around 8.5 MeV. This region in
energy is located between the one-neutron and the two-neutron emission
threshold. We performed a Coulomb excitation experiment by bombarding
a lead target with ^26Ne at intermediate energy, and used the
invariant mass method to reconstruct the B(E1) strength from the
^26Ne→^25Ne+n channel. After determining the strength
distribution, we extract for the first time neutron branching ratios
of the populated pygmy states to levels in the daughter nucleus
(^25Ne). These observables provide detailed insight on the
microscopic structure of the populated states, as long as the observed
decay mode is not statistical <cit.>.
The experiment was performed at the RIKEN Accelerator Research
Facility. A secondary ^26Ne beam was produced through fragmentation
of a 95 MeV/u, 60 pnA primary beam on a 2-mm-thick target. The ^26Ne fragments were separated by the RIKEN Projectile
Fragment Separator (RIPS) <cit.>. Beam particle
identification was unambiguously performed by means of the
time-of-flight (TOF) between the production target and the second
focal plane. The 80% pure ^26Ne beam of intensity ∼5×10^3
pps and incident energy 58 MeV/u, was tracked with two parallel-plate
avalanche counters providing incident angle and hit position on the
reaction targets (alternatively 230 mg/cm^2 ^208Pb and 130 mg/cm^2
^27Al). Data obtained with the ^27Al target are used in the
following to estimate the contribution of nuclear excitation to the
data.
The outgoing charged fragments were detected using a set of telescopes
placed at 1.2 m downstream of the target. They consisted of two layers
(X and Y) of 500 μm-thick single-sided silicon strip detectors (SSD) with
5 mm strips which yielded an energy-loss resolution for neon isotopes
of 1.5 MeV (FWHM). An additional layer used 3 mm-thick Si(Li)
detectors made at the Institut de Physique Nucléaire d'Orsay. The
resolution on the remaining energy (E) was 9 MeV (FWHM). Unambiguous
mass and charge identification of all projectile-like fragments was
obtained using the E–ΔE method.
In-beam gamma rays were detected using the 4π gamma array DALI2
<cit.>, which consists of 152 NaI(Tl) detectors placed
around the target. For 1.3 MeV gamma-rays, the measured efficiency is
approximately 15% and the energy resolution is 7% (FWHM). The
Doppler corrected gamma energy distribution obtained in coincidence
with the ^25Ne isotope allows us to identify the gamma decay from
the adopted 1702.7(7), 2030(50), and 3316.4(11) keV excited states.
The hodoscope for neutron detection was an array of 4 layers of 29
plastic rods each, placed 3.5 m downstream of the target.
The total intrinsic efficiency for the detection of 60 MeV
neutrons was calculated to be 25% <cit.>. Finally, 29
thin plastic scintillators covered the front face of the wall in order
to veto charged particles as well as to provide an active beam
stopper. The neutron position was determined with an error of
±3 cm and the energy, from TOF information, with a 2.5 MeV (FWHM)
resolution for the neutrons of interest.
A simulation of the experimental setup using the Geant 3
package <cit.> was performed in order to correct the data
for the experimental acceptance. Using this simulation, the angular
distribution for elastic scattering of ^26Ne on ^208Pb at 55 MeV/u
was obtained and compared with optical model calculations based on an
optical potential for the ^20Ne+^208Pb system at
40 MeV/u <cit.>. Another check for both the simulation
and the optical potential used was provided by the ^26Ne
B(E2;0_1^+→2_1^+) <cit.>, extracted by
comparing the shape and the amplitude of the angular distribution of
the experimental inelastic scattering with the corresponding
theoretical calculation. For the ^26Ne+^27Al reaction, we
empirically generated optical potential parameters <cit.>
and compared the result with the experimental elastic scattering.
Using the invariant mass method, the excitation energy of an unbound
state in the ^AX nucleus decaying to a state in
^A-1X can be expressed by: E^* = E_rel +
S_n + ∑_i E_γ_i, where E_rel is
the relative energy between the neutron and the fragment
^A-1X, S_n the one neutron emission threshold,
and ∑_i E_γ_i the summed energy of the gammas involved in the
subsequent decay of the daughter nucleus
^A-1X. The gamma detection efficiency was not high
enough to perform an event-by-event gamma calorimetry. Hence, our
reconstruction technique took into account independently the
population of every state of the ^A-1X daughter
nucleus <cit.>. This method has been
successfully tested on simulations. We also included the detector
resolutions and hence estimated the excitation energy resolution to be
0.8 MeV at E^* = 8 MeV.
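As a numerical illustration of this reconstruction (with placeholder values for the relative energy and the one-neutron separation energy, which are not quoted in the text):

def excitation_energy(e_rel, s_n, e_gammas=()):
    """E* = E_rel + S_n + sum of gamma energies (all in MeV)."""
    return e_rel + s_n + sum(e_gammas)

# e.g. a hypothetical decay proceeding through the 1.703 MeV state of 25Ne:
print(excitation_energy(e_rel=1.8, s_n=5.6, e_gammas=(1.703,)))   # ~9.1 MeV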
The excitation energy spectra reconstructed for the ^25Ne+n decay
channel obtained with the Pb and Al targets are represented in
Fig. <ref>. Above 10 MeV, the decay of ^26Ne is expected
to occur mainly by 2-neutron emission. Between 8 and 10 MeV, a sizable
amount of cross-section is observed for both targets. In intermediate
energy inelastic scattering with a heavy target such as Pb, the
spectrum is dominated by Coulomb excitation of E1 states. Conversely,
the dipole excitation is relatively low with a light target such as
Al. The contribution of possible E2 excitation to the spectrum
obtained with the lead target has been determined by using data taken
with the aluminum target and the coupled channels ecis97
code <cit.>. Assuming a simple collective vibrational mode
with equal nuclear and Coulomb deformation lengths, the E2 nuclear and
Coulomb deformation parameters were extracted from the measured
cross-section with the Al target (σ_Al =
9.1±2.3 mb). The corresponding L=2 cross section in lead was then
calculated using the deformation lengths extracted in the previous
step.
After subtraction of this E2 contribution, the resulting
σ_Pb^L=1=48.5±4.8 mb cross section corresponds to
a Coulomb deformation parameter β_C=0.087±0.008 which
leads to B(E1)=0.55±0.05 e^2fm^2 through the following relation with
the Coulomb radius R_C:
B(E1;0^+→1^-)=(3/(4π) Z_p e R_C β^L=1_C)^2,
where Z_p is the projectile proton number. This value of reduced
transition probability corresponds to 5.5±0.6% of the TRK sum
rule for an excitation energy of 9 MeV.
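As a back-of-the-envelope check of the quoted value, the relation above can be evaluated with an assumed Coulomb radius R_C = 1.2 A^(1/3) fm (this radius convention is our assumption; it is not specified in the text):

import math

Z_p, A = 10, 26                          # 26Ne projectile
beta_C = 0.087                           # Coulomb deformation parameter quoted above
R_C = 1.2 * A ** (1.0 / 3.0)             # assumed Coulomb radius in fm
B_E1 = (3.0 / (4.0 * math.pi) * Z_p * R_C * beta_C) ** 2
print(round(B_E1, 2))                    # ~0.55 e^2 fm^2, consistent with the quoted value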
The high granularity of the present setup allows us to reconstruct the
scattering angular distribution for ^26Ne on the ^208Pb target
(Fig. <ref>) and to extract the E1 excitation
by means of a multipole decomposition analysis. The L=1 and L=2
angular distributions (dotted and dashed lines) were obtained from
simulations based on ecis97 angular distribution calculated for
E^*=9 MeV. The data were fitted with a linear combination of the two
distributions since L=2 and L>2 distributions were found to
exhibit similar shapes. The result of the fit gives B(E1) =
0.49±0.16 e^2fm^2, which corresponds to 4.9±1.6% of the TRK
sum rule around 9 MeV excitation energy. Assuming that the remaining
part of the contribution is due to L=2 excitation, we obtain
B(E2↑) = 49±8 e^2fm^4. The two methods to extract the E1
component
thus provide very consistent results. A third method presented in
<cit.> also leads to the same conclusion. This shows that,
within the error bars, possible contribution of modes such as
isoscalar L=1 states to the data taken with the Al target does not
affect the final result on B(E1).
Theoretical calculations have been performed within various
frameworks. Using the relativistic QRPA (RQRPA) and the response
function formalism, Cao and Ma <cit.> predict an E1 pygmy state
centered around 8.4 MeV and exhausting 4.5% of the TRK sum rule,
close to our experimental values. RQRPA calculations for axially
deformed nuclei also report a pygmy state below 10 MeV excitation
energy <cit.>. No corresponding percentage of TRK sum
rule is reported. Similarly, a redistribution of the strength with a
peak at low energy is also predicted by (non-relativistic) deformed
QRPA calculations using Gogny forces <cit.> and Skyrme
forces <cit.>. All these calculations agree on the
presence of a structure at low excitation energy in the E1 response
function. In order to get deeper insight into the microscopic
structure of these states, one can examine the dominant configurations
involved, which can be extracted from the calculations of
refs. <cit.> performed in
the matrix formalism. In the first two calculations, the dominant
transitions are found to be 2s_1/2→2p_3/2 and/or
2s_1/2→2p_1/2, corresponding to the promotion of neutrons
essentially from the last occupied orbit of ^26Ne to fp shells.
It is well-known that the decay pattern of continuum states can give
access to the components of the wave-function of these states. The
excitation energy reconstruction method used in the present experiment
<cit.> allows us to extract for the first time data on the
decay of pygmy resonances of neutron-rich nuclei. The experimental
branching ratios to bound states of ^25Ne are presented in
Table <ref>. For both Pb and Al targets, the branching
ratio for the decay to the ground-state (g.s.) of ^25Ne is
compatible with zero. By using the simulation code, it was checked
that any decay pattern including a sizable branch to the g.s. is
incompatible with the experimental results. The large difference
between branching ratios obtained with the two targets proves that
states of different nature have been excited. We then deduced the
L=1 and L=2 components on the Pb target by assuming that the Al
target induces only a L ≥ 2 excitation, as described above. For
comparison, we performed a statistical decay calculation assuming
L=1,2,3 emitting states using the cascade code
<cit.>, spins and parities of populated states in
^25Ne being those listed in Table <ref>. The decay
is not statistical, as observed in light nuclei <cit.>. Since
the g.s. of ^25Ne has J^π=1/2^+, the decay to this state becomes
weaker with increasing spin of the emitting state due to penetrability
effects.
A striking feature of the observed decay pattern is the absence of
decay to the ^25Ne g.s., which is in contradiction with the
predicted structure of the pygmy states. Indeed, it is established
that the ^25Ne g.s. configuration mainly corresponds to a neutron in
the 2s_1/2 orbit, the experimental spectroscopic factor obtained
from the ^24Ne(d,p) reaction being 0.8 <cit.>. If the
main configuration of the pygmy state were actually
ν(2s_1/2^-1 2p_3/2) or ν(2s_1/2^-1 2p_1/2), a
strong decay to the ^25Ne g.s. should occur, even favoured by
penetrabilities. This discrepancy indicates that the populated pygmy
states are more mixed and/or involve different
transitions. Interestingly, calculations reported in
<cit.> predict a dominant contribution of the
K^π=1^- state, with nearly equal weights of
ν(2s_1/2^-12p_1/2) and ν(1d_5/2^-11f_7/2)
transitions which is in better qualitative agreement with our data.
Theoretical branching ratios, presently not available, are highly
desirable for a more precise comparison. We note that they could also
be obtained from shell-model calculations by combining the
single-particle spectroscopic factors and penetrability coefficients.
In summary, the present study of the neutron-rich nucleus ^26Ne using
intermediate energy inelastic scattering has shown the presence of
pygmy states located around 9 MeV excitation energy. The contribution
of E1 states corresponds to nearly 5% of the TRK sum rule. These
global features are in agreement with self consistent mean-field
calculations performed in various frameworks. The decay pattern of
the observed pygmy states has been measured for the first time,
providing a stringent test of the microscopic models describing the
wave function of these states. The measured decay pattern is not
consistent with models predicting a structure corresponding to
excitations of neutrons from the Fermi surface. Making use of the new
facilities RIBF and Big RIPS, future studies will investigate nuclei
located even further from stability.
tanihata:halo
I. Tanihata et al., Phys. Rev. Lett. 55, 2676 (1985).
suzuki:pdg
Y. Suzuki et al., Prog. Theor. Phys. 83, 180 (1990).
isacker:pdg
P. Van Isacker, M. A. Nagarajan, and D. D. Warner, Phys. Rev. C 45, R13 (1992).
Goriely:2004
S. Goriely et al., Nucl. Phys. A 739, 331 (2004).
Klim:2007:nskin
A. Klimkiewicz et al., Phys. Rev. C 76, 051603(R) (2007).
tanihata:ppnp
I. Tanihata, Progr. Part. Nucl. Phys. 35, 505 (1995), and refs. therein.
Fukuda:2004ty
N. Fukuda et al., Phys. Rev. C 70, 054606 (2004).
leisten:oxygens:gdr
A. Leistenschneider et al., Phys. Rev. Lett. 86, 5442 (2001).
adrich:sn:pdr
P. Adrich et al., Phys. Rev. Lett. 95, 132501 (2005).
paar:prc67
N. Paar, P. Ring, T. Nikšić, and D. Vretenar, Phys. Rev. C 67, 034312 (2003).
Sarchi04:dipole
D. Sarchi et al., Phys. Lett. B 601, 27 (2004).
ma:prc
L.-G. Cao and Z.-Y. Ma, Phys. Rev. C 71, 034305 (2005).
vdw:book
M. N. Harakeh and A. van der Woude, Giant Resonances: Fundamental High-Frequency Modes of Nuclear Excitation (Oxford University Press, 2001).
kubo:rips
T. Kubo et al., Nucl. Inst. and Meth. B 70, 309 (1992).
takesato:dali2
S. Takeuchi et al., RIKEN Accel. Prog. Rep. 36, 148 (2002).
fukuda:thesis
N. Fukuda, Ph.D. thesis, University of Tokyo (2004), and refs. therein.
brun:geant3
R. Brun et al., CERN DD/EE/84-1 (1986).
beaumel:ne20
T. Suomijarvi et al., Nucl. Phys. A 491, 314 (1989).
gibelin:26ne:be2:
J. Gibelin et al., Phys. Rev. C 75, 057306 (2007).
gibelin:phd
J. Gibelin, Ph.D. thesis, Paris XI University (2005), <http://hal.in2p3.fr/docs/00/05/88/11/PDF/thesis_gibelin.pdf>.
gibelin:comex02
J. Gibelin et al., Nucl. Phys. A 788, 153c (2007).
ecis:97
J. Raynal, unpublished (1997).
arteaga2007rqr
D. Arteaga and P. Ring, Prog. Part. Nucl. Phys. 59, 314 (2007), and private communication.
Peru:2007
S. Peru et al., Nucl. Phys. A 788, 44 (2007).
Yoshida:2008:arx
K. Yoshida and N. V. Giai, arXiv:0802.1687 [nucl-th] (2008).
Puhlhofer77:cascade
F. Puhlhofer, Nucl. Phys. A 280, 267 (1977).
Fernandez2007
B. Fernandez-Dominguez et al., Prog. Part. Nucl. Phys. 59, 389 (2007).
|
http://arxiv.org/abs/2307.03985v1 | 20230708142437 | Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT | [
"Somsawat Rattanasoon",
"Eugene Semenko",
"David Mkrtichian",
"Saran Poshyachinda"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.IM"
] |
National Astronomical Research Institute of Thailand (Public Organization)
260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand
Spectroscopic Devices for Asteroseismology With Small Telescopes in NARIT
Saran Poshyachinda
=========================================================================
National Astronomical Research Institute of Thailand (NARIT) operates a network of small telescopes installed worldwide. These telescopes serve educational and research purposes and are equipped mainly with CCD detectors for direct imaging and photometry. To extend the possible field of applications, several telescopes were fitted with commercially available medium-resolution eShel spectrographs from Shelyak. With these devices, researchers at NARIT obtained a versatile tool for stellar spectroscopy. Here we describe the current status of the available equipment and possible ways of upgrading it, and briefly introduce results achieved in the asteroseismic study of fast-rotating stars.
§ MOTIVATION
A fibre-fed medium-resolution echelle spectrograph eShel has been designed and distributed for small telescopes by Shelyak Instruments (France) since 2008 <cit.>. A typical device consists of a stationary spectrograph block linked by a fibre with 50 μm core to the Fibre Injection and Guiding Unit (FIGU) installed at the telescope side. FIGU is also connected through a 200-μm fibre channel to the Calibration Unit comprising halogen, LED, and ThAr lamps. Spectrograph and its components are commercially available on the company's website <https://www.shelyak.com/>.
Earlier models of eShel registered spectra within the wavelength range 430–700 nm with the resolution R > 10,000. In 2018, after the upgrade, which affected many components of eShel, the working range was significantly extended.
NARIT has a distributed network of small telescopes with apertures up to 1 m. For the spectroscopy of relatively bright stars, these telescopes can optionally be equipped with eShel. At the moment, NARIT has three devices with serial numbers 6H-115 (2010), 6H-128 (2016), and 6H-171 (2018). All spectrographs were acquired in their original configuration and thus have limited capabilities. To enable observations of fainter objects and to increase the sensitivity in the blue part of the spectrum, we initiated a substantial upgrade of the device with SN 6H-171.
§ MODIFICATION AND TESTS
The improved device received a new high-OH fibre with enhanced throughput in the blue part of the spectrum, a new doublet collimator (Shelyak provided both components), a new imaging lens, and a professional-grade CCD. All components, except fibre, are shown in Fig. <ref>.
As a detector, we use a water-cooled Andor iKon-L system based on a 2048×2048 pixel CCD array with 13.5 μm pixel pitch. To match the plate scale to the increased pixel size, among several lenses with comparable focal lengths available on the market we chose the commercial Sony FE 135 mm F1.8 GM lens, primarily for its outstanding optical quality. Subsequent testing of the whole assembly also showed excellent transmission of the selected lens within the required wavelength range. The imaging lens is attached to the CCD camera through a specially designed adapter with an enclosed shutter.
Technical parameters of the original and upgraded versions of eShel are summarized in Table <ref>.
An upgraded variant of the spectrograph was installed for tests in a spectrograph room of the Thai National Observatory (TNO) at Doi Inthanon (Chiang Mai, Thailand) in a temperature-controlled environment. The FIGU was mounted on the left Nasmyth port of the 1-m telescope of TNO. Tests were performed in December 2022 and January 2023 under acceptable weather conditions and were aimed at verifying the optical performance of the assembly. The observational data include a standard set of calibrations (bias, flat, ThAr) and spectra of selected stars and the daytime sky. Two-dimensional raw FITS images were reduced using the PyYAP pipeline (<http://github.com/ich-heisse-eugene/PyYAP>), specially adapted to the new device.
§ RESULTS
Test images taken with the upgraded device showed noticeable aberrations arising from misaligned optical elements of the spectrograph. As this problem appeared in the direction perpendicular to the dispersion, it influenced the overall throughput and the level of scattered light. Still, it did not affect the spectral resolution or the transmission of the device. We therefore leave the evaluation of the total throughput and stability for future work and concentrate here primarily on studying these unaffected characteristics.
§.§ Transmission
Analysis of the observational data revealed significantly improved spectrum quality due to better control of aberrations and the enhanced transmission of the Sony lens. In the images, the point spread function remains nearly stable across the field of view in the 380–850 nm wavelength range. As a result, the shortwave limit of the working spectral range has been extended by 70 nm, from 450 nm to 380 nm. In the infrared, the working range of the current setup is limited to 900 nm. In Fig. <ref>, we show four samples of the observed spectrum of the daytime sky.
§.§ Resolving power
The resolving power of the modified eShel was evaluated by fitting Gaussian functions to emission lines of the ThAr spectrum. This procedure is implemented as a standard step of the processing in PyYAP.
Inspection of the ThAr spectra showed that the focus of the imaging camera remained stable during all observing nights. Within the spectrograph's working wavelength range, the resolving power R = λ/Δλ varied from 10,000 to 12,500, with a median R = 11,700 evaluated from 355 lines in a single image. The resolving power does not vary significantly between nights: the full width at half maximum (FWHM) of the mean ThAr line equals 3.7 pixels, close to the optimal sampling.
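A minimal sketch of this estimate (array names and initial guesses are placeholders; this is not the PyYAP implementation):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def resolving_power(wavelength, flux):
    """Fit a single ThAr emission line and return R = lambda / FWHM."""
    p0 = [flux.max() - flux.min(), wavelength[np.argmax(flux)],
          0.1 * (wavelength[-1] - wavelength[0]), flux.min()]
    popt, _ = curve_fit(gaussian, wavelength, flux, p0=p0)
    fwhm = 2.3548 * abs(popt[2])          # FWHM = 2 sqrt(2 ln 2) * sigma
    return popt[1] / fwhm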
§ SCIENTIFIC APPLICATION
A medium-resolution fibre-fed spectrograph, in combination with a 1-meter class telescope, can be a powerful instrument for the spectroscopy of relatively bright sources. The literature contains many examples of the use of eShel in stellar physics and in the physics of Solar system objects. Due to its compact design and high positional stability, this spectrograph has even been used for observations of extrasolar planets. Indeed, the accuracy of the radial velocity measurements reported in <cit.>, <cit.>, and <cit.> was better than 100 m s^-1 for stars brighter than 11 magnitudes with exposure times under one hour. Such characteristics enable the detection and observation of hot Jupiters around the brightest stars. <cit.> also gave an example of how to use eShel for observations of pulsating stars (Cepheids).
The proposed upgrade opens new perspectives for the family of small telescopes in NARIT, as we have several spectrographs which, after the improvement, can be installed at any of our telescopes. In this way, it becomes possible to move part of the scientific proposals aimed at studying exoplanets, active solar-like stars, and binary and multiple stars from the main 2.4-m Thai National Telescope to smaller instruments without losing efficiency or observing time. However, the main stimulus for this technical work was the possibility of using this device for asteroseismology of the brightest fast-rotating pulsating stars.
To demonstrate the efficiency of eShel in asteroseismological observations, we show in Fig. <ref> an example of non-radial pulsations discovered in a 4-magnitude fast-rotating star. A typical pattern of waves propagating across the averaged spectral profile is shown in the left panel of Fig. <ref>. The right panel shows the 2D periodogram used for the identification of the pulsation frequencies. In this example, the star was observed continuously with short exposures for more than five hours with the original version of eShel on the 1-m telescope of NARIT. The upgraded version of the spectrograph will allow us to increase the signal-to-noise ratio (SNR) of the observational data and thus expand the number of potential targets, or to increase the temporal resolution of the data with shorter exposure times while preserving the same level of SNR.
§.§.§ ORCID identifiers of the authors
0000-0002-1912-1342Eugene Semenko
0000-0001-5094-3910David Mkrtichian
§.§.§ Author contributions
SR, ES, and DM are responsible for formulating the project, its technical implementation, and carrying out the observations. ES and DM are responsible for data reduction and analysis. SP contributed to the project administration. All authors equally contributed to the text of the article.
§.§.§ Conflicts of interest
The authors declare no conflict of interest.
|
http://arxiv.org/abs/2307.07313v1 | 20230714124659 | HEAL-SWIN: A Vision Transformer On The Sphere | [
"Oscar Carlsson",
"Jan E. Gerken",
"Hampus Linander",
"Heiner Spieß",
"Fredrik Ohlsson",
"Christoffer Petersson",
"Daniel Persson"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
HEAL-SWIN: A Vision Transformer On The Sphere
========================================================
High-resolution wide-angle fisheye images are becoming more and more important for robotics applications such as autonomous driving. However, using ordinary convolutional neural networks or vision transformers on this data is problematic due to projection and distortion losses introduced when projecting to a rectangular grid on the plane. We introduce the HEAL-SWIN transformer, which combines the highly uniform Hierarchical Equal Area iso-Latitude Pixelation (HEALPix) grid used in astrophysics and cosmology with the Hierarchical Shifted-Window (SWIN) transformer to yield an efficient and flexible model capable of training on high-resolution, distortion-free spherical data. In HEAL-SWIN, the nested structure of the HEALPix grid is used to perform the patching and windowing operations of the SWIN transformer, resulting in a one-dimensional representation of the spherical data with minimal computational overhead. We demonstrate the superior performance of our model for semantic segmentation and depth regression tasks on both synthetic and real automotive datasets. Our code is available at <https://github.com/JanEGerken/HEAL-SWIN>.
§ INTRODUCTION
Figure: Our HEAL-SWIN model uses the nested structure of the HEALPix grid to lift the windowed self-attention of the SWIN model onto the sphere.
High-resolution fisheye cameras are among the most common and important sensors in modern intelligent vehicles <cit.>. Due to their non-rectilinear mapping functions and large field of view, fisheye images are highly distorted. The traditional approach for dealing with this kind of data is nevertheless to use standard (flat) convolutional neural networks which are adjusted to the distortions introduced by the mapping function and either preprocess the data <cit.> or deform the convolution kernels <cit.>. However, these approaches struggle to capture the inherent spherical geometry of the images since they operate on a flat approximation of the sphere. Errors and artifacts arising from handling the strong and spatially inhomogeneous distortions are particularly problematic in safety-critical applications such as autonomous driving. Furthermore, certain downstream tasks like 3D scene understanding inherently require spherical information, such as the angle under which an object is detected, which needs to be extracted from the predictions of the model.
In this paper we show that projecting spherical data to the plane and learning on distorted data, only to project the resulting predictions back onto the sphere, introduces considerable performance losses. We demonstrate this for standard computer vision tasks from the autonomous driving domain. We therefore argue that, for applications that require a very precise 3D scene understanding, computer vision models that operate on fisheye images should be trained directly on the sphere in order to avoid the significant losses associated with flat plane projections.
Optimizing on the sphere is an approach taken by some models <cit.>, which lift convolutions to the sphere. These models rely on a rectangular grid in spherical coordinates, namely the Driscoll–Healy grid <cit.>, to perform efficient Fourier transforms. However, this approach has several disadvantages for high-resolution data: First, the sampling in this grid is not uniform but much denser at the poles, necessitating very high bandwidths to resolve fine details around the equator. Second, the Fourier transforms in the aforementioned models require tensors in the Fourier domain of the rotation group SO(3) which scale with the third power of the bandwidth, limiting the resolution. Third, the Fourier transform is very tightly coupled to the input domain: If the input data lies only on a half-sphere as in the case of fisheye images, the definition of the convolutional layers would need to be changed to use this data efficiently.
As a novel way of addressing all of these problems at the same time, we propose to combine an adapted vision transformer with the Hierarchical Equal Area iso-Latitude Pixelisation (HEALPix) grid <cit.>. The HEALPix grid was developed for capturing the high-resolution measurements of the cosmic microwave background performed by the MAP and PLANCK satellites and features a uniform distribution of grid points on the sphere which associates the same area to each pixel. This is in contrast to most other grids used in the literature, such as the Driscoll–Healy grid or the icosahedral grid. Furthermore, due to its widespread use in the astrophysics and cosmology communities, efficient C++ implementations of the relevant computations are readily available through a Python package <cit.>.
In our model, which we call HEAL-SWIN, we use a modified version of the Hierarchical Shifted-Window (SWIN) transformer <cit.> to learn directly on the HEALPix grid. The SWIN transformer performs attention over blocks of pixels called windows which aligns well with the nested structure of the HEALPix grid (Figure <ref>). To distribute information globally, the SWIN transformer shifts the windows in every other layer, creating overlapping regions. We propose two different strategies for shifting windows in the HEALPix grid: Either aligned with the hierarchical structure of the grid or in a spiral from one pole to the other.
Besides their excellent performance, an additional benefit of using a transformer model is that the attention layers do not require a Fourier transform and can therefore easily deal with high-resolution data and with data that covers only part of the sphere, such as images captured by fisheye cameras, resulting in significant efficiency gains. However, our model is not specific to fisheye images but can be trained on any high-resolution spherical data, as provided for instance by satellites mapping the sky or the earth.
In order to verify the efficacy of our proposed model, we train HEAL-SWIN on computer vision tasks from the autonomous driving domain, namely depth estimation and semantic segmentation of fisheye camera images. In this domain, high-resolution input images and very precise outputs in terms of 3D information are critical. We show that it is beneficial to optimize on the sphere instead of projecting predictions in the plane onto the sphere to extract the 3D spatial information. To the best of our knowledge, we are the first to treat fisheye images in automotive applications as spherical signals.
We perform fisheye-image-based depth estimation and
compare the resulting predicted 3D point clouds to the corresponding ground truth point clouds.
In this way, we can assess how well our model captures the 3D information needed for downstream tasks. We show that our model outperforms the corresponding flat model (SWIN transformer) in this setting on the SynWoodScape dataset <cit.> of computer-generated fisheye images of street scenes (Figure <ref>). For the semantic segmentation task, we confirm the superior performance of our model on both the real-world WoodScape dataset <cit.> and the SynWoodScape dataset when performance is measured on the sphere (Figure <ref>). Finally, we investigate the performance advantage of our model for different amounts of data by training our segmentation models on subsets of SynWoodScape.
Our main contributions are as follows:
* We argue that in applications involving spherical data, it is essential to take the domain of downstream tasks into account when designing the network architecture. Therefore, we propose to leverage the inherent geometry in the data by training directly on the sphere in cases like depth estimation, where 3D spatial information is required.
* We construct the HEAL-SWIN model for high-resolution spherical data, which combines the spherical HEALPix grid with an adapted SWIN transformer by exploiting their similar hierarchical structures. In particular, we construct windowing and shifting mechanisms for the HEALPix grid which efficiently deal with data that only covers part of the sphere.
* We consider fisheye images in automotive applications for the first time as distortion-free spherical signals. We demonstrate the superiority of this approach for depth estimation and semantic segmentation on both synthetic and real automotive datasets.
§ RELATED WORK
The transformer architecture was introduced in <cit.>, extended to the Vision Transformer (ViT) in <cit.>, and further refined in <cit.> to the Shifted-Window (SWIN) transformer based on attention in local windows combined with window shifting to account for global structure. A first step towards spherical transformer models has been proposed in <cit.> where various spherical grids are used to extract patches on which the ViT is applied, and in <cit.> where icosahedral grid sampling is combined with the Adaptive Fourier Neural Operator (AFNO, see <cit.>) architecture for spatial token mixing to account for the geometry of the sphere. Compared to previous spherical transformer designs, such as <cit.>, our model constitutes a significant improvement by incorporating local attention and window shifting to accommodate high-resolution images in the spherical geometry, without requiring the careful construction of accurate graph representations of the spherical geometry.
The SWIN paradigm has been incorporated into transformers operating on 3D point clouds (e.g. LiDAR depth data) by combining voxel based models (see <cit.>) with sparse <cit.> and stratified <cit.> local attention mechanisms. In <cit.>, spherical geometry is used to account for long range interactions in the point cloud to create a transformer based on local self-attention in radial windows. Closer in spirit to our approach is <cit.>, where the point cloud is projected to the sphere, and partitioned into neighborhoods. Local self-attention and patch merging is then applied to the corresponding subsets of the point cloud, and shifting of subsets is achieved by rotations of the underlying sphere. Our HEAL-SWIN model inherits windowing and shifting from the SWIN architecture, but in contrast to the point cloud based approaches we handle data native to the sphere and use the HEALPix grid to construct a hierarchical sampling scheme which minimizes distortions, allows for the handling of high-resolution data, and, in addition, incorporates new shifting strategies specifically adapted to the HEALPix grid.
Several Convolutional Neural Network (CNN) based models have been proposed to accommodate spherical data. As mentioned above, <cit.> use an equirectangular grid and implement convolutions in Fourier space, while <cit.> applies CNNs directly to the HEALPix partitions of the sphere. Other works that consider spherical CNNs combined with the HEALPix grid include <cit.>.
Compared to previous works combining CNNs and the HEALPix grid, the transformer architecture equips our model with the ability to efficiently encode long-range interactions.
§ HEAL-SWIN
We propose to combine the SWIN-transformer <cit.> with the HEALPix grid <cit.> resulting in the HEAL-SWIN-transformer which is capable of training on high-resolution images on the sphere. In this section, we describe the structure of the HEAL-SWIN model in detail.
§.§ Background
The SWIN-transformer is a computationally efficient vision transformer which attends to windows that are shifted from layer to layer, enabling a global distribution of information while mitigating the quadratic scaling of attention in the number of pixels.
In the first layer of the SWIN-transformer, squares of pixels are joined into tokens called patches to reduce the initial resolution of the input images. Each following SWIN-layer consists of two transformer blocks which perform attention over squares of patches called windows. The windows are shifted along the patch-grid axes by half a window size, before the attention for the second transformer block is computed. In this way, information is distributed across window boundaries. To down-scale the spatial resolution, two-by-two blocks of patches are periodically merged.
An important detail in this setup is that at the boundary of the image, shifting creates partially-filled windows. Here, the SWIN-transformer fills up the windows with patches from other partially-filled windows and then performs a masked version of self-attention which does not attend to pixel pairs originating from different regions of the original image.
For the depth-estimation and segmentation tasks we consider in this work, we use a UNet-like variant of the SWIN-transformer <cit.> which extends the encoding layers of the original SWIN-transformer by corresponding decoding layers connected via skip connections. The decoding layers are identical to the encoding layers, only the patch merging layers are replaced by patch expansion layers which expand one patch into a two-by-two block of patches such that the output of the entire model has the same resolution as the input.
In the HEAL-SWIN-transformer, the patches are not associated with an underlying rectangular pixel grid as in the original SWIN-transformer, but with the HEALPix grid on the sphere. The HEALPix grid (Figure <ref>) is constructed from twelve equal-area quadrilaterals of different shapes which tessellate the sphere and are subdivided n_side times along their edges to yield a high-resolution partition of the sphere into 12·n_side^2 equal-area, iso-latitude quadrilaterals, as illustrated in Figure <ref>. To allow for a nested (hierarchical) grid structure, n_side needs to be a power of two. The pixels of the grid are then placed at the centers of the quadrilaterals. The resulting positions are sorted in a list either in the nested ordering descending from the iterated subdivisions of the base-resolution quadrilaterals or in a ring ordering which follows rings of equal latitude from one pole to the other. Given this data structure, we use a one-dimensional version of the SWIN-UNet which operates on these lists. For retrieving the positions of the HEALPix pixels at a certain resolution, translating between the nested and ring indexing and interpolating in the HEALPix grid, we use the Python package <cit.>.
Since for our experiments, we consider images taken by fisheye cameras which cover only half of the sphere, we use a modification of the HEALPix grid, where we only use the pixels in eight out of the twelve base-resolution quadrilaterals which we will call base pixels. These cover approximately half of the sphere and allow for an efficient handling of the input data, in contrast to many methods used in the literature which require a grid covering the entire sphere. The restriction to the first eight base pixels is performed by selecting the first 8/12 of the entries in the HEALPix grid list in nested ordering.
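As a concrete illustration of this data structure, the following minimal Python sketch (assuming the standard healpy package, which implements the HEALPix routines referenced above; the variable names are ours) builds the nested-ordered pixel list for the half-sphere and retrieves the spherical angles of the pixel centres:

    import numpy as np
    import healpy as hp

    nside = 256                        # resolution parameter, a power of two
    npix_full = hp.nside2npix(nside)   # 12 * nside**2 pixels on the full sphere
    npix_half = 8 * nside ** 2         # first eight base pixels, roughly half the sphere

    # In nested ordering, the first 8/12 of the pixel list covers the eight base pixels.
    pix = np.arange(npix_half)

    # Co-latitude theta and azimuth phi of the pixel centres.
    theta, phi = hp.pix2ang(nside, pix, nest=True)

    # Conversion between nested and ring ordering (used for the spiral shifting below).
    ring_index = hp.nest2ring(nside, pix)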
§.§ Patches and windows
The nested structure of the HEALPix grid aligns very well with the patching, windowing, patch-merging and patch-expansion operations of the SWIN-UNet model. Correspondingly, the modifications to the SWIN transformer amount to making it compatible with the one-dimensional structure of the HEALPix grid instead of the two-dimensional structure of the original rectangular pixel grid, at minimal computational overhead.
The input data is provided in the nested ordering described in Section <ref> above. Then, the patching of the input pixels amounts to joining n consecutive pixels into a patch, where n is a power of four. Due to the nested ordering and the homogeneity of the HEALPix grid, the resulting patches cover quadrilateral areas of the same size on the sphere. Similarly, to partition patches into windows over which attention is performed, n consecutive patches are joined together, where n is again a power of four. The patch merging layers for downscaling and the patch expansion layer for upscaling operate in the same spirit on n=4^k consecutive patches in the HEALPix list.
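Because the nested list is one-dimensional, patching, windowing and patch merging reduce to simple reshapes of consecutive entries. The sketch below illustrates this with PyTorch tensors; the shapes and helper names are ours and not the paper's exact implementation:

    import torch

    # features: (batch, num_pixels, channels), pixels in nested HEALPix ordering
    def make_patches(features, patch_size=4):
        # join patch_size consecutive pixels (a power of four) into one patch token
        b, n, c = features.shape
        return features.reshape(b, n // patch_size, patch_size * c)

    def partition_windows(patches, window_size=64):
        # group window_size consecutive patches (a power of four) into one attention window
        b, n, c = patches.shape
        return patches.reshape(b, n // window_size, window_size, c)

    def merge_patches(patches):
        # downscaling: merge four consecutive patches, the analogue of 2x2 merging on the plane
        b, n, c = patches.shape
        return patches.reshape(b, n // 4, 4 * c)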
§.§ Shifting
As mentioned above, to distribute information globally in the image, the SWIN transformer shifts the windows by half a window size along both image axes in every second attention layer. We have experimented with two different ways of performing the shifting in the HEALPix grid.
The most direct generalization of the shifting in the pixel grid of the original SWIN-transformer is a shifting in the HEALPix grid along the axes of the quadrilaterals of the base-resolution pixels, cf. Figure <ref> (left). We call this grid shifting. Similarly to the original SWIN shifting scheme, there are boundary effects at the edge of the half sphere covered by the grid. Additionally, due to the alignment of the base pixels relative to each other, the shifting necessarily clashes at some base-pixel boundaries in the bulk of the image. As in the original SWIN transformer, both of these effects are handled by reshuffling the problematic pixels to fill up all windows and subsequently masking the attention mechanism to not attend to pixel pairs which originate from different regions of the sphere.
In the spiral shifting scheme, we first convert the nested ordering into a ring ordering and then perform a roll operation on that list. Finally, we convert back to the nested ordering. In this way, windows are shifted along the azimuthal angle, with slight distortions which grow larger towards the poles due to the decreased length of circles of constant latitude, cf. Figure <ref> (right). As in the grid shifting scheme, we encounter boundary effects. In the spiral shifting, they occur at the pole and at the boundary of the half sphere covered by the grid. These effects are again handled by reshuffling the pixels and masking the attention mechanism appropriately. In this scheme, there are no boundary effects in the bulk of the image.
Both shifting strategies can be implemented as precomputed indexing operations on the list holding the HEALPix features and are therefore efficient. In ablation studies we found that the spiral shifting outperforms the grid shifting slightly.
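A minimal sketch of how the spiral shifting can be precomputed as an index map is given below (assuming healpy for the nested/ring conversion; in practice the map would be computed on the patch-level grid resolution rather than on raw pixels):

    import numpy as np
    import healpy as hp

    def spiral_shift_map(nside, npix, shift=4):
        """Precompute an index map realising the spiral shift on a nested-ordered list."""
        nest = np.arange(npix)
        ring = hp.nest2ring(nside, nest)     # ring-scheme index of every nested pixel
        order = np.argsort(ring)             # nested indices sorted along rings of latitude
        rolled = np.roll(order, shift)       # roll along the azimuthal (spiral) ordering
        shift_map = np.empty(npix, dtype=np.int64)
        shift_map[order] = rolled
        return shift_map                     # shifted = features[shift_map] for nested features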
§.§ Relative position bias
In the SWIN-transformer, an important component that adds spatial information is the relative position bias which is added to the query-key product in the attention layers. This bias is a learned value which depends only on the difference vector between the pixels in a pixel pair, i.e. all pixel pairs with the same relative position receive the same bias contribution.
In the HEALPix grid, the pixels inside each base quadrilateral are arranged in an approximately rectangular grid which we use to compute the relative positions of pixel pairs for obtaining the relative position bias mapping. We share the same relative position bias table across all windows, so in particular also across base quadrilaterals.
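In the nested scheme, the index of a pixel within its base quadrilateral interleaves the bits of its two grid coordinates, so the approximately rectangular positions can be recovered by de-interleaving. The sketch below shows one way to do this; whether the recovered axes match the exact convention used in the paper's implementation is our assumption:

    import numpy as np

    def nested_to_xy(pix, nside):
        """Decode (x, y) inside the base quadrilateral from a nested pixel index."""
        local = np.asarray(pix) % nside ** 2            # index within the base pixel
        x = np.zeros_like(local)
        y = np.zeros_like(local)
        for bit in range(int(np.log2(nside))):
            x |= ((local >> (2 * bit)) & 1) << bit      # even bits -> first coordinate
            y |= ((local >> (2 * bit + 1)) & 1) << bit  # odd bits -> second coordinate
        return x, y                                     # differences give relative positions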
We also experimented with an absolute position embedding after the patch embedding layer but observed no benefit for performance.
§ EXPERIMENTS
To verify the performance of our model, we trained the HEAL-SWIN and the SWIN transformer on challenging realistic datasets of fisheye camera images from the autonomous driving domain. We show that our model reaches better predictions for semantic segmentation and depth estimation on both datasets we tried.
§.§ Datasets
We evaluate our models using the SynWoodScape <cit.> dataset of 2000 fisheye images from synthetic street scenes generated using the driving simulator CARLA <cit.> and the real-world WoodScape dataset <cit.> consisting of 8234 fisheye images recorded in various environments in the US, Europe and China. We use the pixel-wise classification labels contained in both datasets for semantic segmentation and the pixel-wise depth values in SynWoodScape for depth regression. For SynWoodScape, the ground truth labels are pixel-perfect and in particular provide dense depth maps, allowing for a clean comparison of model performances. For WoodScape however, we noticed inconsistencies in the semantic labels and no dense depth values are available. Further details can be found in Appendix <ref>.
In both datasets, the images are presented as flat pixel grids together with calibration data which allows for a projection onto the sphere. In order to project the data to the HEALPix grid, we use the <cit.> package to get the azimuthal and polar angles of the grid points inside the eight (out of twelve) base pixels which roughly cover half of the sphere. These angles are then projected to the plane using the calibration information and the data is resampled to the projected points. For the images, we use bilinear interpolation for the resampling, while the ground truths are resampled with a nearest neighbor interpolation.
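A sketch of this resampling step is given below. The calibration model differs between datasets, so project_to_image is a placeholder for the dataset-specific mapping from spherical angles to flat pixel coordinates; the helper names and the use of scipy for the interpolation are our assumptions:

    import numpy as np
    import healpy as hp
    from scipy.ndimage import map_coordinates

    def fisheye_to_healpix(image, nside, project_to_image, order=1):
        """Resample a flat (H, W, C) fisheye image onto the half-sphere HEALPix grid.

        project_to_image(theta, phi) is a placeholder for the camera calibration,
        returning (row, col) coordinates in the flat image. order=1 gives bilinear
        interpolation (images), order=0 nearest-neighbour (segmentation/depth masks).
        """
        npix = 8 * nside ** 2
        theta, phi = hp.pix2ang(nside, np.arange(npix), nest=True)
        rows, cols = project_to_image(theta, phi)
        channels = [map_coordinates(image[..., c], [rows, cols], order=order)
                    for c in range(image.shape[-1])]
        return np.stack(channels, axis=-1)              # shape: (npix, C)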
Although using only eight out of the twelve base pixel of the HEALPix grid allows for an efficient representation of the fisheye images, some image pixels are projected to regions outside of the coverage of our subset of the HEALPix grid; see hatched regions in Figure <ref>. However, the affected pixels lie at the corners of the image, making the tradeoff well worth it for the autonomous driving tasks considered here. We restrict evaluation to the eight base pixels, as detailed in section <ref> for depth estimation and section <ref> for semantic segmentation.
For the SWIN transformer, we rescale the input images to a size of 640×768, corresponding to around 492k pixels; for HEAL-SWIN, we use n_side = 256, corresponding to roughly 525k pixels.
§.§ Models
For both the depth estimation and the semantic segmentation task, we compare the performance of HEAL-SWIN on the HEALPix images to that of the SWIN transformer on the original, i.e. flat and distorted fisheye images. The architecture and training hyperparameters were fixed by ablation studies for semantic segmentation on WoodScape unless stated otherwise. Furthermore, we use the same SWIN and HEAL-SWIN models for both semantic segmentation and depth estimation, and change only the number of output channels in the final layer.
As a baseline, we use the SWIN transformer in a 12-layer configuration similar to the “tiny” configuration SWIN-T from the original paper <cit.> with a patch size of 2×2 and a window size of 8× 8, adapted to the size of our input images. Since both tasks require predictions of the same spatial dimensions as the input, we mirror the SWIN encoder in a SWIN decoder and add skip connections in a SWIN-UNet architecture <cit.>, resulting in a model of around 41M parameters. We found the improved layer-norm placement and cosine attention introduced in <cit.> to be very effective and use them in all our models.
For the HEAL-SWIN models, we use the same configurations as for the SWIN model with a (one-dimensional) patch size of 4, mirroring the 2× 2 on the flat side and a window size of 64, mirroring the 8× 8 on the flat side. For shifting, we use the spiral shifting introduced in Section <ref> with a shift size of 4, corresponding roughly to half windows. Again, we mirror the encoder and add skip connections to obtain a UNet-like architecture. Patch merging reduces the number of pixels by a factor of four leading to four windows per base pixel in the bottleneck. A table with the spatial feature dimensions throughout the network can be found in Appendix <ref>. The shared model configuration gives our HEAL-SWIN model the same total parameter count as the SWIN model.
For the semantic segmentation task we train all models on four Nvidia A40 GPUs with an effective batch size of 8 and a constant learning rate of about 9.4×10^-4. For the depth estimation task we used an effective batch size of 4 and learning rates of 5× 10^-3 and 5× 10^-5 for the HEAL-SWIN and SWIN models respectively, chosen from the best performing models after a learning rate ablation.
§.§ Depth estimation
Estimating distances to obstacles and other road users is an important task for 3d scene understanding and route planning in autonomous driving. In depth estimation, pixel-wise distance maps are predicted from camera images. For the SynWoodScape dataset, pixel-perfect ground truth depth maps are available on which we train our models, adjusted to one output channel.
The model is trained using an L_2 loss and the depth data is standardized to have zero mean and unit variance; in addition the sky is masked out during training and evaluation. In order to preserve the common high-contrast edges all resampling is done using nearest neighbor interpolation.
In order to capture the quality of 3D scene predictions of the different models, we evaluate the depth estimations in terms of point clouds. More specifically, we generate a point cloud from the ground truth depth values by computing azimuthal and polar angles for each pixel from the calibration information of the camera and scaling the corresponding vectors on the unit sphere with the depth values. An example of the resulting point cloud is displayed in Figure <ref> (right). Similarly, the SWIN predictions are transformed into a point cloud, as are the HEAL-SWIN predictions, for which we use the pixel positions in HEALPix for the spherical angles. Comparing the predicted point clouds to the full-resolution ground truth point cloud leads to an evaluation scheme which is sensitive to the 3D information in the predicted depth values, which is essential for downstream tasks.
To compare the predicted point cloud P_pred to the ground truth point cloud P_gt in a symmetrical way, we use the Chamfer distance <cit.> defined by
CD(P_pred, P_gt) = 1/|P_pred| ∑_{p ∈ P_pred} min_{p' ∈ P_gt} d(p, p')^2 + 1/|P_gt| ∑_{p ∈ P_gt} min_{p' ∈ P_pred} d(p, p')^2 ,
where d(p,p') is the Euclidean distance between p and p'. As mentioned in Section <ref>, some pixels in the corners of the image are lost when converted to the HEALPix grid. In order to ensure a fair comparison, we mask the pixels unavailable to the HEAL-SWIN model in the loss of the SWIN model and exclude them from P_pred and P_gt.
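A minimal sketch of this evaluation, assuming 1D arrays of depths and pixel angles and using a k-d tree for the nearest-neighbour terms of the Chamfer distance (our implementation choice, not necessarily the paper's), could look as follows:

    import numpy as np
    from scipy.spatial import cKDTree

    def depth_to_points(depth, theta, phi):
        # scale the unit direction of each pixel by its (predicted or ground-truth) depth
        directions = np.stack([np.sin(theta) * np.cos(phi),
                               np.sin(theta) * np.sin(phi),
                               np.cos(theta)], axis=-1)
        return depth[:, None] * directions

    def chamfer_distance(p_pred, p_gt):
        d_pred = cKDTree(p_gt).query(p_pred)[0]   # distance to nearest ground-truth point
        d_gt = cKDTree(p_pred).query(p_gt)[0]     # distance to nearest predicted point
        return np.mean(d_pred ** 2) + np.mean(d_gt ** 2)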
The resulting depth estimation Chamfer distances are plotted in Figure <ref>.[For completeness, we want to mention that one HEAL-SWIN run performed very differently from the others, with a Chamfer distance of 6.784. According to Chauvenet's criterion this run should be classified as an outlier and is therefore not included in the figure.] The results give a clear signal that the point clouds predicted by HEAL-SWIN match the ground truth point cloud better than those predicted by the SWIN transformer, indicating that the HEAL-SWIN model has indeed learned a better 3D representation of the spherical images.
§.§ Semantic segmentation
Semantic segmentation is a pixel-wise classification task for which we train HEAL-SWIN and SWIN on ground truth segmentation masks from WoodScape and SynWoodScape. For WoodScape, we train on the ten classes for which segmentation masks are directly available in the provided dataset. For SynWoodScape, we use two different subsets out of the 25 classes provided in the dataset. All excluded classes are mapped to void. In the first subset, which we call Large SynWoodScape, we train on 8 classes which cover large areas of the image, like building, ego-vehicle, road etc. obtaining a dataset which lacks a lot of fine details and hence minimizes projection effects between the flat projection and HEALPix. For the second subset, Large+AD SynWoodScape, we include further classes relevant to autonomous driving, like pedestrian, traffic light and traffic sign to create a more realistic dataset of 12 classes which also features finer details; see Figure <ref> for a sample. Further details about the datasets can be found in Appendix <ref>.
We adjust the number of output channels in the base HEAL-SWIN and SWIN models described in Section <ref> to the number of classes and train with a weighted pixel-wise cross-entropy loss. We choose the class weights w_i to be given in terms of the class prevalences n_i by w_i=n_i^-1/4.
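In PyTorch terms, this weighting amounts to the following sketch (the class counts shown are illustrative placeholders, not the dataset's actual prevalences):

    import torch
    import torch.nn as nn

    def class_weights(class_counts):
        # w_i = n_i^(-1/4), computed from the per-class pixel counts
        counts = torch.as_tensor(class_counts, dtype=torch.float32)
        return counts.pow(-0.25)

    # logits: (batch, num_classes, num_pixels), labels: (batch, num_pixels)
    criterion = nn.CrossEntropyLoss(weight=class_weights([1_000_000, 50_000, 2_500]))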
Figure: Semantic segmentation for varying training set sizes. Performance is measured as the mean intersection over union (higher is better) computed on the HEALPix grid (spherical mIoU).
As we argued in the introduction, for downstream tasks like collision avoidance, spatial information, such as the angle relative to the driving direction under which a pedestrian is detected, is essential. This information is obtained from the segmentation mask on the sphere as directly predicted by HEAL-SWIN. As summarized in Table <ref>, the HEAL-SWIN predictions are considerably more accurate than the SWIN predictions projected onto the sphere.
For reference, we also evaluated both models on the distorted segmentation masks on the flat grid, projecting the predicted HEAL-SWIN masks, as summarized in Appendix <ref>. Note that on WoodScape, we compute the mIoU in a different way from the 2021 CVPR competition <cit.> to account for the flaws in the metric used there, as discussed in more detail in Appendix <ref>. We clearly see that on the sphere, the HEAL-SWIN model performs best for all datasets, whereas SWIN loses a considerable amount of performance in the projection step onto the sphere.
We also study the effects of the size of the dataset used in training. Since we have limited data, we simulate different data set sizes by using different subsets of the training data. We train on the SynWoodScape data with the Large+AD SynWoodScape class subset. Within a run, we use exactly the same subset for both models, HEAL-SWIN and SWIN, while strictly increasing the subset when moving to a larger training set. All models are trained entirely from scratch until convergence and evaluated using the entire validation set. We find that the HEAL-SWIN model can make better use of larger training sets than the SWIN model, as the difference in performance becomes larger the more training data is used as shown in Figure <ref>.
§ CONCLUSION
High-resolution spherical data is ubiquitous in application domains ranging from robotics to climate science. Safety-critical downstream tasks based on fisheye cameras, such as collision avoidance in autonomous driving, require precise 3D spatial information to be extracted from 2D spherical images, and we therefore proposed to incorporate the geometry without projection distortions by training with images directly on the sphere. To implement this idea, we constructed the efficient vision transformer HEAL-SWIN, combining the HEALPix spherical grid with the SWIN transformer. We showed superior performance of our HEAL-SWIN model on the sphere, compared to a corresponding SWIN model on flat images, for depth estimation and semantic segmentation on automotive fisheye images.
Although showing high performance already in its present form, HEAL-SWIN still has ample room for improvement. Firstly, in the presented setup, the grid is cut along base pixels to cover half of the sphere, leaving parts of the image uncovered while parts of the grid are unused. This could be improved by descending with the boundary into the nested structure of the grid and adapting the shifting strategy accordingly. Secondly, the relative position bias currently does not take into account the different base pixels around the poles and around the equator. This could be solved by a suitable correction deduced from the grid structure. Thirdly, the UNet-like architecture we base our setup on, is not state-of-the-art in tasks like semantic segmentation. Adapting a modern vision transformer decoder head to HEALPix could boost performance even further. Finally, our setup is not yet equivariant with respect to rotations of the sphere. For equivariant tasks like semantic segmentation or depth estimation, a considerable performance boost can be expected from making the model equivariant <cit.>. In this context it would also be very interesting to investigate equivariance with respect to local transformations. This has been thoroughly analyzed for CNNs in <cit.> and a gauge equivariant transformer has been proposed in <cit.>.
We are very grateful to Jimmy Aronsson for valuable discussions and for collaborations in the initial stages of this project.
The work of O.C., J.G. and D.P. is supported by the Wallenberg AI, Autonomous Systems and Software Program
(WASP) funded by the Knut and Alice Wallenberg Foundation. D.P. is
also supported by the Swedish Research Council, and J.G. was supported by the Berlin Institute for the Foundations of Learning and Data (BIFOLD) and by the German Ministry for Education and Research (BMBF) under Grants 01IS14013A-E, 01GQ1115, 1GQ0850, 01IS18025A and 01IS18037A for part of this project.
Heiner Spieß' work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 “Science of Intelligence” – project number 390523135.
The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at C3SE partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.
Appendix to
HEAL-SWIN: A Vision Transformer On The Sphere
In this appendix to the article “HEAL-SWIN: A Vision Transformer On The Sphere”, we provide further details about the datasets and models used in the experimental section.
§ DATASETS
For our experiments we use the WoodScape <cit.> and the SynWoodScape <cit.> dataset. Note that we used the 2k samples which were published at <https://drive.google.com/drive/folders/1N5rrySiw1uh9kLeBuOblMbXJ09YsqO7I> at the time of writing, instead of the full 80k samples. In all experiments, we split the available samples randomly (but consistently across models and runs) into 80% training data and 20% validation data.
For the semantic segmentation task, we use the 10 classes for which semantic masks are provided in the WoodScape dataset and two different subsets of the 25 classes for the SynWoodScape dataset. Table <ref> shows the relation between the 25 original classes and the classes in our two subsets. See Figure <ref> for the class prevalences in the different datasets. Examples for the inconsistent semantic masks in the WoodScape dataset mentioned in the main text can be found in Figure <ref>.
In the 2021 CVPR competition for segmentation of the WoodScape dataset <cit.>, pixels which are labeled with the dominant void class in the ground truth were excluded from the mIoU used for ranking. Therefore, many teams excluded the void class from their training loss, resulting in random predictions for large parts of the image. This shortcoming was noted in <cit.>, but the evaluation score could not be changed after the competition had been published. Given these circumstances, we decided to include the void class into our training loss but exclude it from the mean over classes in the mIoU to more accurately reflect the performance of our models on the more difficult classes. However, this also means that our results cannot be directly compared to the results of the competition.
The same problem does not arise for the SynWoodScape dataset and our two variants since their class lists include all major structures in the image, leading to a much reduced prevalence of the void class. Therefore, we include all classes in the mIoU for these datasets.
§ SWIN AND HEAL-SWIN MODELS
In Table <ref> we provide further details on the spatial size of the features throughout the HEAL-SWIN model used in the experiments discussed in Section <ref>.
§ HEAL-SWIN VERSUS SWIN FOR FLAT SEGMENTATION
In Table <ref>, we show the results of evaluating the segmentation models discussed in Section <ref> on the plane. In this case, the HEAL-SWIN predictions are projected onto the pixel grid of the SWIN predictions before evaluation. To ensure a fair comparison, the flat mIoU is calculated on a masked region of this grid, removing pixels which lie outside of the (restricted) HEALPix grid we use.
|
http://arxiv.org/abs/2307.05646v1 | 20230711124328 | Better Handling Coreference Resolution in Aspect Level Sentiment Classification by Fine-Tuning Language Models | [
"Dhruv Mullick",
"Bilal Ghanem",
"Alona Fyshe"
] | cs.CL | [
"cs.CL"
] |
Better Handling Coreference Resolution in Aspect Level Sentiment Classification by Fine-Tuning Language Models
Dhruv Mullick, Bilal Ghanem, Alona Fyshe
================================================================================
Customer feedback is invaluable to companies as they refine their products. Monitoring customer feedback can be automated with Aspect Level Sentiment Classification (ALSC) which allows us to analyse specific aspects of the products in reviews. Large Language Models (LLMs) are the heart of many state-of-the-art ALSC solutions, but they perform poorly in some scenarios requiring Coreference Resolution (CR). In this work, we propose a framework to improve an LLM's performance on CR-containing reviews by fine tuning on highly inferential tasks. We show that the performance improvement is likely attributed to the improved model CR ability. We also release a new dataset that focuses on CR in ALSC.
§ INTRODUCTION
To understand an end user's perspective on a product, it is common to consider reviews on online platforms. A company can look for the customers' perspective on a certain aspect of the product. For instance, a laptop company might look for reviews concerning "battery." Aspect Level Sentiment Classification (ALSC) analyzes reviews for sentiments of specific aspects, like the "battery" aspect in earlier example <cit.>. ALSC is a sub-task of a wider body of work called Aspect Based Sentiment Analysis (ABSA) <cit.>, which aims to extract aspects and their associated sentiments. State-of-the-art ALSC solutions often use Large Language Models (LLMs) <cit.>.
Reviews often use pronouns, which can make coreference resolution (CR) in LLMs necessary to infer the sentiment associated with the aspect. Hence, LLMs used for ALSC need strong CR ability, and can fail otherwise. For instance, the sentence - "He ate food at the restaurant, it was deserted." requires the LLM to understand that the definite pronoun "it" refers to the "restaurant" (antecedent), because of the context ("deserted"). Table <ref> shows four examples where the state-of-the-art T5 ALSC model <cit.> fails due to its poor CR ability. We find that ~15% of this T5 model's errors are on cases requiring CR ability.
LLMs are also known to have performance and stability issues <cit.>. To remedy these, instead of directly training on the task of interest (target task), it can be beneficial to first train on an auxiliary task <cit.>. Certain auxiliary tasks can contribute to both improved performance and stability of the target task <cit.>. Using auxiliary training, our work shows a way to improve an LLM's performance on English ALSC reviews requiring CR.
In our work, we: a) show that an LLM trained for ALSC makes more errors when evaluated only on reviews requiring CR ability, compared to when handling typical ALSC reviews (8.7% mean F1); b) demonstrate that our framework for handling CR-containing reviews can improve ALSC model's CR ability (16% mean F1); c) show that this improved CR ability can improve ALSC performance for reviews requiring CR ability (5% mean F1). d) release annotated variants of existing datasets which can be used to benchmark a model's ALSC performance on CR cases.
§ EXPERIMENTAL SETUP
§.§ Data
Original ALSC Datasets We consider English ALSC datasets: SemEval Restaurant (Rest16) <cit.> and MAMS <cit.>, both of which contain reviews from a similar restaurant domain. Inspired by <cit.>, ALSC reviews are processed into an input format suitable for our LLM - "[sentence]. aspect: [aspect]". The ground truth output is "positive", "negative" or "neutral". For example, "$20 for good sushi cannot be beaten. aspect: sushi" has the ground truth as "positive". We clean datasets as per Appendix <ref>.
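A minimal sketch of this preprocessing (the function name is ours) is:

    def format_alsc_example(sentence, aspect, sentiment=None):
        """Build the model input, and the target when training, for one review/aspect pair."""
        source = f"{sentence} aspect: {aspect}"
        return source, sentiment  # sentiment is "positive", "negative" or "neutral"

    src, tgt = format_alsc_example("$20 for good sushi cannot be beaten.", "sushi", "positive")
    # src == "$20 for good sushi cannot be beaten. aspect: sushi", tgt == "positive"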
CR Cases
We identify reviews in the Rest16 and MAMS datasets that contain definite pronouns, and henceforth call these sentences Pronoun cases.
Limiting ourselves to the ALSC task described above, we say that a review is a CR case if its sentiment requires proper coreference resolution for correct classification. Specifically, the aspect should be an antecedent of a definite pronoun which is associated with a sentiment polarity. For example, "He ate food at the restaurant, it was deserted." with aspect: "restaurant" is a CR case. Here, "restaurant" is the antecedent of "it" which is associated with "deserted" and has negative connotations. CR cases are manually selected from Pronoun cases.
ALSC-CR Dataset Our dataset is composed of the original ALSC datasets (Rest16 and MAMS). The testing, however, is done only using CR cases, and we use a combination of Pronoun and Non-Pronoun cases for validation and train sets. Table <ref> presents the dataset composition. Better performance on the test dataset will indicate a superior ability to handle CR cases in ALSC.
The train, validation and test sets are of similar, but not identical, distributions. Due to the limited number of CR cases, it is not possible to have train and validation sets composed entirely of CR cases. More details can be found in Appendix <ref>.
§.§ Auxiliary Tasks
We use highly inferential tasks for auxiliary training in our experiments as they generally provide higher improvements for various NLP target tasks <cit.>. We select two commonsense tasks - Commongen <cit.> and CosmosQA <cit.>, as commonsense reasoning helps with CR <cit.>. SQuAD <cit.> is selected because it is a non-commonsense question answering (QA) task. Its performance is contrasted with CosmosQA, checking if it is the QA or the commonsense ability which improves CR. Quora Question Prediction <cit.> (QQP) is selected as it benefits performance on the Stanford Sentiment Treebank (SST) task which is similar to ALSC <cit.>. Even if auxiliary tasks aren't designed for CR, they can impart CR ability to the model. For the QA example - “Context: Alice can't come. She is old”; “Question: Who is old?”, answer is “Alice”. Answering this requires CR and teaches the model CR ability.
Commongen is a generative commonsense task involving generation of a plausible sentence given a list of concepts (train size = 67,389). It tests: 1) relational reasoning which is the ability to construct grammatical sentences adhering to commonsense; 2) compositional generalisation which is reasoning with unseen concept combinations. For example: input - "concepts = [dog, frisbee, catch, throw]"; output - "A dog leaps to catch a thrown frisbee."
CosmosQA is a QA task where answering questions requires commonsense (train size = 25,262). For each question, there are four options, and the model should output the correct option number.
SQuAD is an extractive QA task where the correct answer to the question is present exactly in the passage (train size = 87,599).
QQP task involves checking if two Quora questions are semantically equivalent. We cap the train size at 50,000 to match the other datasets.
§ EXPERIMENTS AND RESULTS
We ran experiments for three purposes: a) to show there is drop in ALSC performance for reviews requiring CR ability; b) to show we can alleviate this performance drop by auxiliary fine-tuning; c) to provide additional evidence that change in performance on CR cases is due to improved CR ability.
Inspired by state-of-the-art performance in <cit.>, we used the T5 LLM <cit.>. Our baseline model is a T5 trained on ALSC-CR, but not fine-tuned on auxiliary tasks.
The T5 model was trained in various settings using training prompts / input prefixes (Appendix <ref>). Wording of prompts has limited impact on the outcome so we did not experiment with the wording <cit.>. Rather, we relied on prior work for task prompts <cit.>. For ALSC and Definite Pronoun Resolution (DPR) <cit.> (Sec. <ref>), we created prompts as we did not find examples in prior work (see Appendix <ref>).
All experiments were run with at least 10 random seeds, and Yuen-Welch test was used for testing statistical significance.
§.§ Model Performance on ALSC Without Auxiliary Fine Tuning
To check LLM performance on CR cases, we evaluated the T5 model on regular ALSC data (ALSC-Regular), which does not consist solely of CR cases. ALSC-Regular and ALSC-CR are equal sized and have an identical proportion of Rest16 and MAMS. We also evaluated the T5 model on ALSC-CR, to get the model's performance solely on CR cases.
By comparing T5 model's performance on the two ALSC datasets, we show that unspecialized LLMs face a significant performance problem while handling reviews requiring CR ability. Results are shown in Table <ref>, where evaluation on ALSC-CR shows a drop in performance of ~8.7% mean F1, as well as an increase of 0.6 F1 standard deviation indicating a poorer model convergence.
§.§ Fine Tuning With Auxiliary Tasks
As a solution to poor performance on ALSC-CR (Section <ref>), we experimented with various auxiliary tasks mentioned in Section <ref>.
We first trained the T5 model on the auxiliary task to incorporate auxiliary task knowledge. This model was then trained and evaluated on ALSC-CR, our target task. We experimented with different auxiliary dataset sizes, as dataset size has little correlation with target task performance <cit.>.
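The sketch below outlines this two-stage procedure with Hugging Face Transformers. The tiny in-memory example pairs and the auxiliary prompt wording are illustrative placeholders (the actual prompts are listed in the appendix); the learning rates follow the reported hyperparameter choices:

    import torch
    from transformers import T5ForConditionalGeneration, T5TokenizerFast

    tokenizer = T5TokenizerFast.from_pretrained("t5-large")
    model = T5ForConditionalGeneration.from_pretrained("t5-large")

    def fine_tune(model, pairs, lr, epochs=1):
        """Minimal seq2seq fine-tuning loop; pairs is a list of (input_text, target_text)."""
        optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for source, target in pairs:
                batch = tokenizer(source, return_tensors="pt")
                labels = tokenizer(target, return_tensors="pt").input_ids
                loss = model(**batch, labels=labels).loss
                loss.backward()
                optimizer.step()
                optimizer.zero_grad()

    # Stage 1: auxiliary task (here a Commongen-style pair), Stage 2: the ALSC-CR target task.
    aux_pairs = [("generate a sentence with: dog frisbee catch throw",
                  "A dog leaps to catch a thrown frisbee.")]
    alsc_pairs = [("$20 for good sushi cannot be beaten. aspect: sushi", "positive")]
    fine_tune(model, aux_pairs, lr=1e-4)
    fine_tune(model, alsc_pairs, lr=5e-4)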
The model's performance on ALSC-CR with different auxiliary tasks is compared to baseline model's ALSC-CR performance to see if auxiliary tasks were beneficial. Results are shown in Table <ref>. We find that the lower ALSC-CR performance (compared to ALSC-Regular) can be alleviated by auxiliary training with Commongen and QQP, which lead to statistically significant improvements of ~5% mean F1. Auxiliary training with CosmosQA and SQuAD does not lead to statistically significant improvement in any case.
Prior work <cit.> showed a general improvement in a model's target task performance when fine-tuned with highly inferential tasks. Apart from being highly inferential, Commongen is a generative commonsense task, which makes it ideal for imparting commonsense knowledge to a generative LLM like T5. On the other hand, CosmosQA, being a discriminative task, is unlikely to impart as much commonsense knowledge to a generative system <cit.>. Since it is highly inferential tasks that benefit target tasks, the extractive SQuAD QA task would not be expected to yield as significant an improvement. When used for auxiliary training, QQP shows a large improvement on the SST target task <cit.>, which involves similar sentiment analysis, explaining QQP's improved performance on ALSC-CR.
Though auxiliary training on DPR might seem promising, it is a much smaller dataset (train size = 1500) than other tasks. For completeness we did train using DPR but found that the mean F1 = 72.77 was not statistically significantly different from the baseline.
Similar to <cit.>, we find no correlation between auxiliary task size and target performance. One reason may be that small datasets do not teach the task sufficiently <cit.>, while large auxiliary datasets can cause catastrophic forgetting of the LLM's original objective <cit.>, which is generally beneficial for target tasks. Despite this lack of correlation, we have demonstrated a framework for improving a target task's performance on CR cases.
We show a pronoun error analysis in Appendix <ref> to better understand the ALSC-CR improvements.
§.§ Evaluating Coreference Ability
Performing well on ALSC-CR requires strong CR ability, as CR associates the aspect with its sentiment. To verify that the improvement in Section <ref> is attributable to the ALSC model's improved CR ability, we estimate the CR ability by evaluating on DPR. Since we have an ALSC model for each random seed used for training (Section <ref>), we run DPR evaluation on the ALSC random seed model with the highest ALSC-CR val set performance.
The DPR task involves predicting the antecedent of the given pronoun. This is precisely the ability required for good performance on ALSC-CR (which contains only definite pronoun cases), making DPR ideal to measure the CR ability of our models. Other CR datasets like OntoNotes <cit.> are not as suitable as DPR because DPR only focuses on definite pronouns, which is the ability we are interested in. Similarly, DPR is also the only CR dataset suitable for auxiliary training, but the size makes this infeasible as discussed in Sec. <ref>.
We use a DPR variant for generative models where input is of the form: "Humans were afraid of robots as *they* were strong.", and the objective is to predict what the highlighted pronoun (*they*) is referring to <cit.>.
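A small sketch of how such DPR inputs can be constructed (the helper name is ours, and the expected target in the example below is our reading of the sentence):

    def format_dpr_example(sentence, pronoun, antecedent=None):
        """Mark the first occurrence of the definite pronoun; the model generates its antecedent."""
        marked = sentence.replace(f" {pronoun} ", f" *{pronoun}* ", 1)
        return marked, antecedent

    src, tgt = format_dpr_example("Humans were afraid of robots as they were strong.",
                                  "they", "robots")
    # src == "Humans were afraid of robots as *they* were strong.", tgt == "robots"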
Evaluating ALSC models on DPR (Table <ref>) confirms that the ALSC-CR performance gains may be attributable to the improved CR ability of the model due to auxiliary fine-tuning. Experiments show that Commongen and QQP fine-tuned models show a drastically improved (and statistically significant) CR ability of up to ~16%. This explains their improved ALSC-CR performance. Using CosmosQA, we see a statistically significant ~5% deterioration in CR ability which does not lead to statistically significant changes in ALSC-CR performance.
§ RELATED WORK
The importance of CR has been noted in prior ABSA work. <cit.> use aspect sentiments for performing CR, demonstrating a correlation between CR and sentiment classification. <cit.> use CR to detect aspects from related reviews for the reviews lacking explicit aspects. In contrast, we consider an LLM's intra-sentence CR ability and restrict ourselves to reviews with explicit aspects, since having an aspect is critical to ALSC. <cit.> use CR in aspect extraction, but only for identifying duplicate references among proposed aspects. <cit.> use CR to solve their dependency parser component's inability to correctly associate opinion words with pronouns. In our work, we consider the CR problem in end-to-end state-of-the-art ALSC LLM models. <cit.> improve the BERT LLM's CR ability for opinion mining, using a method relying on external knowledge bases.
§ CONCLUSION
Since real world reviews vary widely, we need ALSC models which can handle various kinds of reviews, including those requiring CR. Although LLMs generally perform well on ALSC, our experiments provide evidence that LLMs can have poor performance on ALSC reviews requiring CR ability. We show that this problem can be alleviated by fine-tuning with certain auxiliary tasks before fine-tuning on the target tasks. Our framework for evaluating and improving an LLM's performance on CR cases can be applied for other tasks as well. Such a framework is critical for developing any model deployed in the real world. In the future, we will explore if auxiliary training can reduce the target task training that is needed for CR cases.
§ LIMITATIONS
* Even though we have successfully demonstrated a framework to handle CR-containing reviews by using auxiliary fine-tuning, we have not found which auxiliary tasks to definitively use for target tasks other than ALSC. The auxiliary task must be found using the framework proposed in our work.
* Our test set is composed of ~350 manually identified examples that are guaranteed to require CR ability. However, it is common for ALSC datasets to be small: the benchmarking datasets Twitter, Lap14, Rest16 and Rest15 all have ~500-600 aspects for analysis <cit.>, which is close to the size of our dataset. To reduce the variability due to a relatively small test set, we use multiple random seeds for robustness <cit.>.
Due to the specific problem we are targeting, it is difficult to create more examples than this using existing sources. During qualitative analysis, we had considered many ALSC datasets (SemEval datasets, Twitter, MAMS) but found that the CR problem was most pronounced in the restaurant domain (Rest16, MAMS). Example: laptop reviews rarely use explicit aspects <cit.>, leading to few CR cases in Lap14 dataset.
* Ours is the first work to demonstrate this CR problem in language models, thus there are few benchmarks against which we can compare our solution.
* We use the T5-large LLM for our experiments which requires a significant amount of computational resources for training. This leads to a high cost both financially and environmentally <cit.>.
§ HYPERPARAMETERS
Learning rates for both auxiliary fine tuning and ALSC training steps are picked from {5e-4, 1e-4, 5e-5} and {1e-3, 5e-4, 1e-4} respectively, after running for three random seeds and selecting the rates giving max F1 score for their respective validation dataset. For auxiliary fine-tuning, the learning rates for all auxiliary tasks were found to be 1e-4, except for SQuAD with Aux Fraction as 1.0 for which we found learning rate as 5e-5. For ALSC target task training, the learning rate was found to be 5e-4 in all cases except when using Commongen task for fine tuning with Aux Fraction as 0.1 for which we found learning rate as 1e-4.
Batch size for training is taken as 16 to maximise GPU utilisation. We train for 30 epochs to allow for convergence, while using an early stopping mechanism.
§ MODEL DETAILS
For our LLM, we use the T5-large implementation on Huggingface.[ <https://huggingface.co/t5-large>]
§ DATASET CLEANUP
Following existing work <cit.> we disregard reviews with no aspects, and also the aspects labeled as having "conflict" sentiment polarity to prevent a class imbalance problem due to low count of "conflict" class.
§ DATASET DETAILS
Here we present some more details of the ALSC-CR dataset. The aspect polarity distribution is presented in Table <ref>. Note that it is possible to have multiple pronouns in each of the CR cases.
The sentiment distribution of ALSC-CR test set is shown in Table <ref>.
For constructing ALSC-CR, we use standard ALSC datasets (MAMS and Rest16). MAMS's original train set along with data from Rest16 train set is used for training. For validation, we use the original validation sets from MAMS and Rest16, in addition to Pronoun cases from MAMS test and Rest16. The composition of the validation dataset is such that we use minimal Pronoun cases for validation while having sufficient CR cases for testing. Details of the composition of ALSC-CR are shown in Table <ref>.
§ ERROR ANALYSIS BY PRONOUN
We analyse the errors and improvements seen for individual pronouns (in reviews) when ALSC-CR is evaluated with different ALSC models. Since a few pronouns have very low counts as per Table <ref>, we only analyse the ones which have count greater than 15.
For all pronouns analysed, we find improvements in prediction accuracy for the models fine-tuned with auxiliary tasks, compared to the baseline model which has not auxiliary fine-tuning. Results are shown in Table <ref>.
§ TRAINING PROMPTS
We present the training prompts used in Table <ref>.
§ VISUALISING AUXILIARY TRAINING RESULTS
In Figure <ref>, we visually show the performance of auxiliary trained models on ALSC-CR (same results as Table <ref>). We can see that there is little correlation between the auxiliary dataset fraction and the mean F1 performance, making it necessary to explore various fraction settings.
§ TRAINING DETAILS
For fine tuning the T5-large model, we use 1 NVIDIA V100 GPU, 6 CPU cores with 4 GB memory per core. We run training jobs with a 71 hour time limit.
|
http://arxiv.org/abs/2307.04107v1 | 20230709062020 | Efficient Approximation Algorithms for Scheduling Coflows with Precedence Constraints in Identical Parallel Networks to Minimize Weighted Completion Time | [
"Chi-Yeh Chen"
] | cs.DS | [
"cs.DS"
] |
Efficient Approximation Algorithms for Scheduling Coflows with Precedence Constraints in Identical Parallel Networks to Minimize Weighted Completion Time
Chi-Yeh Chen
=============================================================================================================
This paper focuses on the problem of coflow scheduling with precedence constraints in identical parallel networks, which is a well-known 𝒩𝒫-hard problem. Coflow is a relatively new network abstraction used to characterize communication patterns in data centers. Both flow-level scheduling and coflow-level scheduling problems are examined, with the key distinction being the scheduling granularity. The proposed algorithm effectively determines the scheduling order of coflows by employing the primal-dual method. When considering workload sizes and weights that are dependent on the network topology in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG). Additionally, when taking into account workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight. For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent. Moreover, when considering workload sizes that are topology-dependent, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Scheduling algorithms, approximation algorithms, coflow, precedence constraints, datacenter network, identical parallel network.
§ INTRODUCTION
With the evolution of technology, a large volume of computational demands has become the norm. As personal computing resources are no longer sufficient, cloud computing has emerged as a solution for accessing significant computational resources. With the increasing demand, large-scale data centers have become essential components of cloud computing. In these data centers, the benefits of application-aware network scheduling have been proven, particularly for distributed applications with structured traffic patterns <cit.>. The widespread use of data-parallel computing applications such as MapReduce <cit.>, Hadoop <cit.>, Dryad <cit.>, and Spark <cit.> has led to a proliferation of related applications <cit.>.
In these data-parallel applications, tasks can be divided into multiple computational stages and communication stages, which are executed alternately. The computational stages generate a substantial amount of intermediate data (flows) that needs to be transmitted across various machines for further processing during the communication stages. Due to the large number of applications generating significant data transmission requirements, robust data transmission and scheduling capabilities are crucial for data centers. The overall communication pattern within the data center can be abstracted by coflow traffic, representing the interaction of flows between two sets of machines <cit.>.
A coflow refers to a set of interconnected flows, where the completion time of the entire group depends on the completion time of the last flow within the set <cit.>. Previous studies related to coflows <cit.> have primarily focused on the single-core model <cit.>. However, technological advancements have led to the emergence of data centers that operate on multiple parallel networks in order to improve efficiency <cit.>. One such architecture is the identical or heterogeneous parallel network, where multiple network cores function in parallel, providing combined bandwidth by simultaneously serving traffic.
This study addresses the problem of coflow scheduling with precedence constraints in identical parallel networks. The objective is to schedule these coflows in the parallel networks in a way that minimizes the weighted total completion time of coflows. We consider both flow-level scheduling and coflow-level scheduling. In the flow-level scheduling problem, flows within a coflow can be distributed across different network cores. Conversely, in the coflow-level scheduling problem, all flows within a coflow are required to be transmitted in the same network core. The key difference between these two problems lies in their scheduling granularity. The coflow-level scheduling problem, being a coarse-grained scheduling, can be quickly solved but yields relatively poorer results. On the other hand, the flow-level scheduling problem, being a fine-grained scheduling, takes more time to solve but produces superior scheduling results. It is worth noting that, although these two problems exhibit differences in time complexity when solved using linear programming, in the case of the flow-level scheduling problem using the primal-dual method, the decision of scheduling flows is transformed into the decision of scheduling coflows. This transformation leads to the solving time being equivalent to that of the coflow-level scheduling problem.
§.§ Related Work
The concept of coflow abstraction was initially introduced by Chowdhury and Stoica <cit.> to characterize communication patterns within data centers. The scheduling problem for coflows has been proven to be strongly 𝒩𝒫-hard, indicating the need for efficient approximation algorithms rather than exact solutions. Due to the easy reduction of the concurrent open shop problem to coflow scheduling, where only the diagonal elements of the demand matrix have values, solving the concurrent open shop problem within a factor better than 2-ϵ is 𝒩𝒫-hard <cit.>, implying the hardness of the coflow scheduling problem as well.
Since the proposal of the coflow abstraction, extensive research has been conducted on coflow scheduling <cit.>. Qiu et al. <cit.> presented the first deterministic polynomial-time approximation algorithm, with a ratio of 67/3. Subsequently, Ahmadi et al. <cit.> proved that the technique proposed by Qiu et al. <cit.> actually yields only a deterministic 76/3-approximation algorithm for coflow scheduling with release times.
Khuller et al. <cit.> also proposed an approximation algorithm for coflow scheduling with arbitrary release times, achieving a ratio of 12.
Recent research by Shafiee and Ghaderi <cit.> has resulted in an impressive approximation algorithm for the coflow scheduling problem, achieving an approximation ratio of 5. Additionally, Ahmadi et al. <cit.> have made significant contributions to this field by proposing a primal-dual algorithm that enhances the computational efficiency of coflow scheduling.
In the coflow scheduling problem within a heterogeneous parallel network, Huang et al. <cit.> introduced an O(m)-approximation algorithm, where m represents the number of network cores. On the other hand, Tian et al. <cit.> were the first to propose the problem of scheduling coflows of multi-stage jobs, and they provided a O(N)-approximation algorithm, where N represents the number of servers in the network. Furthermore, Shafiee and Ghaderi <cit.> proposed a polynomial-time algorithm that achieves an approximation ratio of O(χ̃log(N)/log(log(N))), where χ̃ denotes the maximum number of coflows in a job.
§.§ Our Contributions
This paper focuses on addressing the problem of coflow scheduling with precedence constraints in identical parallel networks and presents a range of algorithms and corresponding results. The specific contributions of this study are outlined below:
* When considering workload sizes and weights that are dependent on the network topology in the input instances, the proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ) where χ is the coflow number of the longest path in the directed acyclic graph (DAG).
* When taking into account workload sizes that are topology-dependent, the proposed algorithm for flow-level scheduling problem achieves an approximation ratio of O(Rχ), where R represents the ratio of maximum weight to minimum weight.
* For the coflow-level scheduling problem, the proposed algorithm achieves an approximation ratio of O(mχ), where m is the number of network cores, when considering workload sizes and weights that are topology-dependent.
* When considering workload sizes that are topology-dependent, the algorithm for the coflow-level scheduling problem achieves an approximation ratio of O(Rmχ).
* In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ).
A summary of our theoretical findings is provided in Table <ref> where TDWS stands for topology-dependent workload sizes, while TDW stands for topology-dependent weights.
§.§ Organization
The structure of this paper is outlined as follows. In Section <ref>, an introduction is provided, covering fundamental notations and preliminary concepts that will be referenced in subsequent sections. Following that, the primary algorithms are presented in the following sections: Section <ref> provides an overview of the algorithm addressing the flow-level scheduling problem, while Section <ref> elaborates on the algorithm designed for the coflow-level scheduling problem. To address the scheduling problem for the coflows of multi-stage jobs, our algorithm is discussed in Section <ref>. In Section <ref>, a comparative analysis is conducted to evaluate the performance of our proposed algorithms in comparison to the previous algorithm. Lastly, in Section <ref>, our findings are summarized and meaningful conclusions are drawn.
§ NOTATION AND PRELIMINARIES
The identical parallel network consists of a collection of m non-blocking switches, each with dimensions of N × N. These switches form the infrastructure of the network, where N input links are connected to N source servers, and N output links are connected to N destination servers. These switches serve as practical and intuitive models for the network core. Network architectures such as Fat-tree or Clos <cit.> can be employed to construct networks that provide complete bisection bandwidth. In this configuration, each switch's i-th input port is connected to the i-th source server, and the j-th output port is connected to the j-th destination server. Consequently, each source server (or destination server) has m simultaneous uplinks (or downlinks), where each link may consist of multiple physical connections in the actual network topology <cit.>. Let ℐ denote the set of source servers, and 𝒥 denote the set of destination servers. The network core can be visualized as a bipartite graph, with ℐ on one side and 𝒥 on the other. For simplicity, we assume that all network cores are identical, and the links within each core have the same capacity or speed.
A coflow is a collection of independent flows, and the completion time of a coflow is determined by the completion time of the last flow in the set, making it a critical metric for evaluating the efficiency of data transfers. The demand matrix D^(k)=(d_i,j,k)_i,j=1^N represents the specific data transfer requirements within coflow k. Each entry d_i,j,k in the matrix corresponds to the size of the flow that needs to be transmitted from input i to output j within the coflow. In the context of identical network cores, the flow size can be interpreted as the transmission time, as all cores possess the same capacity or speed. This simplification allows for easier analysis and optimization of coflow scheduling algorithms. To facilitate efficient management and routing of flows, each flow is identified by a triple (i, j, k), where i represents the source node, j represents the destination node, and k corresponds to the coflow. This identification scheme enables precise tracking and control of individual flows within the parallel network.
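For concreteness, a coflow can be represented as a sparse demand matrix, i.e., a mapping from (source, destination) pairs to flow sizes. The following minimal Python sketch (the representation and all names are illustrative, not taken from the paper) shows this encoding together with the per-port loads that later sections denote L_i,k and L_j,k.

from collections import defaultdict

# A coflow k stored as a sparse demand matrix: (i, j) -> d_{i,j,k}.
coflow_k = {(0, 2): 5, (0, 3): 7, (1, 2): 4}

def port_loads(coflow):
    """Per-port loads of one coflow: the total demand it places on each source
    (input port) and each destination (output port)."""
    load_in, load_out = defaultdict(int), defaultdict(int)
    for (i, j), size in coflow.items():
        load_in[i] += size     # data the coflow sends out of source i
        load_out[j] += size    # data the coflow sends into destination j
    return load_in, load_out

L_i, L_j = port_loads(coflow_k)   # here L_i[0] == 12 and L_j[2] == 9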
Furthermore, we assume that flows are composed of discrete data units, resulting in integer sizes. For simplicity, we assume that all flows within a coflow are simultaneously initiated, as demonstrated in <cit.>.
This paper investigates the problem of coflow scheduling with release times and precedence constraints. The problem involves a set of coflows denoted by 𝒦, where coflow k is released into the system at time r_k. The completion time of coflow k, denoted as C_k, represents the time required for all its flows to finish processing. Each coflow k∈𝒦 is assigned a positive weight w_k. Let R be the ratio between the maximum weight and the minimum weight. The relationships between coflows can be modeled using a directed acyclic graph (DAG) G=(𝒦, E), where an arc (k', k)∈ E and k', k∈𝒦 indicate that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. This relationship is denoted as k'≺ k. The DAG has a coflow number of χ, which represents the length of the longest path in the DAG. The objective is to schedule coflows in an identical parallel network, considering the precedence constraints, in order to minimize the total weighted completion time of the coflows, denoted as ∑_k∈𝒦 w_kC_k. For clarity, different subscript symbols are used to represent different meanings of the same variables. Subscript i represents the index of the source (or input port), subscript j represents the index of the destination (or output port), and subscript k represents the index of the coflow. For instance, ℱ_i denotes the set of flows with source i, and ℱ_j represents the set of flows with destination j. The symbols and terminology used in this paper are summarized in Table <ref>.
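The coflow number χ of the DAG, i.e., the number of coflows on its longest precedence chain, can be computed by a standard dynamic program over predecessors. The sketch below is a minimal illustration; the instance and variable names are hypothetical.

from functools import lru_cache

# Precedence arcs (k', k): coflow k' must finish before coflow k may start.
coflows = {1, 2, 3, 4, 5}
arcs = [(1, 3), (2, 3), (3, 4)]            # illustrative DAG

preds = {k: [kp for (kp, kk) in arcs if kk == k] for k in coflows}

@lru_cache(maxsize=None)
def chain_len(k):
    """Number of coflows on the longest precedence chain ending at coflow k."""
    return 1 + max((chain_len(kp) for kp in preds[k]), default=0)

chi = max(chain_len(k) for k in coflows)   # chi == 3 here, via the chain 1 -> 3 -> 4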
§ APPROXIMATION ALGORITHM FOR THE FLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the flow-level scheduling problem, which allows for the transmission of different flows within a coflow through distinct network cores. We assume that coflows are transmitted at the flow level, ensuring that the data within a flow is allocated to the same core. We define ℱ_i as the collection of flows with source i, represented by ℱ_i={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ j∈𝒥}, and ℱ_j as the set of flows with destination j, given by ℱ_j={(i, j, k)| d_i,j,k>0, ∀ k∈𝒦, ∀ i∈ℐ}. For any subset S⊆ℱ_i (or S⊆ℱ_j), we define d(S)=∑_(i, j, k)∈ S d_i,j,k as the sum of data size over all flows in S and d^2(S)=∑_(i, j, k)∈ S d_i,j,k^2 as the sum of squares of data size over all flows in S. Additionally, we introduce the function f(S) as follows:
f(S) = (d(S)^2 + d^2(S)) / (2m).
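As a quick reference, the lower-bound function above can be transcribed directly; the sketch below assumes flows are identified by triples (i, j, k) and that a mapping from flow identifiers to demands is available (both assumptions are for illustration only).

def f_lower_bound(S, demand, m):
    """f(S) = (d(S)^2 + d^2(S)) / (2m) for a set S of flow identifiers sharing a port;
    demand maps a flow identifier (i, j, k) to its size d_{i,j,k}."""
    d = sum(demand[fl] for fl in S)           # d(S): total demand of the flows in S
    d2 = sum(demand[fl] ** 2 for fl in S)     # d^2(S): sum of squared demands
    return (d * d + d2) / (2 * m)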
The flow-level scheduling problem can be formulated as a linear programming relaxation, which is expressed as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ C_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ r_k+d_i,j,k, ∀ k∈𝒦, ∀ i∈ℐ, ∀ j∈𝒥
C_i,j,k≥ C_k'+d_i,j,k, ∀ k, k'∈𝒦:k'≺ k,
∀ i∈ℐ, ∀ j∈𝒥
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ i∈ℐ, ∀ S⊆ℱ_i
∑_(i, j, k)∈ Sd_i,j,kC_i,j,k≥ f(S), ∀ j∈𝒥, ∀ S⊆ℱ_j
In the linear program (<ref>), the variable C_k represents the completion time of coflow k in the schedule, and C_i,j,k denotes the completion time of flow (i, j, k). Constraint (<ref>) specifies that the completion time of coflow k is bounded by the completion times of all its flows, ensuring that no flow finishes after the coflow. Constraint (<ref>) guarantees that the completion time of any flow (i, j, k) is at least its release time r_k plus the time required for its transmission. To capture the precedence constraints among coflows, constraint (<ref>) indicates that all flows of coflow k' must be completed before any flow of coflow k can be scheduled. Constraints (<ref>) and (<ref>) introduce lower bounds on the completion time variables at the input and output ports, respectively.
We define L_i,S,k as the sum of the loads on input port i for coflow k in the set S. Similarly, L_j,S,k represents the sum of the loads on output port j for coflow k in the set S. To formulate the dual linear program, we have the following expressions:
L_i,S,k =∑_(i',j',k')∈ S|i'=i,k'=kd_i',j',k',
L_j,S,k =∑_(i',j',k')∈ S|j'=j,k'=kd_i',j',k'.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_k, ∀ k∈𝒦
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ,
∀ j∈𝒥
It is important to note that each flow (i, j, k) is associated with a dual variable α_i, j, k, and for every coflow k, there exists a corresponding constraint. Additionally, for any subset S ⊆ℱ_i (or S ⊆ℱ_j) of flows, there exists a dual variable β_i, S (or β_j, S). To facilitate the analysis and design of algorithms, we define γ_k', k as the sum of γ_k', i, j, k over all input ports i and output ports j in their respective sets ℐ and 𝒥:
γ_k', k=∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k.
It should be emphasized that the cost of any feasible dual solution provides a lower bound on OPT, the cost of an optimal solution: whenever a feasible dual solution of a certain cost is obtained, the optimal schedule cannot cost less than that value.
The primal-dual algorithm, as depicted in Appendix <ref>, Algorithm <ref>, is inspired by the work of Davis et al. <cit.> and Ahmadi et al. <cit.>. This algorithm constructs a feasible schedule iteratively, progressing from right to left, determining the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration decides whether to increase the dual variables α, β or γ. The guidance for these decisions is provided by the dual linear programming (LP) formulation. The algorithm has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports, and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is significantly large, increasing the α dual variable results in substantial gains in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) if L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable leads to substantial improvements in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if Lμ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
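As a rough illustration of the β-raising step only, the sketch below orders coflows from the last position to the first by repeatedly locating the most loaded port among the unscheduled coflows and charging β against their residual weights; release times (the α case) and precedence constraints (the γ case and the successor walk) are deliberately omitted, so this is a simplification of the rule described above, not the full Algorithm <ref>. The data layout (per-coflow load vectors load_in[k] and load_out[k] of length N) is an assumption for illustration.

def primal_dual_order(coflows, weights, load_in, load_out, N):
    """Return a coflow processing order (first to last) using only the beta step:
    no release times (alpha) and no precedence arcs (gamma)."""
    remaining = set(coflows)
    residual = dict(weights)                 # w_k minus the charges accumulated so far
    reversed_order = []                      # positions are filled from last to first
    while remaining:
        # bottleneck = the input or output port with the largest remaining load
        in_load = [sum(load_in[k][i] for k in remaining) for i in range(N)]
        out_load = [sum(load_out[k][j] for k in remaining) for j in range(N)]
        if max(in_load) >= max(out_load):
            port = in_load.index(max(in_load))
            load_of = lambda k: load_in[k][port]
        else:
            port = out_load.index(max(out_load))
            load_of = lambda k: load_out[k][port]
        # raising beta on this port, the coflow whose dual constraint becomes tight
        # first (smallest residual weight per unit of load) is placed last
        candidates = [k for k in remaining if load_of(k) > 0] or list(remaining)
        k_last = min(candidates, key=lambda k: residual[k] / max(load_of(k), 1e-12))
        beta = residual[k_last] / max(load_of(k_last), 1e-12)
        for k in remaining:
            residual[k] -= beta * load_of(k)   # charge beta * L_{port,k} to every coflow
        remaining.remove(k_last)
        reversed_order.append(k_last)
    return list(reversed(reversed_order))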
The flow-driven-list-scheduling algorithm, as depicted in Algorithm <ref>, leverages a list scheduling rule to determine the order of coflows to be scheduled. In order to provide a clear and consistent framework, we assume that the coflows have been pre-ordered based on the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. Thus, the coflows are scheduled sequentially in this predetermined order.
Within each coflow, the flows are scheduled based on a non-increasing order of their sizes, breaking ties arbitrarily. Specifically, for every flow (i, j, k), the algorithm identifies the least loaded network core, denoted as h^*, and assigns the flow (i, j, k) to this core.
The algorithm's steps involved in this assignment process are outlined in lines <ref>-<ref>.
A flow is deemed "ready" for scheduling only when all of its predecessors have been fully transmitted. The algorithm then proceeds to schedule all the flows that are both ready and have been released but remain incomplete. These scheduling steps, encapsulated in lines <ref>-<ref>, have been adapted from the work of Shafiee and Ghaderi <cit.>.
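A minimal sketch of the core-assignment part of this rule is given below. It walks coflows in the order produced by the primal-dual step, sorts each coflow's flows by non-increasing size, and places every flow on the currently least-loaded core; release times and the readiness test induced by the precedence constraints, which in the full algorithm only gate when an assigned flow may start transmitting, are not modeled here. The containers flows_of and size_of are illustrative assumptions.

def assign_flows_to_cores(ordered_coflows, flows_of, size_of, m):
    """Greedy flow-level core assignment: within each coflow, take flows by
    non-increasing size and put each on the least-loaded network core."""
    core_load = [0] * m                      # aggregate data already assigned to each core
    assignment = {}                          # flow identifier (i, j, k) -> core index h*
    for k in ordered_coflows:
        for flow in sorted(flows_of[k], key=size_of, reverse=True):
            h_star = min(range(m), key=lambda h: core_load[h])
            assignment[flow] = h_star
            core_load[h_star] += size_of(flow)
    return assignment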
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(χ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
Let S_k={1, 2, …, k} denote the set of the first k coflows. Furthermore, we define S_i,k as the set of flows from the first k coflows at input port i. Formally, S_i,k is defined as follows:
S_i,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ j∈𝒥}.
Similarly, S_j,k represents the set of flows from the first k coflows at output port j, defined as:
S_j,k={(i, j, k')| d_i,j,k'>0, ∀ k'∈{1,…,k}, ∀ i∈ℐ}.
Let β_i,k=β_i,S_i,k and β_j,k=β_j,S_j,k. These variables capture the dual variables associated with the sets S_i,k and S_j,k.
Moreover, we introduce the notation μ_1(k) to denote the input port with the highest load in S_k, and μ_2(k) to represent the output port with the highest load in S_k. Recall that d(S) represents the sum of loads for all flows in a subset S. Therefore, d(S_i,k) corresponds to the total load of flows from the first k coflows at input port i, and d(S_j,k) corresponds to the total load of flows from the first k coflows at output port j.
Finally, let L_i,k=∑_j∈𝒥 d_i,j,k denote the total load of flows from coflow k at input port i, and L_j,k=∑_i∈ℐ d_i,j,k denote the total load of flows from coflow k at output port j.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_μ_1(k),k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_1(k),k)/m.
* For every set S_μ_2(k),k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k, r_k>κ· d(S_μ_1(k),k)/m.
* For every coflow k that has a nonzero α_1, μ_2(k), k, r_k>κ· d(S_μ_2(k),k)/m.
* For every coflow k that has a nonzero α_μ_1(k), 1, k or a nonzero α_1, μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that d(S)^2≤ 2m· f(S).
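This observation is immediate from the definition of f: since d^2(S) is a sum of squares and therefore nonnegative, 2m· f(S) = d(S)^2 + d^2(S) ≥ d(S)^2.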
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ·(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m+(1-2/m)C_k^*, where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ 1/md(S_μ_1(k), k) + 1/md(S_μ_2(k),k)+(1-2/m) max_i, j d_i,j,k
Now, let v_1v_2⋯ v_f be the longest path of coflow k, where v_f=k. Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f1/md(S_μ_1(q), q) + 1/md(S_μ_2(q),q) +(1-2/m) max_i, j d_i,j,q
≤ ∑_q=1^f1/md(S_μ_1(k), k) + 1/md(S_μ_2(k),k) +(1-2/m) max_i, j d_i,j,q
= f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k) +∑_q=1^f(1-2/m) max_i, j d_i,j,q
≤ f/md(S_μ_1(k), k) + f/md(S_μ_2(k),k)+ (1-2/m) C_k^*.
When considering the release time, coflow k is transmitted starting at max_k'≤ kr_k' at the latest. This proof confirms the lemma.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to order the coflow k by setting γ_k',k.
For every coflow k, ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k+∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n w_k· A +(1-2/m) ∑_k=1^n w_kC_k^*
where A=a·max_k'≤ kr_k'+χ·(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m. We have ∑_k=1^n w_k C_k^*=OPT. Now we focus on the first term ∑_k=1^n w_k· A. By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_k· A = ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k(a·r_k+2χ·r_k/κ)
≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥 α_i, j, k·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_ℓ≤kr_ℓ+χ·(d(S_μ_1(k),k)+d(S_μ_2(k),k))/m)
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·d(S_μ_1(k'),k')/m + 2χ·d(S_μ_1(k),k)/m)
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kd(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kd(S_μ_1(k'),k')/m
= (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'd(S_i,k')d(S_μ_1(k'),k')/m
≤ (a·κ+2χ)∑_k'=1^n∑_i ∈ℐβ_i,k'(d(S_μ_1(k'),k'))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ)∑_i ∈ℐ∑_k=1^nβ_i,kf(S_μ_1(k),k)
= 2(a·κ+2χ)∑_k=1^nβ_μ_1(k),kf(S_μ_1(k),k)
≤ 2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+2-2/m for the flow-level scheduling problem with release times.
To schedule coflows with release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2(κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2(κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ+1)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+(4χ+1)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+(4χ+1)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+2-2/m)· OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ+1-2/m for the flow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2· 2χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2· 2χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ 4χ∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+4χ∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+4χ∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT
≤ (4χ+1-2/m)· OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
We demonstrate the case of L_μ_1(r)>L_μ_2(r), while the other case of L_μ_1(r)≤ L_μ_2(r) can be obtained using the same approach, yielding the same result. If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0.
Suppose coflow p is replaced by coflow k through the adjustment of γ_k',k.
Let
B=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k,
B_p=∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,p+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,p,
H=∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k',
H_p=∑_k'∈𝒦|(k',p)∈ Eγ_k',p-∑_k'∈𝒦|(p,k')∈ Eγ_p,k',
R'=w_k/w_p.
If coflow k undergoes the adjustment of the order by setting γ_k',k, then
H = w_k-B-L_i,k/L_i,p(w_p-B_p-H_p)
≤ w_k-B-w_p+B_p+H_p
≤ w_k-w_p+H_p
≤ w_k-w_p
= ((R'-1)/R')·w_k
≤ ((R-1)/R)·w_k
The inequalities (<ref>) and (<ref>) are due to L_i,p≤ L_i,k for all i ∈ℐ. The inequality (<ref>) is due to
H_p≤ 0. Based on Lemma <ref>, we know that ∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
H ≤ (R-1)(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This proof confirms the lemma.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
According to lemma <ref>, we have
∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐ∑_j ∈𝒥α_i, j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a similar proof to lemma <ref>, we can derive result
∑_kw_kC_k ≤ R(a+2χ/κ)∑_k=1^n∑_i ∈ℐ∑_j ∈𝒥α_i, j, kr_k
+2R(a·κ+2χ)∑_i ∈ℐ∑_S⊆ℱ_iβ_i,Sf(S)
+2R(a·κ+2χ)∑_j ∈𝒥∑_S⊆ℱ_jβ_j,Sf(S)
+(1-2/m)· OPT.
By employing analogous proof techniques to theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+R+1-2/m for the flow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ+1-2/m for the flow-level scheduling problem without release times.
§ APPROXIMATION ALGORITHM FOR THE COFLOW-LEVEL SCHEDULING PROBLEM
This section focuses on the coflow-level scheduling problem, in which all flows of a coflow must be transmitted through a single network core. Recall that L_i,k=∑_j=1^Nd_i,j,k and L_j,k=∑_i=1^Nd_i,j,k, where L_i,k denotes the total load at source i for coflow k, and L_j,k denotes the total load at destination j for coflow k.
Let
f_i(S) = (∑_k∈ S L_i,k^2+(∑_k∈ S L_i,k)^2) / (2m)
and
f_j(S) = (∑_k∈ S L_j,k^2+(∑_k∈ S L_j,k)^2) / (2m)
for any subset S⊆𝒦.
To address this problem, we propose a linear programming relaxation formulation as follows:
min ∑_k ∈𝒦 w_k C_k <ref>
s.t. C_k≥ r_k+L_i,k, ∀ k∈𝒦, ∀ i∈ℐ
C_k≥ r_k+L_j,k, ∀ k∈𝒦, ∀ j∈𝒥
C_k≥ C_k'+L_ik, ∀ k, k'∈𝒦, ∀ i∈ℐ:
k'≺ k
C_k≥ C_k'+L_jk, ∀ k, k'∈𝒦, ∀ j∈𝒥:
k'≺ k
∑_k∈ SL_i,kC_k≥ f_i(S) ∀ i∈ℐ, ∀ S⊆𝒦
∑_k∈ SL_j,kC_k≥ f_j(S) ∀ j∈𝒥, ∀ S⊆𝒦
In the linear program (<ref>), the completion time C_k is defined for each coflow k in the schedule. Constraints (<ref>) and (<ref>) ensure that the completion time of any coflow k is greater than or equal to its release time r_k plus its load. To account for the precedence constraints among coflows, constraints (<ref>) and (<ref>) indicate that all flows of coflow k' must be completed before coflow k can be scheduled. Additionally, constraints (<ref>) and (<ref>) establish lower bounds for the completion time variable at the input and output ports, respectively.
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k+L_i,k)
+∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k+L_j,k)
+∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐγ_k', i, k L_i,k
+ ∑_(k', k) ∈ E∑_j ∈𝒥γ_k', j, k L_j,k <ref>
s.t. ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k
+∑_i ∈ℐ∑_S⊆𝒦/k∈ Sβ_i,SL_i,k
+∑_j ∈𝒥∑_S⊆𝒦/k∈ Sβ_j,SL_j,k
+∑_(k',k)∈ E∑_i ∈ℐγ_k', i, k
+∑_(k',k)∈ E∑_j ∈𝒥γ_k', j, k
-∑_(k,k')∈ E∑_i ∈ℐγ_k, i, k'
-∑_(k,k')∈ E∑_j ∈𝒥γ_k, j, k'≤ w_k, ∀ k∈𝒦
α_i, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ
α_j, k≥ 0, ∀ k∈𝒦, ∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆𝒦
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆𝒦
γ_k', i, k≥ 0, ∀ (k', k)∈ E, ∀ i∈ℐ
γ_k', j, k≥ 0, ∀ (k', k)∈ E, ∀ j∈𝒥
Let γ_k', k=∑_i ∈ℐγ_k', i, k+∑_j ∈𝒥γ_k', j, k. Notice that for every coflow k, there exist two families of dual variables α_i, k and α_j, k, and there is a corresponding constraint. Additionally, for every subset of coflows S, there are two dual variables β_i, S and β_j, S. For the precedence constraints, there are two dual variables γ_k', k and γ_k, k'. Algorithm <ref> in Appendix <ref> presents the primal-dual algorithm, which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
The coflow-driven-list-scheduling, as outlined in Algorithm <ref>, operates as follows. To ensure clarity and generality, we assume that the coflows are arranged in an order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦. We schedule all the flows within each coflow iteratively, following the sequence provided by this list.
For each coflow k, we identify the network core h^* that can transmit coflow k in a manner that minimizes its completion time (lines <ref>-<ref>). Subsequently, we transmit all the flows allocated to network core h (lines <ref>-<ref>).
In summary, the coflow-driven-list-scheduling algorithm works by iteratively scheduling the flows within each coflow, following a predetermined order. It determines the optimal network core for transmitting each coflow to minimize their completion times, and then transmits the allocated flows for each core accordingly.
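A minimal sketch of the core-selection step is given below: for each coflow, in the given order, it estimates the coflow's finish time on every core from the loads already assigned to that core's ports and picks the minimizer. The bottleneck-port estimate and the data layout (per-coflow load vectors of length N) are simplifying assumptions; Algorithm <ref> works with the actual transmission schedule.

def assign_coflows_to_cores(ordered_coflows, load_in_k, load_out_k, m, N):
    """Coflow-level core assignment: all flows of a coflow go to the single core h*
    whose bottleneck port lets the coflow finish earliest."""
    port_in = [[0] * N for _ in range(m)]    # load already assigned to core h at input port i
    port_out = [[0] * N for _ in range(m)]   # load already assigned to core h at output port j
    core_of = {}
    for k in ordered_coflows:
        def finish_time(h):
            # only ports on which coflow k actually has demand constrain it
            t_in = max((port_in[h][i] + load_in_k[k][i]
                        for i in range(N) if load_in_k[k][i] > 0), default=0)
            t_out = max((port_out[h][j] + load_out_k[k][j]
                         for j in range(N) if load_out_k[k][j] > 0), default=0)
            return max(t_in, t_out)
        h_star = min(range(m), key=finish_time)
        core_of[k] = h_star
        for i in range(N):
            port_in[h_star][i] += load_in_k[k][i]
        for j in range(N):
            port_out[h_star][j] += load_out_k[k][j]
    return core_of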
§.§ Analysis
In this section, we present a comprehensive analysis of the proposed algorithm, establishing its approximation ratios. Specifically, we demonstrate that the algorithm achieves an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Additionally, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ) where R is the ratio of maximum weight to minimum weight. It is crucial to note that our analysis assumes that the coflows are arranged in the order determined by the permutation generated by Algorithm <ref>, where σ(k)=k for all k∈𝒦.
We would like to emphasize that S_k={1, 2, …, k} represents the set of the first k coflows. We define β_i,k=β_i,S_k and β_j,k=β_j,S_k for convenience. Moreover, we define L_i(S_k)=∑_k'≤ k L_i, k' and L_j(S_k)=∑_k'≤ k L_j, k' to simplify the notation. Furthermore, let μ_1(k) denote the input port with the highest load among the coflows in S_k, and μ_2(k) denote the output port with the highest load among the coflows in S_k. Hence, we have L_μ_1(k)(S_k)=∑_k'≤ k L_μ_1(k), k' and L_μ_2(k)(S_k)=∑_k'≤ k L_μ_2(k), k'.
Let us begin by presenting several key observations regarding the primal-dual algorithm.
The following statements hold.
* Every nonzero β_i,S can be written as β_μ_1(k),k for some coflow k.
* Every nonzero β_j,S can be written as β_μ_2(k),k for some coflow k.
* For every set S_k that has a nonzero β_μ_1(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_1(k)(S_k)/m.
* For every set S_k that has a nonzero β_μ_2(k),k variable, if k' ≤ k then r_k'≤κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k, r_k>κ· L_μ_1(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_2(k), k, r_k>κ· L_μ_2(k)(S_k)/m.
* For every coflow k that has a nonzero α_μ_1(k), k or a nonzero α_μ_2(k), k, if k'≤ k then r_k'≤ r_k.
The validity of each of the aforementioned observations can be readily verified and directly inferred from the steps outlined in Algorithm <ref>.
For any subset S, we have that (∑_k∈ S L_i,k)^2≤ 2m· f_i(S) and (∑_k∈ S L_j,k)^2≤ 2m· f_j(S).
Let C_k represent the completion time of coflow k when scheduled according to Algorithm <ref>. For any coflow k, we have C_k≤ a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)), where a=0 signifies the absence of release times, and a=1 indicates the presence of arbitrary release times.
First, let's consider the case where there is no release time and no precedence constraints. In this case, the completion time bound for each coflow can be expressed by the following inequality:
Ĉ_k ≤ L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
Now, let v_1v_2⋯ v_f be the longest path of coflow k, where v_f=k. Then, we can derive the following inequalities:
C_k ≤ ∑_q=1^fĈ_v_q
≤ ∑_q=1^f L_μ_1(q)(S_q)+L_μ_2(q)(S_q)
≤ ∑_q=1^f L_μ_1(k)(S_k)+L_μ_2(k)(S_k)
= f(L_μ_1(k)(S_k)+L_μ_2(k)(S_k))
When considering the release time, coflow k is transmitted starting at max_k'≤ kr_k' at the latest. This proof confirms the lemma.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then γ_k', k=0 holds for all k, k'∈𝒦.
Given that w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ, and j ∈𝒥, the β value of coflow k is smaller than that of coflow k'. As a result, there is no need to order the coflow k by setting γ_k',k.
For every coflow k, ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
A coflow k is included in the permutation of Algorithm <ref> only if the constraint
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k +∑_i ∈ℐ∑_S⊆𝒦/k∈ Sβ_i,SL_i,k +∑_j ∈𝒥∑_S⊆𝒦/k∈ Sβ_j,SL_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ w_k becomes tight for this particular coflow, resulting in ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'= w_k.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By applying Lemma <ref>, we have
∑_k=1^n w_kC_k
≤∑_k=1^n w_k·(a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
Let A=a·max_k'≤ kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)). By applying Lemmas <ref> and <ref>, we have
∑_k=1^n w_kC_k ≤ ∑_k=1^n(∑_i ∈ℐα_i, k+∑_k=1^n∑_j ∈𝒥α_j, k)· A
+∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A
+∑_k=1^n∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k· A
Let's begin by bounding ∑_k=1^n∑_i ∈ℐα_i, k· A+∑_k=1^n∑_j ∈𝒥α_j, k· A.
By applying Observation <ref> parts (<ref>), (<ref>) and (<ref>), we have
∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)·A
≤ ∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)(a·r_k+2χ·m·r_k/κ)
≤ (a+2χ·m/κ)∑_k=1^n(∑_i ∈ℐ α_i, k+∑_k=1^n∑_j ∈𝒥 α_j, k)·r_k
Now we bound ∑_k=1^n∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k· A. By applying Observation <ref> part (<ref>), we have
∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k ·A
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·max_k'≤kr_k'+χ(L_μ_1(k)(S_k)+L_μ_2(k)(S_k)))
≤ ∑_k=1^n∑_i ∈ℐ∑_k'≥kβ_i,k'L_i,k(a·κ·L_μ_1(k)(S_k)/m + 2χ·L_μ_1(k)(S_k))
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐ∑_k≤k'β_i,k'L_i,kL_μ_1(k)(S_k)/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'∑_k≤k'L_i,kL_μ_1(k)(S_k)/m
= (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'L_i(S_k)L_μ_1(k)(S_k)/m
≤ (a·κ+2χ·m)∑_k'=1^n∑_i ∈ℐβ_i,k'(L_μ_1(k)(S_k))^2/m
By sequentially applying Observation <ref> and Observation <ref> part (<ref>), we can upper bound this expression by
2(a·κ+2χ·m)∑_i ∈ℐ∑_k=1^nβ_i,kf_i(S_μ_1(k),k)
= 2(a·κ+2χ·m)∑_k=1^nβ_μ_1(k),kf_i(S_μ_1(k),k)
≤ 2(a·κ+2χ·m)∑_i ∈ℐ∑_S⊆𝒦β_i,Sf_i(S)
By Observation <ref> and Observation <ref> parts (<ref>) and (<ref>), we also can obtain
∑_k=1^n∑_j ∈𝒥∑_k'≥kβ_j,k'L_j,k ·A
≤ 2(a·κ+2χ·m)∑_j ∈𝒥∑_S⊆𝒦β_j,Sf_j(S)
Therefore,
∑_kw_kC_k ≤ (a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m+1 for the coflow-level scheduling problem with release times.
To schedule coflows with release times, the application of Lemma <ref> (with a = 1) indicates the following:
∑_kw_kC_k ≤ (1+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(1+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m+1)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m+1)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m+1)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m+1)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ (4χ· m+1) · OPT.
If w_k'≥ w_k, L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4χ m for the coflow-level scheduling problem without release times.
To schedule coflows without release times, the application of Lemma <ref> (with a = 0) indicates the following:
∑_kw_kC_k ≤ (2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2(2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2(2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
In order to minimize the approximation ratio, we can substitute κ=1/2 and obtain the following result:
∑_kw_kC_k ≤ (4χ· m)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+(4χ· m)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+(4χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+(4χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
≤ 4χ· m · OPT.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the inequality ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
If coflow k does not undergo the adjustment of the order by setting γ_k',k, then ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ 0. If coflow k undergoes the adjustment of the order by setting γ_k',k, then we have ∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤R-1/Rw_k. Based on Lemma <ref>, we know that ∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_(k',k)∈ Eγ_k', k-∑_(k,k')∈ Eγ_k, k'= w_k.
Thus, we obtain:
∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ (R-1)(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k).
This proof confirms the lemma.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then the total cost of the schedule is bounded as follows.
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
According to lemma <ref>, we have
∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k+∑_k'∈𝒦|(k',k)∈ Eγ_k',k-∑_k'∈𝒦|(k,k')∈ Eγ_k,k'≤ R(∑_i ∈ℐα_i, k+∑_j ∈𝒥α_j, k+∑_i ∈ℐ∑_k'≥ kβ_i,k'L_i,k+∑_j ∈𝒥∑_k'≥ kβ_j,k'L_j,k) holds for all k∈𝒦.
Then, following a similar proof to lemma <ref>, we can derive result
∑_kw_kC_k ≤ R(a+2χ· m/κ)∑_k ∈𝒦∑_i ∈ℐα_i, k(r_k)
+R(a+2χ· m/κ)∑_k ∈𝒦∑_j ∈𝒥α_j, k(r_k)
+2R(a·κ+2χ· m)∑_i ∈ℐ∑_S ⊆𝒦β_i,S f_i(S)
+2R(a·κ+2χ· m)∑_j ∈𝒥∑_S ⊆𝒦β_j,S f_j(S)
By employing analogous proof techniques to theorems <ref> and <ref>, we can establish the validity of the following two theorems:
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m+R for the coflow-level scheduling problem with release times.
If L_i,k'≤ L_i,k and L_j,k'≤ L_j,k hold for all (k',k)∈ E, i ∈ℐ and j ∈𝒥, then there exists a deterministic, combinatorial, polynomial time algorithm that achieves an approximation ratio of 4Rχ m for the coflow-level scheduling problem without release times.
§ COFLOWS OF MULTI-STAGE JOBS SCHEDULING PROBLEM
In this section, we address the coflows of multi-stage job scheduling problem. We modify the linear program (<ref>) by introducing a set 𝒯 to represent the jobs and a set 𝒯_t to represent the coflows that belong to job t. We also incorporate an additional constraint (<ref>), which ensures that the completion time of a job is no earlier than the completion time of any of its coflows. Our objective is to minimize the total weighted completion time for a given set of multi-stage jobs. We assume that all coflows within the same job have the same release time. The resulting problem can be expressed as the following linear programming relaxation:
min ∑_t ∈𝒯 w_t C_t <ref>
s.t. (<ref>)-(<ref>)
C_t≥ C_k, ∀ t∈𝒯, ∀ k∈𝒯_t
The dual linear program is given by
max ∑_k ∈𝒦∑_i ∈ℐ∑_j ∈𝒥α_i, j, k(r_k+d_i,j,k)
+∑_i ∈ℐ∑_S ⊆ℱ_iβ_i,S f(S)
+∑_j ∈𝒥∑_S ⊆ℱ_jβ_j,S f(S)
+ ∑_(k', k) ∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k d_i,j,k <ref>
s.t. ∑_k∈𝒯_t∑_i ∈ℐ∑_j ∈𝒥α_i, j, k
+∑_k∈𝒯_t∑_i ∈ℐ∑_S⊆ℱ_iβ_i,SL_i,S,k
+∑_k∈𝒯_t∑_j ∈𝒥∑_S⊆ℱ_jβ_j,SL_j,S,k
+∑_k∈𝒯_t∑_(k',k)∈ E∑_i ∈ℐ,j ∈𝒥γ_k', i, j, k
-∑_k∈𝒯_t∑_(k,k')∈ E∑_i ∈ℐ,j ∈𝒥γ_k, i, j, k'≤ w_t, ∀ t∈𝒯
α_i, j, k≥ 0, ∀ k∈𝒦, ∀ i∈ℐ,
∀ j∈𝒥
β_i, S≥ 0, ∀ i∈ℐ, ∀ S⊆ℱ_i
β_j, S≥ 0, ∀ j∈𝒥, ∀ S⊆ℱ_j
γ_k', i, j, k≥ 0, ∀ (k', k)∈ E,
∀ i∈ℐ, ∀ j∈𝒥
Let α_i, j, t = ∑_k∈𝒯_tα_i, j, k, L_i,S,t=∑_k∈𝒯_t L_i,S,k and L_j,S,t=∑_k∈𝒯_t L_j,S,k for all t∈𝒯.
Algorithm <ref> in Appendix <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints. We transmit the jobs sequentially, and within each job, the coflows are transmitted in topological-sorting order. As the values of γ are all zero, similar to the proof of Theorem <ref>, we can obtain the following theorem. Unlike Theorem <ref>, this result is not limited to the workload sizes and weights that are topology-dependent in the input instances.
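Transmitting the coflows of a job in topological-sorting order can be realized with a standard Kahn-style traversal of the job's precedence arcs; the sketch below is illustrative and the container names are assumptions.

from collections import deque

def topological_order(job_coflows, arcs):
    """Emit the coflows of one job so that every predecessor appears before its
    successors; arcs (k', k) mean coflow k' precedes coflow k."""
    indeg = {k: 0 for k in job_coflows}
    succ = {k: [] for k in job_coflows}
    for kp, k in arcs:
        if kp in indeg and k in indeg:       # keep only arcs inside this job
            succ[kp].append(k)
            indeg[k] += 1
    ready = deque(k for k in job_coflows if indeg[k] == 0)
    order = []
    while ready:
        k = ready.popleft()
        order.append(k)
        for nxt in succ[k]:
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return order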
The proposed algorithm achieves an approximation ratio of O(χ) for minimizing the total weighted completion time of a given set of multi-stage jobs.
§ EXPERIMENTAL RESULTS
In order to evaluate the effectiveness of the proposed algorithm, this section conducts simulations comparing its performance to that of a previous algorithm. Both synthetic and real traffic traces are used for these simulations, without considering release time. The subsequent sections present and analyze the results obtained from these simulations.
§.§ Comparison Metrics
Since the cost of the feasible dual solution provides a lower bound on the optimal value of the coflow scheduling problem, we calculate the approximation ratio by dividing the total weighted completion time achieved by the algorithms by the cost of the feasible dual solution.
§.§ Randomly Generated Graphs
In this section, we examine a collection of randomly generated graphs that are created based on a predefined set of fundamental characteristics.
* DAG size, n: The number of coflows in the DAG.
* Out degree, deg: Out degree of a node.
* Parallelism factor, (p) <cit.>: The calculation of the levels in the DAG involves randomly generating a number from a uniform distribution. The mean value of this distribution is √(n)/p. The generated number is then rounded up to the nearest integer, determining the number of levels. Additionally, the width of each level is calculated by randomly generating a number from a uniform distribution. The mean value for this distribution is p ×√(n), and it is also rounded up to the nearest integer <cit.>. Graphs with a larger value of p tend to have a smaller χ, while those with a smaller value of p have a larger χ.
* Workload, (W_min, W_max, L_min, L_max) <cit.>:
Each coflow is accompanied by a description (W_min, W_max, L_min, L_max) that provides information about its characteristics. To determine the number of non-zero flows within a coflow, two values, w_1 and w_2, are randomly selected from the interval [W_min, W_max]. These values are then assigned to the input and output links of the coflow in a random manner. The size of each flow is randomly chosen from the interval [L_min, L_max]. The construction of all coflows by default follows a predefined distribution based on the coflow descriptions. This distribution consists of four configurations: (1, 4, 1, 10), (1, 4, 10, 1000), (4, N, 1, 10), and (4, N, 10, 1000), with proportions of 41%, 29%, 9%, and 21%, respectively. Here, N represents the number of ports in the core.
Let level_k denote the level of coflow k, and let Lv(k)={k'∈𝒦 | level_k < level_k'} represent the set of coflows that have a higher level than k. When constructing a DAG, only a subset of Lv(k) can be selected as successors for each coflow k. For coflow k, a set of successors is randomly chosen with a probability of deg/|Lv(k)|. To assign weights to each coflow, positive integers are randomly and uniformly selected from the interval [1, 100].
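One way to realize this construction is sketched below; the uniform draws use the interval [0, 2·mean] so that the stated means are met, and the handling of rounding and of coflows left over after the drawn level widths are filled is an assumption of the sketch.

import math, random

def random_coflow_dag(n, p, deg, seed=0):
    """Random precedence DAG: level count with mean sqrt(n)/p, level widths with
    mean p*sqrt(n), successors picked among higher-level coflows with probability
    deg/|Lv(k)|, and weights drawn uniformly from [1, 100]."""
    rng = random.Random(seed)
    n_levels = max(1, math.ceil(rng.uniform(0, 2 * math.sqrt(n) / p)))
    widths = [max(1, math.ceil(rng.uniform(0, 2 * p * math.sqrt(n))))
              for _ in range(n_levels)]
    level_of, k = {}, 0
    for lvl, w in enumerate(widths):
        for _ in range(w):
            if k < n:
                level_of[k] = lvl
                k += 1
    while k < n:                             # leftover coflows go to the last level
        level_of[k] = n_levels - 1
        k += 1
    arcs = []
    for u in range(n):
        higher = [v for v in range(n) if level_of[v] > level_of[u]]
        for v in higher:
            if rng.random() < deg / len(higher):
                arcs.append((u, v))
    weights = {u: rng.randint(1, 100) for u in range(n)}
    return arcs, weights, level_of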
§.§ Results
Figure <ref> illustrates the approximation ratio of the proposed algorithm compared to the previous algorithm for synthetic traces. The problem size ranges from 5 to 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. The proposed algorithms demonstrate significantly smaller approximation ratios than 4χ+2-2/m. Furthermore, FDLS outperforms Weaver by approximately 4.7% to 7.5% within this problem size range. Although the instances are not restricted to topology-dependent workload sizes and weights, the observed ratios remain below 4χ+2-2/m. This demonstrates the strong performance of the algorithm in general scenarios.
The effects of flow density were compared by categorizing the coflows into three instances: dense, sparse, and combined. The number of flows was randomly selected from the range [N, N^2] for dense coflows or [1, N] for sparse coflows. In the combined instance, each coflow has a 50% probability of being set to sparse and a 50% probability of being set to dense. Figure <ref> illustrates the approximation ratio of synthetic traces for 100 randomly chosen dense and combined instances, comparing the previous algorithm with the proposed algorithm. The problem size consisted of 25 coflows in five network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. In the dense case, Weaver achieved an approximation ratio of 2.80, while FDLS achieved an approximation ratio of 2.66, resulting in a 5.12% improvement of FDLS over Weaver. In the combined case, FDLS outperformed Weaver by 2.52%. Importantly, the proposed algorithm demonstrated a greater improvement in the dense case compared to the combined case.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying numbers of network cores, comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 to 25 network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. Remarkably, the proposed algorithm consistently achieves significantly smaller approximation ratios compared to the theoretical bound of 4χ+2-2/m. As the number of network cores increases, the approximation ratio also tends to increase. This observation can be attributed to the widening gap between the cost of the feasible dual solution and the cost of the optimal integer solution as the number of network cores grows. Consequently, this leads to a notable discrepancy between the experimental approximation ratio and the actual approximation ratio. Importantly, across different numbers of network cores, FDLS outperforms Weaver by approximately 1.79% to 5.30%.
Figure <ref> illustrates the approximation ratio of synthetic traces for varying parallelism factor (p), comparing the previous algorithm to the proposed algorithm when all coflows are released simultaneously at time 0. The problem size consists of 25 coflows distributed across 5 network cores, with input and output links set to N=10. For each instance, we set deg=3, p=1, and χ≥ 2. According to our settings, the coflow number of the longest path in the DAG (χ) exhibits an increasing trend as the parallelism factor p decreases. Correspondingly, the approximation ratio also shows an upward trend with a decrease in the parallelism factor p. This empirical finding aligns with the theoretical analysis, demonstrating a linear relationship between the approximation ratio and χ.
We present the simulation results of the real traffic trace obtained from Hive/MapReduce traces captured from Facebook's 3000-machine cluster, consisting of 150 racks. This real traffic trace has been widely used in previous research simulations <cit.>. The trace dataset comprises a total of 526 coflows. In Figure <ref>, we depict the approximation ratio of the real traces for different thresholds of the number of flows. That is, we apply a filter to the set of coflows based on the condition that the number of flows is equal to or greater than the threshold value. For each instance, we set deg=3, p=1, and χ≥ 2. Notably, the proposed FDLS algorithm outperforms the Weaver algorithm by approximately 4.84% to 3.11% across various thresholds. Furthermore, as the number of flows increases, the approximation ratio decreases. This observation is consistent with our previous findings, suggesting a decreasing trend in the approximation ratio as the number of coflows increases.
§ CONCLUDING REMARKS
This paper studies the problem of coflow scheduling with release times and precedence constraints in identical parallel networks. The algorithm we propose effectively determines the scheduling order of coflows using the primal-dual method. The primal-dual algorithm has a space complexity of O(Nn) and a time complexity of O(n^2). When considering workload sizes and weights that are topology-dependent in the input instances, our proposed algorithm for the flow-level scheduling problem achieves an approximation ratio of O(χ). Furthermore, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rχ). For the coflow-level scheduling problem, the proposed algorithm attains an approximation ratio of O(mχ) when considering workload sizes and weights that are topology-dependent in the input instances. Moreover, when considering workload sizes that are topology-dependent in the input instances, the algorithm achieves an approximation ratio of O(Rmχ). In the coflows of multi-stage job scheduling problem, the proposed algorithm achieves an approximation ratio of O(χ). Although our theoretical results are based on a limited set of input instances, experimental findings show that the results for general input instances outperform the theoretical results, thereby demonstrating the effectiveness and practicality of the proposed algorithm.
Agarwal2018
S. Agarwal, S. Rajakrishnan, A. Narayan, R. Agarwal, D. Shmoys, and A. Vahdat,
“Sincronia: Near-optimal network design for coflows,” in Proceedings
of the 2018 ACM Conference on SIGCOMM, ser. SIGCOMM '18. New York, NY, USA: Association for Computing
Machinery, 2018, p. 16–29.
ahmadi2020scheduling
S. Ahmadi, S. Khuller, M. Purohit, and S. Yang, “On scheduling coflows,”
Algorithmica, vol. 82, no. 12, pp. 3604–3629, 2020.
al2008scalable
M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center
network architecture,” ACM SIGCOMM computer communication review,
vol. 38, no. 4, pp. 63–74, 2008.
Bansal2010
N. Bansal and S. Khot, “Inapproximability of hypergraph vertex cover and
applications to scheduling problems,” in Automata, Languages and
Programming, S. Abramsky, C. Gavoille, C. Kirchner, F. Meyer auf der Heide,
and P. G. Spirakis, Eds.1em plus 0.5em minus 0.4emBerlin,
Heidelberg: Springer Berlin Heidelberg, 2010, pp. 250–261.
borthakur2007hadoop
D. Borthakur, “The hadoop distributed file system: Architecture and design,”
Hadoop Project Website, vol. 11, no. 2007, p. 21, 2007.
Chowdhury2012
M. Chowdhury and I. Stoica, “Coflow: A networking abstraction for cluster
applications,” in Proceedings of the 11th ACM Workshop on Hot Topics
in Networks, ser. HotNets-XI. New
York, NY, USA: Association for Computing Machinery, 2012, p. 31–36.
Chowdhury2015
——, “Efficient coflow scheduling without prior knowledge,” in
Proceedings of the 2015 ACM Conference on SIGCOMM, ser. SIGCOMM
'15. New York, NY, USA: Association
for Computing Machinery, 2015, p. 393–406.
chowdhury2011managing
M. Chowdhury, M. Zaharia, J. Ma, M. I. Jordan, and I. Stoica, “Managing data
transfers in computer clusters with orchestra,” ACM SIGCOMM computer
communication review, vol. 41, no. 4, pp. 98–109, 2011.
Chowdhury2014
M. Chowdhury, Y. Zhong, and I. Stoica, “Efficient coflow scheduling with
varys,” in Proceedings of the 2014 ACM Conference on SIGCOMM, ser.
SIGCOMM '14. New York, NY, USA:
Association for Computing Machinery, 2014, p. 443–454.
Daoud08
M. I. Daoud and N. Kharma, “A high performance algorithm for static task
scheduling in heterogeneous distributed computing systems,” Journal of
Parallel and Distributed Computing, vol. 68, no. 4, pp. 399 – 409, 2008.
DAVIS2013121
J. M. Davis, R. Gandhi, and V. H. Kothari, “Combinatorial algorithms for
minimizing the weighted sum of completion times on a single machine,”
Operations Research Letters, vol. 41, no. 2, pp. 121–125, 2013.
Dean2008
J. Dean and S. Ghemawat, “Mapreduce: Simplified data processing on large
clusters,” Communications of the ACM, vol. 51, no. 1, p. 107–113,
jan 2008.
dogar2014decentralized
F. R. Dogar, T. Karagiannis, H. Ballani, and A. Rowstron, “Decentralized
task-aware scheduling for data center networks,” ACM SIGCOMM Computer
Communication Review, vol. 44, no. 4, pp. 431–442, 2014.
greenberg2009vl2
A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A.
Maltz, P. Patel, and S. Sengupta, “Vl2: A scalable and flexible data center
network,” in Proceedings of the ACM SIGCOMM 2009 conference on Data
communication, 2009, pp. 51–62.
huang2016
X. S. Huang, X. S. Sun, and T. E. Ng, “Sunflow: Efficient optical circuit
scheduling for coflows,” in Proceedings of the 12th International on
Conference on emerging Networking EXperiments and Technologies, 2016, pp.
297–311.
Huang2020
X. S. Huang, Y. Xia, and T. S. E. Ng, “Weaver: Efficient coflow scheduling in
heterogeneous parallel networks,” in 2020 IEEE International Parallel
and Distributed Processing Symposium (IPDPS), 2020, pp. 1071–1081.
isard2007dryad
M. Isard, M. Budiu, Y. Yu, A. Birrell, and D. Fetterly, “Dryad: distributed
data-parallel programs from sequential building blocks,” in
Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on
Computer Systems 2007, 2007, pp. 59–72.
khuller2016brief
S. Khuller and M. Purohit, “Brief announcement: Improved approximation
algorithms for scheduling co-flows,” in Proceedings of the 28th ACM
Symposium on Parallelism in Algorithms and Architectures, 2016, pp.
239–240.
Qiu2015
Z. Qiu, C. Stein, and Y. Zhong, “Minimizing the total weighted completion time
of coflows in datacenter networks,” in Proceedings of the 27th ACM
Symposium on Parallelism in Algorithms and Architectures, ser. SPAA
'15. New York, NY, USA: Association for Computing Machinery, 2015, pp. 294–303.
Sachdeva2013
S. Sachdeva and R. Saket, “Optimal inapproximability for scheduling problems
via structural hardness for hypergraph vertex cover,” in 2013 IEEE
Conference on Computational Complexity, 2013, pp. 219–229.
shafiee2018improved
M. Shafiee and J. Ghaderi, “An improved bound for minimizing the total
weighted completion time of coflows in datacenters,” IEEE/ACM
Transactions on Networking, vol. 26, no. 4, pp. 1674–1687, 2018.
shafiee2021scheduling
——, “Scheduling coflows with dependency graph,” IEEE/ACM
Transactions on Networking, 2021.
Shvachko2010
K. Shvachko, H. Kuang, S. Radia, and R. Chansler, “The hadoop distributed file
system,” in 2010 IEEE 26th Symposium on Mass Storage Systems and
Technologies (MSST), 2010, pp. 1–10.
Singh2015
A. Singh, J. Ong, A. Agarwal, G. Anderson, A. Armistead, R. Bannon, S. Boving,
G. Desai, B. Felderman, P. Germano, A. Kanagala, J. Provost, J. Simmons,
E. Tanda, J. Wanderer, U. Hölzle, S. Stuart, and A. Vahdat, “Jupiter
rising: A decade of clos topologies and centralized control in google's
datacenter network,” in Proceedings of the 2015 ACM Conference on SIGCOMM, ser. SIGCOMM '15. New York, NY, USA: Association for Computing Machinery, 2015, pp. 183–197.
Tian18
B. Tian, C. Tian, H. Dai, and B. Wang, “Scheduling coflows of multi-stage jobs
to minimize the total weighted job completion time,” in IEEE INFOCOM
2018 - IEEE Conference on Computer Communications, 2018, pp. 864–872.
Topcuoglu02
H. Topcuoglu, S. Hariri, and M.-Y. Wu, “Performance-effective and
low-complexity task scheduling for heterogeneous computing,” IEEE
Transactions on Parallel and Distributed Systems, vol. 13, no. 3, pp.
260–274, Mar 2002.
zaharia2010spark
M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica, “Spark:
Cluster computing with working sets,” in 2nd USENIX Workshop on Hot
Topics in Cloud Computing (HotCloud 10), 2010.
Zhang2016
H. Zhang, L. Chen, B. Yi, K. Chen, M. Chowdhury, and Y. Geng, “Coda: Toward
automatically identifying and scheduling coflows in the dark,” in
Proceedings of the 2016 ACM Conference on SIGCOMM, ser. SIGCOMM
'16. New York, NY, USA: Association for Computing Machinery, 2016, pp. 160–173.
zhao2015rapier
Y. Zhao, K. Chen, W. Bai, M. Yu, C. Tian, Y. Geng, Y. Zhang, D. Li, and
S. Wang, “Rapier: Integrating routing and scheduling for coflow-aware data
center networks,” in 2015 IEEE Conference on Computer Communications
(INFOCOM). IEEE, 2015, pp. 424–432.
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
The primal-dual algorithm, presented in Algorithm <ref>, draws inspiration from the works of Davis et al. <cit.> and Ahmadi et al. <cit.>. It constructs a feasible schedule iteratively, progressing from right to left to determine the processing order of coflows. Starting from the last coflow and moving towards the first, each iteration makes a crucial decision about which of the dual variables α, β, or γ to increase; these decisions are guided by the dual linear programming (LP) formulation. The algorithm has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Consider a specific iteration in the algorithm. At the beginning of this iteration, let 𝒦 represent the set of coflows that have not been scheduled yet, and let k denote the coflow with the largest release time. In each iteration, a decision must be made regarding whether to increase dual variables α, β or γ.
If the release time r_k is significantly large, increasing the α dual variable results in substantial gains in the objective function value of the dual problem. On the other hand, if L_μ_1(r) (or L_μ_2(r) if L_μ_2(r)≥ L_μ_1(r)) is large, raising the β variable leads to substantial improvements in the objective value. Let κ be a constant that will be optimized later.
If r_k>κ· L_μ_1(r)/m (or r_k>κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the α dual variable is increased until the dual constraint for coflow k becomes tight. Consequently, coflow k is scheduled to be processed as early as possible and before any previously scheduled coflows.
In the case where r_k≤κ· L_μ_1(r)/m (or r_k≤κ· L_μ_2(r)/m if L_μ_2(r)≥ L_μ_1(r)), the dual variable β_μ_1(r),𝒢_i (or β_μ_2(r),𝒢_j if Lμ_2(r)≥ L_μ_1(r)) is increased until the dual constraint for coflow k' becomes tight.
In this step, we begin by identifying a candidate coflow, denoted as k', with the minimum value of β. We then examine whether this coflow still has unscheduled successors. If it does, we continue traversing down the chain of successors until we reach a coflow that has no unscheduled successors, which we will refer to as t_1.
Once we have identified coflow t_1, we set its β and γ values such that the dual constraint for coflow t_1 becomes tight. Moreover, we ensure that the β value of coflow t_1 matches that of the candidate coflow k'.
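A minimal Python sketch of this right-to-left selection loop is given below. The data structures and helper callbacks (release times, port-load oracles, successor lists) and the proxy used to pick the β-candidate are illustrative assumptions standing in for the paper's notation, not the actual implementation of Algorithm <ref>.

def order_coflows(coflows, release, load1, load2, successors, m, kappa):
    """Build a processing order from last to first (right to left)."""
    unscheduled = set(coflows)
    order = []                                           # filled in reverse order
    while unscheduled:
        k = max(unscheduled, key=lambda c: release[c])   # largest release time
        L = max(load1(unscheduled), load2(unscheduled))  # most loaded port
        if release[k] > kappa * L / m:
            chosen = k                                   # raise alpha: k goes last among remaining
        else:
            # raise beta: take a candidate k' and walk down its unscheduled
            # successors until reaching a coflow with no unscheduled successor (t_1)
            chosen = min(unscheduled, key=lambda c: load1({c}) + load2({c}))
            while any(s in unscheduled for s in successors.get(chosen, [])):
                chosen = next(s for s in successors[chosen] if s in unscheduled)
        order.append(chosen)
        unscheduled.remove(chosen)
    order.reverse()
    return order

# toy usage with a trivial load oracle
release = {1: 0.0, 2: 2.0, 3: 5.0}
load = lambda S: float(len(S))
print(order_coflows([1, 2, 3], release, load, load, {1: [2]}, m=2, kappa=1.0))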
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> presents the primal-dual algorithm which has a space complexity of O(Nn) and a time complexity of O(n^2), where N represents the number of input/output ports and n represents the number of coflows.
Permuting Coflows
§ THE PRIMAL-DUAL ALGORITHM OF SECTION <REF>
Algorithm <ref> determines the order of job scheduling. Since there are no precedence constraints among the jobs, there is no need to set γ to satisfy precedence constraints.
Permuting Jobs
|
http://arxiv.org/abs/2307.04063v1 | 20230708235625 | Symmetry energy and neutron star properties constrained by chiral effective field theory calculations | [
"Yeunhwan Lim",
"Achim Schwenk"
] | nucl-th | [
"nucl-th",
"astro-ph.HE",
"nucl-ex"
] |
[E-mail: ][email protected]
Department of Physics, Yonsei University, Seoul 03722, South Korea
[E-mail: ][email protected]
Technische Universität Darmstadt, Department of Physics, 64289 Darmstadt, Germany
ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt, Germany
Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
We investigate the nuclear symmetry energy and neutron star properties using a Bayesian
analysis based on constraints from different chiral effective field theory calculations using
new energy density functionals that allow for large variations at high densities. Constraints
at high densities are included from observations of GW170817 and NICER. In particular, we
show that both NICER analyses lead to very similar posterior results for the symmetry
energy and neutron star properties when folded into our equation of state framework.
Using the posteriors, we provide results for the symmetry energy and the slope
parameter, as well as for the proton fraction, the speed of sound, and the central density
in neutron stars. Moreover, we explore correlations of neutron star radii with the pressure
and the speed of sound in neutron stars. Our 95% credibility ranges for the symmetry energy
S_v, the slope parameter L, and the radius of a 1.4 neutron star R_1.4 are
S_v=(30.6-33.9) MeV, L=(43.7-70.0) MeV, and R_1.4=(11.6-13.2) km. Our analysis
for the proton fraction shows that larger and/or heavier neutron stars are more likely to
cool rapidly via the direct Urca process. Within our equation of state framework, a maximum
mass of neutron stars M_max > 2.1 indicates that the speed of sound
needs to exceed the conformal limit.
Symmetry energy and neutron star properties
constrained by chiral effective field theory calculations
Achim Schwenk
======================================================================================================
§ INTRODUCTION
Understanding dense matter is a central challenge in nuclear physics and astrophysics. In nature, dense matter exists in the core of neutron stars under extreme neutron-rich conditions. The properties of neutron-rich matter around nuclear densities are described by the nuclear symmetry energy and its density dependence. While there have been impressive constraints from nuclear theory, nuclear experiments, and astrophysics (see, e.g., Refs. <cit.>), more precise determinations of the symmetry energy and its slope parameter L at saturation density, n_0 = 0.16 fm^-3, are still an open problem.
From the theoretical side, the symmetry energy is best constrained by controlled calculations of the equation of state (EOS) of neutron matter based on chiral effective field theory (EFT) interactions <cit.>. This yields values for the symmetry energy S_v at saturation density and the L parameter in the range of S_v = (30-35) MeV and L = (35-70) MeV. However, describing the EOS at all densities reached in neutron stars requires extensions beyond the reach of chiral EFT calculations. To this end, different extensions, such as piecewise polytropes <cit.>, speed-of-sound based parametrizations <cit.>, nonparametric Gaussian processes <cit.>, or nuclear energy-density functionals (EDFs) have been used (see, e.g., Ref. <cit.>).
Recently, new EDFs for the nuclear EOS have been introduced by Huth et al. <cit.>, which have the advantage of providing high-density extrapolations that are consistent with causality and with a maximum of the speed of sound. These functionals allow for EOS calculations for the broad ranges of conditions reached in core-collapse supernovae and neutron star mergers. In this work, we use these new EDF EOSs to constrain the symmetry energy and neutron star properties based on a prior informed by chiral EFT calculations of neutron matter.
From the astrophysics side, the strongest constraint on the nuclear EOS comes from the observation of heavy two-solar-mass neutron stars <cit.>. Moreover, the heaviest well-measured neutron star, PSR J0740+6620, was recently also observed by NICER to provide constraints on its radius <cit.>. In addition, NICER observed the mass and radius of a typical-mass neutron star, PSR J0030+0451 <cit.>. The NICER analyses for both neutron stars by Riley et al. <cit.> and by Miller et al. <cit.> give different mass-radius posteriors, but agree within their uncertainties. The differences in the posteriors are reduced by including realistic assumptions for the EOS, and in this work we explicitly show that in our EDF EOS ensembles the results from both NICER analyses are very similar. In addition to the NICER constraints, we include in our Bayesian inference the tidal deformability information from GW170817 inferred by LIGO/Virgo <cit.>. Using the chiral EFT informed priors together with the astrophysical posteriors, we provide results for the symmetry energy and neutron star properties.
This paper is organized as follows. In Sec. <ref> we introduce our EOS framework using the new EDFs from Huth et al. <cit.>. These are fit to a range of chiral EFT calculations of neutron matter. Building on this EOS prior, we include constraints at high densities from observations of GW170817 and NICER using a Bayesian analysis. In Sec. <ref>, we investigate the posterior distributions for the symmetry energy and the slope parameter, as well as for the proton fraction, the speed of sound, and the central density in neutron stars. Moreover, we explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars. Finally, we summarize our results and conclude in Sec. <ref>.
§ EQUATION OF STATE FRAMEWORK
The EOS describes the energy density and pressure of matter for given baryon density, composition, and temperature. Since we focus on cold neutron stars, we consider zero temperature. For a given EOS, the mass and radius of neutron stars follow by solving the Tolman-Oppenheimer-Volkoff (TOV) equations <cit.>. Our starting point will be the EOS of homogeneous matter, which we constrain by empirical ranges of the properties of symmetric nuclear matter around saturation density and by neutron matter calculations. Based on each EOS, we then calculate consistently the structure of the neutron star crust.
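As an illustration of this step, a minimal TOV integrator in Python is sketched below (geometrized units G = c = 1, quantities in code units). The polytropic EOS and the step sizes are placeholder assumptions for demonstration only; in this work the EOS is instead provided by the EDF ensemble described below.

import numpy as np

# Minimal TOV integration sketch (G = c = 1).
# The polytropic EOS below is a placeholder assumption, not an EDF EOS of this work.

def eps_of_P(P, K=100.0, gamma=2.0):
    rho = (P / K) ** (1.0 / gamma)        # rest-mass density of the polytrope
    return rho + P / (gamma - 1.0)        # total energy density

def tov_rhs(r, P, m):
    eps = eps_of_P(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * eps
    return dPdr, dmdr

def solve_tov(P_c, dr=1e-3, r_max=30.0):
    r, P, m = dr, P_c, 0.0
    while P > 1e-12 and r < r_max:
        dPdr, dmdr = tov_rhs(r, P, m)     # simple Euler step; RK4 is preferable
        P, m, r = P + dPdr * dr, m + dmdr * dr, r + dr
    return r, m                           # stellar radius and gravitational mass

print(solve_tov(P_c=1.0e-3))

Replacing the placeholder EOS with a tabulated EDF EOS (and the Euler step with a higher-order integrator) would yield mass-radius relations of the kind discussed below.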
Since neutron stars are extremely neutron rich, with proton fractions ∼ 5%, the most important constraints for the EOS come from neutron matter calculations. In this work, we focus on neutron matter calculations based on chiral EFT interactions, which have the advantage that chiral EFT predicts consistent many-body interactions and enables systematic uncertainty estimates based on the EFT expansion <cit.>. Neutron matter has been calculated based on chiral two- and three-nucleon interactions using many-body perturbation theory (MBPT) <cit.>, quantum Monte Carlo (QMC) methods <cit.>, self-consistent Green's function (SCGF) methods <cit.>, and coupled-cluster (CC) theory <cit.>. These calculations are able to include all interactions up to next-to-next-to-next-to-leading order (N^3LO) <cit.> and include uncertainty estimates from the EFT truncation <cit.>.
§.§ Energy density functionals
To extend the EOS to high density we use nonrelativistic EDFs, which depend on the baryon number density n
and proton fraction x of uniform matter. The baryonic energy density ε(n,x) is expressed as
ε(n,x) = τ_n(n,x)/(2m_N) + τ_p(n,x)/(2m_N) + (1-2x)^2 f_n(n) + [1-(1-2x)^2] f_s(n) ,
where τ_n/2m_N and τ_p/2m_N are the neutron and proton kinetic energy densities, with nucleon mass m_N. It was shown that the dependence on isospin asymmetry is to a very good approximation quadratic <cit.>, with the dominant non-quadratic contributions stemming from the kinetic energy densities, so that Eq. (<ref>) provides a very good approximation for asymmetric nuclear matter. The functionals f_n(n) and f_s(n) can be chosen to satisfy the constraints from neutron matter calculations
and symmetric nuclear matter properties, respectively.
For the interaction density functionals, we take the form introduced recently by Huth et al. <cit.>
f_n(n) = ∑_{j=0}^{3} a_j n^{2+j/3} / (d_j + n^{(j+1)/3}) ,
f_s(n) = ∑_{j=0}^{3} b_j n^{2+j/3} / (d_j + n^{(j+1)/3}) ,
where a_j, b_j are fit parameters and d_j = d fm^{-1-j} with parameter d=3 <cit.>. This corresponds to an expansion of the interaction energy density in powers of the Fermi momentum k_F ∼ n^{1/3}, and the denominator ensures that the interaction part becomes proportional to n^{5/3} at higher densities. Note that without the denominator, the interaction part generally causes the speed of sound to exceed the speed of light beyond some baryon density. For a detailed discussion of these new functionals and the parameter choices, see Ref. <cit.>.
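For concreteness, the interaction functionals and the quadratic isospin interpolation above can be evaluated as in the short Python sketch below. The numerical coefficients are illustrative placeholders only, since the fitted a_j and b_j of this work are not listed in the text.

import numpy as np

# Interaction parts f_n(n), f_s(n) of the k_F-expansion EDF and the quadratic
# interpolation in (1 - 2x). Coefficient values are illustrative placeholders,
# not the fitted parameters of this work.

def f_interaction(n, coeffs, d=3.0):
    """sum_j c_j n^(2+j/3) / (d_j + n^((j+1)/3)), with d_j = d (in fm^(-1-j))."""
    n = np.asarray(n, dtype=float)
    total = np.zeros_like(n)
    for j, c in enumerate(coeffs):
        total += c * n ** (2.0 + j / 3.0) / (d + n ** ((j + 1.0) / 3.0))
    return total

def interaction_energy_density(n, x, a, b, d=3.0):
    fn = f_interaction(n, a, d)           # neutron-matter part
    fs = f_interaction(n, b, d)           # symmetric-matter part
    return (1.0 - 2.0 * x) ** 2 * fn + (1.0 - (1.0 - 2.0 * x) ** 2) * fs

a_demo = [-12.0, 4.0, 1.0, 0.5]           # placeholder a_j coefficients
b_demo = [-30.0, 10.0, 2.0, 1.0]          # placeholder b_j coefficients
n = np.linspace(0.02, 0.32, 4)            # baryon density in fm^-3
print(interaction_energy_density(n, x=0.05, a=a_demo, b=b_demo))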
§.§ Constraints from neutron matter calculations based on chiral effective field theory
For neutron matter constraints we use the MBPT calculations from Ref. <cit.> based on different chiral NN+3N Hamiltonians, including the Hebeler+ interactions <cit.>, the NNLOsim potentials <cit.>, as well as the N^3LO 450 MeV and 500 MeV uncertainty bands <cit.> (using the NN EMN interactions <cit.>). The different neutron matter results and their uncertainties are given by the individual lines shown in Fig. <ref>. We use the individual lines to fit the a_j of the EDF for neutron matter, f_n(n) in Eq. (<ref>), based on the k_ F expansion and d=3.
The b_j of the corresponding symmetric matter part, f_s(n), are determined from empirical properties. We fit to the binding energy E/A(n_0)=-16 MeV at saturation density n_0 = 0.16 fm^-3, the incompressibility K=235 MeV, with K = 9 n^2 ^2(E/A)/ n^2(n_0,x=1/2), and the skewness Q = -300 MeV, with Q = 27 n^3 ^3(E/A)/ n^3(n_0,x=1/2). These values are extracted from Skyrme EDFs and constraints for nuclear matter properties <cit.>, see also Ref. <cit.>. Since neutron star properties are not very sensitive to symmetric nuclear matter, we do not vary all nuclear matter properties, but only explore the most uncertain value of Q in the following, see Sec. <ref>.
The uncertainties in our EDF EOSs are reflected in the covariance matrix of x⃗=(a⃗, b⃗) defined as
C_jk = (1/∑_i w_i) ∑_i w_i (x_j^i - ⟨ x_j ⟩)(x_k^i - ⟨ x_k ⟩) ,
where x_j^i is the set of fit parameters (a_j, b_j) for the i-th individual EOS, ⟨ x_j ⟩ represents the average of x_j, and w_i is the weight for each EOS. Since we do not vary the symmetric nuclear matter properties, in this work C_jk is a 4 × 4 matrix for the a_j from the neutron matter EOSs only. In the initial set given by the 17 neutron matter EOSs, the weights are w_i=1, but when we implement Bayesian statistics and inference, w_i < 1. With the average ⟨ x_j ⟩ and the covariance matrix C_jk, a multivariate normal distribution can be used to generate an EOS ensemble based on our EDF EOSs. We note that the statistical uncertainties from this EOS ensemble have of course a prior sensitivity to the initial set of individual EOSs.
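A compact numpy sketch of this construction, i.e., the weighted mean and covariance of Eq. (<ref>) followed by multivariate-normal sampling, is shown below; the array of fit vectors is a random placeholder standing in for the 17 neutron-matter fits.

import numpy as np

# Weighted mean and covariance C_jk of the fit parameters a_j, followed by
# sampling an EOS ensemble from the multivariate normal distribution.
# 'fits' is a random placeholder for the a_j of the 17 neutron-matter EOS fits.

rng = np.random.default_rng(0)
fits = rng.normal(size=(17, 4))             # one row of a_j per individual EOS
w = np.ones(len(fits))                      # weights w_i (all equal to 1 for the prior)

mean = np.average(fits, axis=0, weights=w)
diff = fits - mean
cov = (w[:, None] * diff).T @ diff / w.sum()

ensemble = rng.multivariate_normal(mean, cov, size=100_000)
print(ensemble.shape)                       # (100000, 4) sampled a_j vectors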
The resulting EDF EOS ensemble based on the multivariate normal distribution is shown in Fig. <ref> with the 95% credibility region in comparison to the individual EOSs based on MBPT calculations of neutron matter. The ensemble is based on 100,000 EOSs generated using the EDF, Eqs. (<ref>) and (<ref>), from the average ⟨ x_j ⟩ and the covariance matrix C_jk based on the individual neutron matter MBPT EOSs. The agreement between the band and the individual lines in Fig. <ref> indicates that the EDF EOS ensemble employed in this work can generalize chiral EFT results within their uncertainties. Moreover, we compare the EDF EOS ensemble to the unitary gas constraint <cit.> and observe in Fig. <ref> that it is nicely fulfilled by our EOSs.
§.§ Bayesian modelling
We incorporate the astrophysics constraints on the EOS by applying Bayes theorem, from which the posterior distribution results from the combination of the prior and likelihood,
P(a⃗| D) = P(D|a⃗)P(a⃗)/∫ da⃗ P(D|a⃗)P(a⃗) .
Here, P(a⃗) represents the EOS prior given by the EDF parameter space obtained from the neutron matter calculations and symmetric nuclear matter properties, and D stands for the astrophysical data, so that P(D|a⃗) is the likelihood or conditional probability of obtaining D for a given EDF with parameter set a⃗.
In our study, we include the astrophysical observations of GW170817 and NICER
to constrain the EOS at higher densities.
For the NICER mass-radius constraints for PSR J0030+0451 and PSR J0740+6620 we consider separately either the Amsterdam analysis of Riley et al. <cit.> or the Illinois/Maryland analysis of Miller et al. <cit.>. The heaviest neutron star mass of 2.08 ± 0.07 <cit.> is thus directly implemented through the NICER M-R information of PSR J0740+6620. Folding in the NICER constraints based on our prior leads to the likelihood for the EDF parameters <cit.>
P(NICER|a⃗) = ∫ dM dR P(M,R)
×δ(M-M(a⃗)) δ(R-R(a⃗)) ,
where M(a⃗) and R(a⃗) denote the M-R relation for a given EDF EOS with parameter set a⃗, and P(M,R) is the M-R posterior distribution for each of the two NICER sources. The integral is carried out by discretizing the M-R space, summing over all bins which are passed by the M(a⃗)-R(a⃗) relation, and weighting those bins with the NICER posterior for each of the sources successively.
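Schematically, the δ-functions reduce this integral to a sum of the gridded posterior over the (M, R) bins crossed by the EOS's mass-radius curve, as in the Python sketch below; the posterior grid and the M-R curve used here are placeholders, not the actual NICER products.

import numpy as np

# Schematic evaluation of P(NICER | a): sum the gridded M-R posterior over the
# bins that the M(a)-R(a) curve passes through (each crossed bin counted once).

def nicer_likelihood(M_curve, R_curve, M_edges, R_edges, posterior_grid):
    like = 0.0
    visited = set()
    for M, R in zip(M_curve, R_curve):
        i = np.searchsorted(M_edges, M) - 1
        j = np.searchsorted(R_edges, R) - 1
        if 0 <= i < posterior_grid.shape[0] and 0 <= j < posterior_grid.shape[1]:
            if (i, j) not in visited:
                like += posterior_grid[i, j]
                visited.add((i, j))
    return like

M_edges = np.linspace(1.0, 2.4, 29)            # mass bins (solar masses)
R_edges = np.linspace(9.0, 15.0, 31)           # radius bins (km)
posterior = np.random.default_rng(1).random((28, 30))
posterior /= posterior.sum()                   # placeholder normalized posterior

M_curve = np.linspace(1.0, 2.1, 200)           # placeholder M(a)-R(a) relation
R_curve = 12.0 + 0.3 * np.sin(M_curve)
print(nicer_likelihood(M_curve, R_curve, M_edges, R_edges, posterior))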
In addition to NICER, we use the tidal deformability information from GW170817 inferred by LIGO/Virgo <cit.>,
P(LIGO|a⃗) = ∫ dM_1 dΛ_1 dM_2 dΛ_2 P(M_1,Λ_1,M_2,Λ_2)
×δ(M_1-M_1(a⃗)) δ(Λ_1-Λ_1(a⃗))
×δ(M_2-M_2(a⃗)) δ(Λ_2-Λ_2(a⃗)) ,
where P(M_1,Λ_1,M_2,Λ_2) is the posterior distribution from LIGO/Virgo. We assume that the NICER and GW170817 analyses are independent of each other, so that when combining both constraints the likelihood is given by
P(D|a⃗) = P(NICER|a⃗) P(LIGO|a⃗) .
Multiplying the combined likelihood by the prior P(a⃗) and normalizing by the integral in the denominator, we obtain the posterior distribution P(a⃗| D) for a given EDF EOS with parameter set a⃗.
§ RESULTS
Next we present our results for the properties of neutron stars and the symmetry energy based on the EOS framework developed in the previous section. This combines the information from neutron matter based on chiral EFT interactions, with empirical properties of symmetric nuclear matter, as well as astrophysical constraints from GW170817 and NICER using a family of EDFs for nucleonic matter. Since matter in neutron stars is very neutron-rich, we have focused more on the propagation of the theoretical uncertainties in our knowledge of neutron matter. An advantage of our EOS framework is that we use the same EDF to construct the crust and core EOS for neutron stars. In the following, we present our results for the neutron star mass and radius, the proton fraction, the speed of sound, and the central density in neutron stars. We also provide results for the symmetry energy and the slope parameter and explore correlations of neutron star radii with the pressure and the speed of sound in neutron stars.
§.§ Mass-radius relation
The mass and radius of neutron stars are obtained by solving the TOV equations for nonrotating stars. Figure <ref> shows the 95% credibility regions for the mass M and radius R generated from the multivariate normal distribution for the EDF EOSs based on an ensemble of ∼ 10^5 EOSs. The top panel shows the prior distribution for the k_F expansion using different values of d=1, 3, 5, 7, and d=∞. The middle and lower panels show the posterior distributions including astrophysics information from GW170817 and the NICER analysis of Riley et al. <cit.> or the NICER analysis of Miller et al. <cit.>, respectively. Our results show that the posterior distributions obtained from the two different NICER analyses are very similar once the nuclear physics information is encoded in the EOS framework.
Regarding the different EDF choices, we find that the d=3 distribution is similar to the cases of d=5, 7, and d=∞. However, large d, and in particular d=∞, allows the speed of sound to become acausal, c_s^2 > 1 (in units with the speed of light c=1), as the density increases, which is not the case in either neutron or symmetric matter for d=3 by construction. In addition, as d=1 makes the interaction energy density rapidly behave like n^5/3, the EOS becomes soft at rather low densities compared to the larger d values. As a result, the 95% credibility regions for mass and radius only extend slightly above 2. Therefore, in the following, we will show results only for the EDF EOSs with d=3. Before doing so, we also list the radius ranges of typical 1.4 and 2 neutron stars to show the rather minor sensitivity to the choice of d (see Table <ref>).
In Table <ref> we give the prior and posterior ranges for the radius R_1.4 of a 1.4 neutron star at 95% (± 2σ) and 68% (± 1σ) credibility, as well as the most likely radius, for the EDF EOS ensembles with the k_F expansion and different d values. For d=3, the 95% credibility prior range is R_1.4 = (9.87-13.19) km. Including the astrophysics information from GW170817 and the NICER analysis of Riley et al. <cit.> gives the 95% credibility posterior range R_1.4=(11.57-13.17) km, while with the Miller et al. <cit.> analysis it is R_1.4=(11.65-13.23) km, or the combined range R_1.4=(11.6-13.2) km. Both NICER analyses thus give very similar posterior ranges, with the result based on Miller et al. shifted to slightly larger radii. Overall, the radius range decreases by over 50%, from 3.3 km for the prior to 1.6 km for the combined posterior, mainly by disfavoring the smaller radii in the prior range. Moreover, in the prior distribution for d=3, 72% of the EOSs have a maximum mass of neutron stars greater than 2.0, while for the posterior distribution, 97% (98%) of the EOSs have a maximum mass above 2.0 using
the NICER analysis of Riley et al. <cit.> (Miller et al. <cit.>).
In Fig. <ref>, we show the color-coded prior and posterior distributions for the case of d=3. In both posterior distributions, the most probable radii for neutron stars between 1.0 and 1.8 vary only within 0.3 km. Moreover, the mass and radius distribution for M>2.0 is very similar between the prior and the two posteriors, because the astrophysics information mainly removes EOSs that give low maximum masses and small radii.
Table <ref> gives the prior and posterior ranges for the radius R_2.0 of a 2.0 neutron star for the EDF EOSs with d=3. The prior distribution shows a wider radius range because it does not include information on a massive neutron star. Again the two posterior ranges for R_2.0 are very similar and merely shifted by less than 100 m. In the case of d=3, the maximum mass of neutron stars among the ∼ 10^5 EOS ensemble reaches up to 2.23, while it can go up to 2.32 for d=∞.
§.§ Symmetry energy and L parameter
We can also extract the symmetry energy S_v and the slope parameter L from our calculations. This is shown in Fig. <ref> for the individual MBPT calculations for the different chiral NN+3N Hamiltonians from Ref. <cit.> as points, where the dashed (solid) line connects the 500 (450) MeV cutoff N^3LO results. As discussed, our EOS EDF ensembles are built from all the different chiral NN+3N results. The resulting 95% prior and posterior distributions are shown for the EDF EOS ensemble with the k_ F expansion and d=3. We find that the prior range for S_v and L is narrowed to larger values with the astrophysics constraints included. For both NICER analyses the posteriors are again very similar.
The 95% distributions can be parametrized by the mean values and the covariance matrix. For the prior distribution these are given by (mean values in MeV and covariance matrix in MeV^2):
⟨ S_v, L ⟩ = (31.96, 51.70) , Σ_S_v,L = \begin{pmatrix} 0.79 & 6.73 \\ 6.73 & 75.11 \end{pmatrix} ,
while the posterior distributions for the astrophysical inferences are given for the Riley et al. <cit.> and Miller et al. <cit.> analyses, respectively, by
⟨ S_v, L ⟩ = (32.23, 56.33) , Σ_S_v,L^Riley = \begin{pmatrix} 0.66 & 4.56 \\ 4.56 & 40.02 \end{pmatrix} ,
and
⟨ S_v, L ⟩ = (32.31, 57.31) , Σ_S_v,L^Miller = \begin{pmatrix} 0.64 & 4.43 \\ 4.43 & 40.43 \end{pmatrix} .
We observe that the astrophysics constraints move the posterior distributions to larger S_v and L values within the prior range. Moreover, all MBPT calculations for the different chiral NN+3N Hamiltonians are still largely within the posterior range, but some of them only marginally. This indicates that astrophysics prefers EOSs on the stiffer part of the neutron matter EOS band based on chiral EFT. This is consistent with the EOS findings in Ref. <cit.>.
In Fig. <ref> we also show the GP-B results at N^3LO from Ref. <cit.>. Since the GP-B contours are based on the same N^3LO 500 (450) MeV results <cit.> included in our analysis, we can trace the difference between the GP-B contours and the N^3LO points to the evaluation of S_v and L over the correlated range of 95% of the calculated saturation density, while our distributions are at a fixed reference saturation density n_0=0.16 fm^-3. Since the L parameter scales linearly with the density, this mainly affects the L value, while the range of symmetry energies is broadened due to the additional uncertainty in the calculated saturation density.
Finally, we compare our 95% posterior distributions in Fig. <ref> with the recent results from Essick et al. <cit.>, which are however 90% contours. These are based on a different set of chiral NN+3N calculations and astrophysics constraints through a more general Gaussian process extension to high densities. Nevertheless both contours (at the same reference saturation density n_0) are remarkably consistent.
§.§ Proton fraction
The ground state of neutron star matter is obtained by solving the condition for beta equilibrium,
μ_n = μ_p + μ_e ,
where the neutron, proton, and electron chemical potentials μ_n, μ_p, and μ_e are given by
μ_n = ∂ε/∂ n_n , μ_p = ∂ε/∂ n_p , μ_e = ∂ε/∂ n_e ,
with total energy density ε. Since the core is composed of uniform nuclear matter, Eq. (<ref>) is straightforward for a given EDF. For the crust EOS, where matter exists in inhomogeneous form, we employ the liquid drop model (LDM) <cit.> using the same EDF to construct the EOSs of the inner and outer crust.
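For uniform matter, the beta-equilibrium condition can be solved numerically for the proton fraction at each density, as in the Python sketch below; it uses the quadratic asymmetry approximation μ_n − μ_p = 4 S(n)(1−2x) with a simple placeholder form of S(n) and ultra-relativistic electrons, rather than the EDF of this work.

import numpy as np
from scipy.optimize import brentq

# Beta equilibrium mu_n - mu_p = mu_e in the quadratic asymmetry approximation,
# mu_n - mu_p = 4 S(n) (1 - 2x), with ultra-relativistic electrons,
# mu_e = hbar c (3 pi^2 x n)^(1/3). S(n) below is a placeholder expansion.

HBARC = 197.327                                 # MeV fm

def S_of_n(n, Sv=32.0, L=55.0, n0=0.16):
    return Sv + (L / 3.0) * (n - n0) / n0       # leading-order density expansion

def proton_fraction(n):
    f = lambda x: 4.0 * S_of_n(n) * (1.0 - 2.0 * x) \
        - HBARC * (3.0 * np.pi**2 * x * n) ** (1.0 / 3.0)
    return brentq(f, 1e-8, 0.5)                 # root lies in (0, 0.5)

for n in (0.16, 0.32, 0.48):
    print(f"n = {n:.2f} fm^-3, Y_p = {proton_fraction(n):.3f}")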
In the inner crust, the total energy density including the electron contribution is given by <cit.>
ε = u n_i f_i + u d σ(x_i)/r_N + 2π (e x_i n_i r_N)^2 u f_d(u) + (1-u) n_no f_no + ε_e ,
where u is the volume fraction of the nucleus to the Wigner-Seitz cell,
n_i is the baryon number density of the heavy nucleus,
n_no is the density of unbound neutrons,
x_i is the proton fraction in the heavy nucleus,
f_i = f(n_i,x_i) and f_no = f(n_no, x_no=0) are the energy per baryon of the heavy nucleus and of the unbound neutrons, respectively,
σ(x_i) is the surface tension at zero temperature as a function of the proton fraction in heavy nuclei, r_N is the radius of the heavy nucleus, e the electric charge, d the dimension of the nuclear pasta phase, f_d(u) the Coulomb shape function corresponding to the nuclear pasta phase, and ε_e is the electron energy density.
We use the surface tension from <cit.>
σ(x_i) = σ_0 (2^{α+1} + q) / (x_i^{-α} + q + (1-x_i)^{-α}) ,
where σ_0, α, and q are parameters fit to the calculation of the surface tension. In this work, we use σ_0= 1.14 MeV fm^-2, α=3.4, and q=30, but note that the crust properties depend only weakly on the surface tension parameters, and also the impact of the crust on the investigated neutron star properties is minor.
Based on the nuclear virial theorem, the surface energy is approximately twice the Coulomb energy. Thus, we can combine the surface and Coulomb energies into a single energy contribution, which leads to a simpler expression for the energy density <cit.>
ε = u n_i f_i + ((243π/5) e^2 x_i^2 n_i^2 σ^2(x_i))^{1/3} 𝒟(u) + (1-u) n_no f_no + ε_e ,
where 𝒟(u) is a continuous dimension function introduced in Ref. <cit.>. For a given total baryon density n and proton fraction Y_p, and thus electron density n_e = Y_p n, the variables u, n_i, x_i, and n_no are found by minimizing the total energy density, Eq. (<ref>), using the Lagrange multiplier method for the constraints of baryon density and charge neutrality,
n = u n_i + (1-u) n_no and n_e = u n_i x_i .
For the outer crust EOS, which is defined as the region without unbound neutrons, the unbound neutron density n_no is neglected. Using the LDM construction, the transitions from the outer to the inner crust and to the outer core are thus smooth, since the same EDF is employed to construct the entire neutron star EOS.
Figure <ref> shows the average proton fraction at the central density ⟨ Y_p^c ⟩ based on the EDF EOS ensemble for the k_F expansion and d=3, as well as the variance over the average, σ_Y_p^c / ⟨ Y_p^c ⟩. The average proton fraction is dominated by the core, but includes the details of the crust calculation discussed above. We note that in Fig. <ref> (and in Figs. <ref> and <ref>),
the mass and radius domain is restricted to the region where the probability relative to the maximum probability satisfies P(M,R)/P_max ≥ 10^-2 (as in Fig. <ref>). As expected, the proton fraction increases as the mass increases, and for a given mass, it increases with radius as the EOS becomes stiffer. Our EOS ensemble assumes for the proton fraction that matter is nucleonic, which may not be valid for massive stars. However, for typical 1.4 neutron stars, this may not be such a large extrapolation.
In addition, we plot in Fig. <ref> the threshold Y_p = 1/9 for the direct Urca process, which leads to fast-cooling neutron stars <cit.>. We find that typical neutron stars around 1.4 do not exceed this threshold for radii around 12 km; it is only exceeded in our largest-radius configurations. However, based on our results, we expect that massive neutron stars with M>2.1 would cool via the direct Urca process.
Figure <ref> shows the total proton fraction Y_p^tot of the maximum-mass star versus the maximum mass. The total proton fraction increases along a band as the maximum mass increases, due to the stiffer EOS. Figure <ref> shows results for four different Q values of symmetric nuclear matter, keeping in mind that negative Q values are favored by nuclear masses, ab initio calculations, and astrophysics <cit.>.
With increasing Q, the total proton fraction for a given mass decreases and also the maximum mass increases, as larger Q stiffens the EOS.
Naturally, the sensitivity to Q is much less pronounced for typical neutron stars.
Figure <ref> shows the proton fraction at the central density Y_p^c versus the radius of a 1.4 star, which exhibits a tight correlation and is only very weakly dependent on Q. Larger radii thus come with a larger proton fraction. Again we see that stars with radii around 12 km, as expected based on most recent EOS astrophysical inferences <cit.>, do not cool via the direct Urca process. However, for larger radii, R_1.4 > 12.6 km (for Q=-300 MeV), even typical neutron stars would be fast coolers.
§.§ Central density and speed of sound
Next, we study the posterior distribution for the central density and the speed of sound in neutron stars. Figure <ref> shows the average central density in units of saturation density, ⟨ n_c/n_0 ⟩, and its variance over the average, σ_n_c / ⟨ n_c ⟩. The average central density increases with increasing mass, while it decreases as the radius increases for a given neutron star mass. This results from stiffer EOSs leading to larger radii. In our EDF EOSs, the maximal central density reaches up to ≈ 7 n_0, which is reached for softer EOSs in the most massive neutron stars with smaller radii.
Figure <ref> shows the speed of sound squared, c_s^2 = ∂ P/∂ε, at the central densities in neutron stars. In our EDFs, the speed of sound increases but remains causal and decreases at high density <cit.>. As we see from Fig. <ref>, the speed of sound increases as the mass increases, so in neutron stars most matter is on the part of the EOS that has an increasing c_s^2 in our ensemble of EOSs. In Fig. <ref>, the red dashed line represents c_s^2=1/3, which shows that even typical 1.4 stars exceed the conformal limit, except when they have radii larger than 13 km (see also the middle panel of Fig. <ref>). Moreover, information on the radii of massive stars with M ≳ 2.0 would inform us about c_s^2 at the central density (see also Fig. <ref>). This could be realized with an improved NICER radius measurement <cit.> of the 2.08 ± 0.07 pulsar PSR J0740+6620 <cit.>.
§.§ Correlations
Finally, we study the correlation of neutron star radii with the pressure and the speed of sound. In Ref. <cit.> it was suggested that the radius of a 1.4 neutron star follows the empirical relation R_1.4 ∼ p_{2n_0}^{1/4}, where p_{2n_0} is the pressure at twice saturation density. In the top panel of Fig. <ref> we show that this correlation is indeed fulfilled in our EDF EOS ensemble within a band. For the radius in km and the pressure in MeV fm^-3, we find R_1.4 = 0.731 + 5.312 P_{2n_0}^{1/4} for the mean line of the correlation shown in Fig. <ref>, with a correlation coefficient r_xy = 0.980. While the details of this correlation depend on the EOS model, this indicates that astrophysical observations of neutron star radii provide constraints on the pressure at twice saturation density.
The middle panel of Fig. <ref> shows the distribution of R_1.4 versus the speed of sound at the central density of neutron stars. Most of the distribution follows a linear trend, but the correlation coefficient r_xy=-0.870 is weaker in this case. We also observe that c_s^2 at the central density exceeds the conformal limit c_s^2 = 1/3 in our EDF EOS ensemble for R_1.4 smaller than 12.8 km. The correlation is even weaker at lower densities when comparing R_1.4 with the L parameter in the bottom panel of Fig. <ref>, which is proportional to the pressure of pure neutron matter at saturation density. This is as expected because the central density of a 1.4 neutron star is ∼ 3 n_0. Nevertheless, there is a general trend that R_1.4 increases as L increases.
Figure <ref> shows the correlation of the radius of a 2.0 neutron star with the speed of sound at the central density. The strong correlation indicates that the radius measurement of massive neutron stars provides constraints for the speed of sound in dense nuclear matter. For the radius in km, we find R_2.0 = 16.493 - 7.846 c_s^2, with a correlation coefficient r_xy=-0.995. Moreover, we find within our EDF EOS ensemble that the speed of sound at the central density of 2.0 stars is always greater than the conformal limit.
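The two fitted correlations quoted above can be evaluated directly; the short Python example below simply plugs illustrative input values into them (radii in km, pressure in MeV fm^-3, c_s^2 in units of c^2).

# Fitted mean correlations quoted above; the input values are illustrative.

def r14_from_pressure(P_2n0):
    """R_1.4 (km) from the pressure at 2 n_0 (MeV fm^-3)."""
    return 0.731 + 5.312 * P_2n0 ** 0.25

def r20_from_cs2(cs2_central):
    """R_2.0 (km) from the speed of sound squared at the central density."""
    return 16.493 - 7.846 * cs2_central

print(r14_from_pressure(20.0))   # ~12.0 km for P(2 n_0) = 20 MeV fm^-3
print(r20_from_cs2(0.55))        # ~12.2 km for c_s^2 = 0.55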
Figure <ref> shows the mass and radius prior when we impose the conformal limit on the speed of sound. The top panel shows the case where the speed of sound continues to increase up to 1/3 and then maintains the conformal limit for all higher densities. The bottom panel is for the case where the speed of sound jumps to 1/3 at n=2n_0 and remains at the conformal limit for all higher densities. In both scenarios, the speed of sound is not larger than the conformal limit at any density. From Fig. <ref>, the prior probability to support 2.0 stars is around 10% or less, which is similar to the findings of Ref. <cit.>. Thus, the conformal limit can be consistent with 2.0 stars, but most of the support of our EDF EOS ensemble exceeds the conformal limit for massive neutron stars. However, when we take the maximum mass limit as the central value of PSR J0740+6620, 2.08, the speed of sound needs to exceed 1/3 in our ensemble, as the maximum mass does not reach 2.08 in our modelling in either case in Fig. <ref>.
§ SUMMARY AND CONCLUSION
We have explored EOS ensembles using new EDFs from Ref. <cit.> that allow for large variations at high densities. The EDF EOS ensembles were constrained by empirical properties of symmetric nuclear matter and by MBPT calculations of neutron matter based on different chiral NN+3N Hamiltonians. Starting from this prior, constraints at high densities were included from observations of GW170817 and NICER, where the heavy neutron star mass constraint is incorporated through PSR J0740+6620. All our results show that both Riley et al. <cit.> and Miller et al. <cit.> NICER analyses lead to very similar posterior constraints for the symmetry energy and neutron star properties when folded into our EOS framework.
Based on our EDF EOS ensembles, we have studied the symmetry energy and the L parameter, as well as the proton fraction, the speed of sound, and the central density in neutron stars. Our 95% posterior credibility ranges for the symmetry energy S_v, the L parameter, and the radius of a 1.4 neutron star R_1.4 are S_v=(30.6-33.9) MeV, L=(43.7-70.0) MeV, and R_1.4=(11.6-13.2) km. Moreover, we have shown that larger and/or heavier neutron stars have a larger proton fraction and are thus more likely to cool rapidly via the direct Urca process.
As can be seen from our results for S_v and L, present astrophysics constraints prefer larger pressures within the prior ranges. To this end, we have also explored correlations of neutron star radii with the pressure and the speed of sound. The radius of 1.4 stars was found to correlate well with the pressure at twice saturation density, and R_2.0 was shown to correlate tightly with the speed of sound at the central density. Therefore, precise measurements of R_1.4 provide key information for density regimes at the limits of chiral EFT calculations, and radii of massive neutron stars will help to constrain the behavior of the speed of sound in dense matter. Finally, by constructing EOS ensembles with an imposed conformal limit on the speed of sound, we found that a maximum mass of neutron stars M_max > 2.1 indicates that the speed of sound needs to exceed the conformal limit.
We thank Sabrina Huth for fruitful discussion. This work was supported by the Max Planck Society, the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 101020842) and by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) (No. 2021R1A2C2094378).
[LattimerLim] J. M. Lattimer and Y. Lim, Astrophys. J. 771, 51 (2013).
[Dris21ARNPS] C. Drischler, J. W. Holt, and C. Wellenhofer, Annu. Rev. Nucl. Part. Sci. 71, 403 (2021).
[Huth21] S. Huth, C. Wellenhofer, and A. Schwenk, Phys. Rev. C 103, 025803 (2021).
[Essick21PRC] R. Essick, P. Landry, A. Schwenk, and I. Tews, Phys. Rev. C 104, 065804 (2021).
[Hebe10nmatt] K. Hebeler and A. Schwenk, Phys. Rev. C 82, 014314 (2010).
[Tews13N3LO] I. Tews, T. Krüger, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 110, 032504 (2013).
[Carb13nm] A. Carbone, A. Polls, and A. Rios, Phys. Rev. C 88, 044302 (2013).
[Hage14ccnm] G. Hagen, T. Papenbrock, A. Ekström, K. Wendt, G. Baardsen, S. Gandolfi, M. Hjorth-Jensen, and C. J. Horowitz, Phys. Rev. C 89, 014319 (2014).
[Lynn16QMC3N] J. E. Lynn, I. Tews, J. Carlson, S. Gandolfi, A. Gezerlis, K. E. Schmidt, and A. Schwenk, Phys. Rev. Lett. 116, 062501 (2016).
[Holt17] J. W. Holt and N. Kaiser, Phys. Rev. C 95, 034326 (2017).
[Dris19MCshort] C. Drischler, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 122, 042501 (2019).
[Jiang20] W. G. Jiang, A. Ekström, C. Forssén, G. Hagen, G. R. Jansen, and T. Papenbrock, Phys. Rev. C 102, 054301 (2020).
[Kell23ANM] J. Keller, K. Hebeler, and A. Schwenk, Phys. Rev. Lett. 130, 072701 (2023).
[Hebe13ApJ] K. Hebeler, J. M. Lattimer, C. J. Pethick, and A. Schwenk, Astrophys. J. 773, 11 (2013).
[Tews18cs] I. Tews, J. Carlson, S. Gandolfi, and S. Reddy, Astrophys. J. 860, 149 (2018).
[Greif19cs] S. K. Greif, G. Raaijmakers, K. Hebeler, A. Schwenk, and A. L. Watts, Mon. Not. Roy. Astron. Soc. 485, 5363 (2019).
[Landry19GP] P. Landry and R. Essick, Phys. Rev. D 99, 084049 (2019).
[Lim18] Y. Lim and J. W. Holt, Phys. Rev. Lett. 121, 062701 (2018).
[Demo10ns] P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature 467, 1081 (2010).
[Anto13ns] J. Antoniadis, P. C. C. Freire, N. Wex, T. M. Tauris, R. S. Lynch, M. H. van Kerkwijk, M. Kramer, C. Bassa, V. S. Dhillon, T. Driebe, et al., Science 340, 1233232 (2013).
[Fonseca21] E. Fonseca, H. T. Cromartie, T. T. Pennucci, P. S. Ray, A. Y. Kirichenko, S. M. Ransom, P. B. Demorest, I. H. Stairs, Z. Arzoumanian, L. Guillemot, et al., Astrophys. J. Lett. 915, L12 (2021).
[Riley21] T. E. Riley, A. L. Watts, P. S. Ray, S. Bogdanov, S. Guillot, S. M. Morsink, A. V. Bilous, Z. Arzoumanian, D. Choudhury, J. S. Deneva, et al., Astrophys. J. Lett. 918, L27 (2021).
[Miller21] M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, W. C. G. Ho, J. M. Lattimer, M. Loewenstein, et al., Astrophys. J. Lett. 918, L28 (2021).
[Riley19] T. E. Riley, A. L. Watts, S. Bogdanov, P. S. Ray, R. M. Ludlam, S. Guillot, Z. Arzoumanian, C. L. Baker, A. V. Bilous, D. Chakrabarty, et al., Astrophys. J. Lett. 887, L21 (2019).
[Miller19] M. C. Miller, F. K. Lamb, A. J. Dittmann, S. Bogdanov, Z. Arzoumanian, K. C. Gendreau, S. Guillot, A. K. Harding, W. C. G. Ho, J. M. Lattimer, et al., Astrophys. J. Lett. 887, L24 (2019).
[LIGO19PRX] B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. X 9, 011001 (2019).
[Tolm39TOV] R. C. Tolman, Phys. Rev. 55, 364 (1939).
[Oppe39TOV] J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939).
[Hebe15ARNPS] K. Hebeler, J. D. Holt, J. Menéndez, and A. Schwenk, Annu. Rev. Nucl. Part. Sci. 65, 457 (2015).
[Geze13QMCchi] A. Gezerlis, I. Tews, E. Epelbaum, S. Gandolfi, K. Hebeler, A. Nogga, and A. Schwenk, Phys. Rev. Lett. 111, 032501 (2013).
[Dris20PRL] C. Drischler, R. J. Furnstahl, J. A. Melendez, and D. R. Phillips, Phys. Rev. Lett. 125, 202702 (2020).
[Dris16asym] C. Drischler, K. Hebeler, and A. Schwenk, Phys. Rev. C 93, 054314 (2016).
[Somasundaram21] R. Somasundaram, C. Drischler, I. Tews, and J. Margueron, Phys. Rev. C 103, 045803 (2021).
[Tews17] I. Tews, J. M. Lattimer, A. Ohnishi, and E. E. Kolomeitsev, Astrophys. J. 848, 105 (2017).
[Hebe11fits] K. Hebeler, S. K. Bogner, R. J. Furnstahl, A. Nogga, and A. Schwenk, Phys. Rev. C 83, 031301(R) (2011).
[Carl15sim] B. D. Carlsson, A. Ekström, C. Forssén, D. F. Strömberg, G. R. Jansen, O. Lilja, M. Lindby, B. A. Mattsson, and K. A. Wendt, Phys. Rev. X 6, 011019 (2016).
[Ente17EMn4lo] D. R. Entem, R. Machleidt, and Y. Nosyk, Phys. Rev. C 96, 024004 (2017).
[Dutra12PRC] M. Dutra, J. S. Sa Martins, A. Delfino, J. R. Stone, and P. D. Stevenson, Phys. Rev. C 85, 035201 (2012).
[Lim19] Y. Lim and J. W. Holt, Eur. Phys. J. A 55, 209 (2019).
[Lim2020n] Y. Lim, A. Bhattacharya, J. W. Holt, and D. Pati, Phys. Rev. C 104, L032802 (2021).
[Lim2022f] Y. Lim and J. W. Holt, Galaxies 10, 99 (2022).
[Raaijmakers21] G. Raaijmakers, S. K. Greif, K. Hebeler, T. Hinderer, S. Nissanke, A. Schwenk, T. E. Riley, A. L. Watts, J. M. Lattimer, and W. C. G. Ho, Astrophys. J. Lett. 918, L29 (2021).
[Lim17] Y. Lim and J. W. Holt, Phys. Rev. C 95, 065805 (2017).
[RPL1983] D. G. Ravenhall, C. J. Pethick, and J. M. Lattimer, Nucl. Phys. A 407, 571 (1983).
[LSEOS] J. M. Lattimer and F. D. Swesty, Nucl. Phys. A 535, 331 (1991).
[Lattimer91] J. M. Lattimer, C. J. Pethick, M. Prakash, and P. Haensel, Phys. Rev. Lett. 66, 2701 (1991).
[Capano20] C. D. Capano, I. Tews, S. M. Brown, B. Margalit, S. De, S. Kumar, D. A. Brown, B. Krishnan, and S. Reddy, Nature Astronomy 4, 625 (2020).
[Al-Mamun21] M. Al-Mamun, A. W. Steiner, J. Nättilä, J. Lange, R. O'Shaughnessy, I. Tews, S. Gandolfi, C. Heinke, and S. Han, Phys. Rev. Lett. 126, 061101 (2021).
[Essick21] R. Essick, I. Tews, P. Landry, and A. Schwenk, Phys. Rev. Lett. 127, 192701 (2021).
[Huth22] S. Huth, P. T. H. Pang, I. Tews, T. Dietrich, A. Le Févre, A. Schwenk, W. Trautmann, K. Agarwal, M. Bulla, M. W. Coughlin, and C. Van Den Broeck, Nature 606, 276 (2022).
[Annala22] E. Annala, T. Gorda, E. Katerini, A. Kurkela, J. Nättilä, V. Paschalidis, and A. Vuorinen, Phys. Rev. X 12, 011058 (2022).
[Altiparmak22] S. Altiparmak, C. Ecker, and L. Rezzolla, Astrophys. J. Lett. 939, L34 (2022).
[Gorda22] T. Gorda, O. Komoltsev, and A. Kurkela, arXiv:2204.11877 (2022), http://arxiv.org/abs/2204.11877.
[Lattimer06] J. M. Lattimer and M. Prakash, Phys. Rept. 442, 109 (2007).
[Bedaque15] P. Bedaque and A. W. Steiner, Phys. Rev. Lett. 114, 031103 (2015).
|
http://arxiv.org/abs/2307.05719v1 | 20230710131845 | Systemic risk indicator based on implied and realized volatility | [
"Paweł Sakowski",
"Rafał Sieradzki",
"Robert Ślepaczuk"
] | q-fin.RM | [
"q-fin.RM"
] |
[WNEUW] Paweł Sakowski [1], [email protected]
[NYU] Rafał Sieradzki [2], [email protected]
[WNEUW] Robert Ślepaczuk [cor1] [3, 4, 5], [email protected]
[WNEUW]Quantitative Finance Research Group, Department of Quantitative Finance, University of Warsaw, Faculty of Economic Sciences, ul. Dluga 44-50, 00-241, Warsaw, Poland
[NYU]New York University Stern School of Business; Cracow University of Economics
[cor1]Corresponding author
[1]ORCID: https://orcid.org/0000-0003-3384-3795
[2]ORCID: https://orcid.org/0000-0002-4702-7716
[3]ORCID: https://orcid.org/0000-0001-5227-2014
[4]This document is the result of the research project funded by the IDUB program BOB-IDUB-622-187/2022 at the University of Warsaw
[5]We want to thank Linda Allen from Baruch College for providing us with CATFIN data, and Viral Acharya and Rob Capellini from the NYU Stern School of Business and NYU Stern Volatility and Risk Institute for sending us the sRisk data on a single constituent level. We have benefited from discussions with Tobias Adrian, Markus Brunnenmeier, Ruggero Japelli, Yi Cao, Zexun Chen. We would also like to thank participants of the 49th Eastern Economic Conference in New York (February 2023), the 33rd Quantitative Finance Research Group and Data Science Lab Research Seminar at the University of Warsaw (April 2023), the MSBE research seminar at the University of Edinburgh, School of Business (April 2023), the 43rd International Symposium on Forecasting at the University of Virginia Darden School of Business, Charlottesville, VA, USA (June 2023), and the 16th edition of the International Risk Management Conference Florence, Italy (July 2023) for their inspiring comments and insightful discussions.
We propose a new measure of systemic risk to analyze the impact of the major financial market turmoils in the stock markets from 2000 to 2023 in the USA, Europe, Brazil, and Japan. Our Implied Volatility Realized Volatility Systemic Risk Indicator (IVRVSRI) shows that the reaction of stock markets varies across different geographical locations and that the persistence of the shocks depends on the historical volatility and the long-term average volatility level in a given market. The methodology applied is based on the logic that a simpler approach is preferable to a more complex one if it leads to the same results. Such an approach significantly limits model risk and substantially decreases the computational burden. Robustness checks show that IVRVSRI is a precise and valid measure of the current systemic risk in the stock markets. Moreover, it can be used for other types of assets and high-frequency data. The forecasting ability of various SRIs (including CATFIN, CISS, IVRVSRI, SRISK, and the Cleveland FED indicator) with regard to weekly returns of the S&P 500 index is evaluated based on simple linear, quasi-quantile, and quantile regressions. We show that IVRVSRI has the strongest predictive power among them.
systemic risk, implied volatility, realized volatility, volatility indices, equity index options, market volatility. JEL: G14, G15, C61, C22
§ INTRODUCTION
The magnitude and the speed of the contagion of financial market turmoils are the main points of interest in numerous studies. This topic is of special importance because the reactions of the financial markets to any existing or forthcoming crisis are fast, and it is hard to identify them on time based on real economic measures, as they are announced with a delay. The main aim of this paper is to analyze and compare the systemic impact of the major financial market turmoils in the equity markets in the USA, Europe, Brazil, and Japan from 2000 to 2023. For this purpose, we construct an indicator based on implied and realized volatility measures (IV and RV, respectively) for each market, which are easily available to all market participants. Moreover, we construct a general indicator at the worldwide level. Our partial motivation to undertake this study is to show that such Systemic Risk Indicators can be constructed from simple metrics, and there is no need to use any sophisticated risk models for this purpose (<cit.>). In other words, we want to show that the model risk can be significantly reduced while the results are similar to the ones obtained by the use of much more complex tools. We set four research hypotheses:
* RH1: It is possible to construct a robust Systemic Risk Indicator based on the well-known concepts of realized and implied volatility measures.
* RH2: The indication of the proposed Systemic Risk Indicator depends on the geographical location of a given equity market.
* RH3: The robustness of the proposed Systemic Risk Indicator depends on various parameters selected: the memory parameter for RV, time to expiration for IV, the percentile selected for the risk map, the length of the history selected for the calculation of percentile in case of risk map.
* RH4: IVRVSRI has the highest forecasting ability for the S&P 500 index among the benchmark SRIs, especially in moments of systemic risk.
The robustness of the proposed systemic risk measure is particularly important, as in many studies (e.g. <cit.>, <cit.>) researchers do not consider the extent to which the initial parameters of the model affect the final results, especially those regarding the speed of reaction to unexpected market turmoils. We check the sensitivity of the proposed Systemic Risk Indicator to changes in the selected parameters: the memory parameter for the realized volatility (RV), time to expiration for the implied volatility (IV), the percentile selected for the risk map, and the length of the history selected for the calculation of the percentile in case of the risk map.
Systemic risk refers to the risk of the collapse of an entire financial system, as opposed to the risk associated with any individual entity, which is a credit default risk. One of the main distinguishing features of systemic risk is that an idiosyncratic event affecting one or a group of market entities is exacerbated by interlinkages and interdependencies in a system, leading to a domino effect that can potentially bring down the entire system. The fragility of the system as a whole is being built over time, and it “only” materializes in times of crisis. One reason is a comparable business model among market participants that leads to accumulating similar assets on a balance sheet and comparable investment strategies that resemble herd behavior. In fact, those entities, although legally separated, can be considered as one large entity from a systemic point of view.
The recent collapses of Silicon Valley Bank and Signature Bank, and the abrupt nature of the Covid-19 pandemic, suggest that timeliness is one of the key characteristics of a systemic risk indicator. One may claim that relying on measures primarily based on market variables, basically the prices of financial instruments, may be potentially misleading as they may generate false positive signals. On the other hand, measures that are based primarily on accounting data are slow in reacting to potential problems in the financial system, as they are available with a delay. Therefore, we argue that it is better to get a signal of a potential problem that sometimes may be false than to get it when the crisis has already started. Most of the research recognizes that problem and tries to combine market and accounting data (see the literature review for details).
In general, the function of the market-based variables is to detect potential crises in a timely manner, and the accounting-based measures serve to identify systemically important institutions, whose collapse may create spill-over effects in the system. This approach may be applied by the market regulators, who can monitor more closely entities that are systemically important and try to introduce regulatory solutions to lower their impact on the system. Regulators also have access to more data on individual institutions and collect them earlier than market participants. Although we agree that this approach is a good way to identify the “too-big-to-fail” institutions, we argue that one of its drawbacks is that combining both types of data leads to higher model risk and is computationally more intensive than using only market-based variables. Moreover, it seems to be hard to detect the interconnectedness in the market due to its complex and dynamic nature, and therefore classifying “too-interconnected-to-fail” institutions is a difficult task[Some systemic risk measures, like ΔCoVaR, try to capture the potential for the spreading of financial distress across institutions by gauging spill-over by observing the tail comovement using the VaR approach]. At the same time, the aforementioned collapses of the SVB and Signature Bank show that some risks built up in the system that we were not aware of and that were not taken into account by the existing models[Those banks had bonds on their balance sheets which were “held to maturity”. When the customers started to withdraw their deposits, banks had to sell those bonds in the market at much lower prices than they were reported on the balance sheet, leading to huge losses.].
In this work, we focus on one part of the systemic risks, which is the timely identification of the potential crisis by market participants and not only by the regulators. We also claim that a “wide market” may have superior knowledge about systemic risk, and to some extent, their coordinated actions can trigger a systemic event[There is anecdotal evidence that some depositors withdrew all their funds from the SVB two days before its collapse. It seems, that presumably the involuntarily coordinated action of a group of investors to take out their deposits from that bank in a short stretch of time was the trigger of the bank’s collapse. At the same time, one may assume that those investors who were very convinced that the SVB was going to have serious problems were short-selling its stocks or going long deep out-of-the-money options on its stocks, further exacerbating the problems of the bank and finally leading to its collapse.].
The structure of this paper is as follows. The second section presents a literature review. The third section describes Data and Methodology. The fourth section presents the Results, and the fifth one includes Conclusions.
§ LITERATURE REVIEW AND CLASSIFICATION THE SELECTED SYSTEMIC RISK INDICATORS
§.§ Literature review
The major approach in the literature to measure systemic risk is based either on market data or a mix of market and balance sheet data. Those combined risk indicators use i.a. such metrics as VaR and CoVaR. The results obtained for one country, market segment, or economic sector are aggregated to get a general measure of systemic risk. In general, various methods yield similar results as in <cit.>, <cit.>, <cit.>, <cit.> or <cit.>.
One of the first attempts focusing on systemic risk was <cit.> who reminded the last resort lending function of the central bank, which has digressed from its overall strategy of monetary control to also undertake a tactical rescue of individual banks and segments of the financial market. <cit.> developed a broad concept of systemic risk, the basic economic concept for the understanding of financial crises. They claimed that any such concept must integrate systemic events in banking and financial markets as well as in the related payment and settlement systems. At the heart of systemic risk are contagion effects, and various forms of external effects. The concept also includes simultaneous financial instabilities following aggregate shocks. They surveyed the quantitative literature on systemic risk, which was evolving swiftly in the last couple of years.
<cit.> point out that systemic risk is a multifaceted problem in an ever-changing financial environment, any single definition is likely to fall short and may create a false sense of security as financial markets evolve in ways that escape the scrutiny of any one-dimensional perspective. They provide an overview of over 30 indicators of systemic risk in the literature, chosen to address key issues in measuring systemic risk and its management. The measures are grouped into six various categories including: macroeconomic, granular foundations and network, forward-looking risk, stress-test, cross-sectional, illiquidity and finally insolvency measures. They analyze them from the supervisory, research, and data perspectives, and present concise definitions of each risk measure. At the same time, they point out that the system to be evaluated is highly complex, and the metrics considered were largely untested outside the GFC crisis. Indeed, some of the conceptual frameworks that they reviewed were still in their infancy and had yet to be applied.
<cit.> agreed that governments and international organizations worried increasingly about systemic risk, under which the world's financial system could have collapsed like a row of dominoes. There is widespread confusion, though, about the causes and, to some extent, even the definition of systemic risk, and uncertainty about how to control it. His paper offers a conceptual framework for examining what risks are truly “systemic,” what causes those risks, and how, if at all, those risks should be regulated. Scholars historically have tended to think of systemic risk primarily in terms of financial institutions such as banks. However, with the growth of disintermediation, in which companies can access capital-market funding without going through banks or other intermediary institutions[In the US, more than 50% of funding of non-financial corporations comes from equity and bond issuance.], greater focus should be devoted to financial markets and the relationship between markets and institutions. This perspective reveals that systemic risk results from a type of tragedy of the commons in which market participants lack sufficient incentives, in the absence of regulation, to limit risk-taking in order to reduce the systemic danger to others.
In this light, <cit.> models systemic risk as the endogenously chosen correlation of returns on assets held by banks. The limited liability of banks and the presence of a negative externality of one bank's failure on the health of other banks give rise to a systemic risk-shifting incentive where all banks undertake correlated investments, thereby increasing economy-wide aggregate risk. Regulatory mechanisms such as bank closure policy and capital adequacy requirements that are commonly based only on a bank's own risk fail to mitigate aggregate risk-shifting incentives, and can, in fact, accentuate systemic risk. Prudential regulation is shown to operate at a collective level, regulating each bank as a function of both its joint (correlated) risk with other banks as well as its individual (bank-specific) risk.
<cit.> introduce SRISK to measure the systemic risk contribution of a financial firm. SRISK captures the capital shortfall of a firm conditional on a severe market decline and is a function of its size, leverage and risk. They use the measure to study the top financial institutions in the recent financial crisis. SRISK delivers useful rankings of systemic institutions at various stages of the crisis and identifies Fannie Mae, Freddie Mac, Morgan Stanley, Bear Stearns, and Lehman Brothers as the top contributors as early as 2005-Q1. Moreover, aggregate SRISK provides early warning signals of distress in indicators of real activity.
The ΔCoVaR method proposed by <cit.> estimates the systemic risk of a financial system conditional on institutions being in distress based on publicly traded financial institutions. They define an institution's contribution to systemic risk as the difference between ΔCoVaR conditional on the institution being in distress and ΔCoVaR in the median state of the institution. They quantify the extent to which characteristics such as leverage, size, and maturity mismatch predict systemic risk contribution.
<cit.> examine the aftermath of the postwar financial crises in advanced countries through the construction of a semiannual series of financial distress in 24 OECD countries for the period 1967–2012. The series is based on assessments of the health of countries' financial systems and classifies financial distress on a relatively fine scale. They find that the average decline in output following a financial crisis is statistically significant and persistent, but only moderate in size. More importantly, the average decline is sensitive to the specification and sample, and the aftermath of the crises is highly variable across major episodes. Following this research, <cit.>, using a crisis severity variable constructed by <cit.>, estimated a Tobit model for 23 developed economies. They developed a probability of crisis measure and a SRISK capacity measure from the Tobit estimates. These indicators reveal an important global externality whereby the risk of a crisis in one country is strongly influenced by the undercapitalization of the rest of the world.
<cit.> present an economic model of systemic risk in which undercapitalization of the financial sector as a whole is assumed to harm the real economy, leading to a systemic risk externality. Each financial institution's contribution to systemic risk can be measured as its systemic expected shortfall (SES), that is, its propensity to be undercapitalized when the system as a whole is undercapitalized.
The research by <cit.> addresses the measurement of the systemic risk contribution (SRC) of country-level stock markets to understand the rise of extreme risks worldwide to prevent potential financial crises. The proposed measure of SRC is based on quantifying tail risk propagation's domino effect using CoVaR and the cascading failure network model. While CoVaR captures the tail dependency structure among stock markets, the cascading failure network model captures the nonlinear dynamic characteristics of tail risk contagion to mimic tail risk propagation. The validity test demonstrated that this method outperforms seven classic methods as it helps early warning of global financial crises and correlates to many systemic risk determinants, e.g., market liquidity, leverage, inflation. The results highlight that considering tail risk contagion's dynamic characteristics helps avoid underestimating SRC and supplement a “cascading impact” perspective to improve financial crisis prevention.
The micro-level methods have been criticized by <cit.>. They base their research on the assumption that financial intermediaries including commercial banks, savings banks, investment banks, broker/dealers, insurance companies, mutual funds, etc. are special because they are fundamental to the operation of the economy. The specialness of banks is reflected in the economic damage that results when financial firms fail to operate properly. They proposed a new measure, called CATFIN, to forecast the likelihood of systemic risk-taking in the banking system as a whole. It captures the tail risk of the overall banking market using VaR methodology at a 1% level with monthly data. This early warning system should signal whether aggressive aggregate systemic risk-taking in the financial sector presages future macroeconomic declines. <cit.> showed that among 19 different risk measures, CATFIN performs the best in predicting macro-level shocks.
<cit.> introduced TALIS (TrAffic LIght System for Systemic Stress), which provides a comprehensive color-based classification for grouping companies according to both the stress reaction level of the system when the company is in distress and the company's stress level. This indicator can integrate multiple signals from the interaction between different risk metrics. Starting from specific risk indicators, companies are classified by combining two loss functions, one for the system and one for each company, evaluated over time and as a cross-section. An aggregated index is also obtained from the color-based classification of companies.
<cit.> compare different approaches to Value-at-Risk measurement based on parametric and non-parametric approaches for different portfolios of assets, including cryptocurrencies. They checked if the analyzed models accurately estimate the Value-at-Risk measure, especially in the case of assets with various returns distribution characteristics (eg. low vs. high volatility, high vs. moderate skewness). <cit.> checked which of the VaR models should be used depending on the state of the market volatility. They showed that GARCH(1,1) with standardized student's t-distribution is least affected by changes in volatility among analysed models. <cit.> point out that under the conditions of sudden volatility increase, such as during the global economic crisis caused by the Covid-19 pandemic, no classical VaR model worked properly even for the group of the largest market indices. In general, there is an agreement between market risk researchers that an ideal model for VaR estimation does not exist, and different models' performance strongly depends on current economic circumstances.
Some spectacular crash events, including the FTX collapse in November 2022, followed by a dramatic slump in prices of most of the cryptocurrencies triggered a question about the resiliency of this financial market segment to shocks and the potential spillover effect. In one of the latest research, <cit.> studied systemic risk in the cryptocurrency market based on the FTX collapse. Using the CATFIN measure to proxy for the systemic risk they claimed that the FTX crisis did not engender higher systemic and liquidity risks in this market compared to previous negative shocks.
Various rigorous models of bank and payment system contagion have now been developed, although a general theoretical paradigm is still missing. Direct econometric tests of bank contagion effects seem to be mainly limited to the United States. Empirical studies of the systemic risk in foreign exchange and security settlement systems appear to be non-existent. Moreover, the literature surveyed reflects the general difficulty to develop empirical tests that can make a clear distinction between contagion in the proper sense and joint crises caused by common shocks, rational revisions of depositor or investor expectations when information is asymmetric (“information-based” contagion) and “pure” contagion as well as between “efficient” and “inefficient” systemic events.
Bearing in mind the huge dynamics of the recent shocks (e.g. the Covid-19 pandemic and the FTX collapse), we claim that the monthly data frequency (like in the case of CATFIN) is not enough to create a valid early warning indicator. At the same time, we claim that the existing indicators of systemic risk are overly sophisticated and some of them require huge computing power or access to paid datasets. Therefore, there is a need to create a precise and simple indicator of systemic risk based on publicly available data with relatively high frequency. In this study, we rely on macro-level data which is easily accessible to the general public to construct a robust systemic risk indicator. We show that our simple metrics can yield similar (or better) results than complex methods and can be computed at a relatively high frequency using publicly available data, which is a great advantage.
§.§ A comparison of the selected systemic risk indicators
Following the Cleveland Fed's commentary on the performance of their systemic risk indicator (Craig 2020), we agree that a good financial-stress indicator (we may also say a good systemic risk indicator) is reliable, timely, straightforward, valid, and ongoing. Most of the indicators miss some of those features. For example, indicators based on balance-sheet data are neither timely nor ongoing, as financial data is provided on a monthly basis to the regulators and is publicly released on a quarterly basis and with a delay. This means that those indicators can be computed by market regulators with a higher frequency than by the wide public, which is a disadvantage for market participants. Moreover, some of the indicators are complex and thus they involve a significant model risk. In other words, if two indicators perform the same, the better one is the simpler one. In Table <ref> we provide an overview of the selected systemic risk indicators.
§ METHODOLOGY
Our methodology combines the information hidden in the latent volatility process using the concepts of implied and realized volatility. We do this by utilizing the methodology for volatility indices based on <cit.> and <cit.> and the concept of realized volatility for various data frequencies introduced by <cit.> and <cit.>.
Similarly to <cit.>, we construct a dynamic historical ranking evaluating the systemic risk day by day, both on the global and the country level. More importantly, our methodology can be easily adapted for use with high-frequency data, so that the systemic risk indicator can monitor risk in real time.
The general formula of the IVRVSRI consists of two component indices, which are based on implied (IVSRI) and realized (RVSRI) volatility.
§.§ Implied volatility - Volatility indices
One of the first and most widely known volatility indices is the VIX index, introduced by CBOE in 2003 and recalculated backward to 1987. Its formula, based on the seminal paper of <cit.>, was described in detail in <cit.> and can be summarized by the following equation:
σ^2 = 2/T∑_iΔ K_i/K_i^2 e^RT Q(K_i) - 1/T[F/K_0-1]^2
where:
σ = VIX/100,
T - time to expiration,
K_i - strike price of i-th out-of-the-money option; a call if K_i > K_0 and a put if K_i < K_0; both put and call if K_i = K_0,
R - risk-free interest rate to expiration,
F - forward index level derived from index option prices,
K_0 - first strike below the forward index level (F).
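To make the calculation concrete, the snippet below is a simplified, illustrative implementation of the model-free variance formula above, assuming that an array of strikes, the corresponding out-of-the-money option mid-prices Q(K_i), the forward level F, the risk-free rate, and the time to expiration are already available; it omits the near-/next-term interpolation step used in the official VIX methodology.

```python
import numpy as np

def implied_variance(strikes, otm_prices, forward, r, T):
    """Model-free implied variance sigma^2 from the equation above (simplified sketch)."""
    K = np.asarray(strikes, dtype=float)      # increasing strikes K_i of OTM options
    Q = np.asarray(otm_prices, dtype=float)   # mid-prices Q(K_i)
    dK = np.empty_like(K)                     # Delta K_i: half-distance between neighbouring strikes
    dK[1:-1] = (K[2:] - K[:-2]) / 2.0
    dK[0], dK[-1] = K[1] - K[0], K[-1] - K[-2]
    K0 = K[K <= forward].max()                # first strike at or below the forward F
    return (2.0 / T) * np.sum(dK / K**2 * np.exp(r * T) * Q) \
           - (1.0 / T) * (forward / K0 - 1.0) ** 2

# A VIX-style index level would then be 100 * sqrt(implied_variance(...)) with T = 30/365.
```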
The formulas for other volatility indices used in this study (VSTOXX, VNKY, and VXEWZ) are based on the similar methodology and their details can be found in <cit.>, <cit.>, and <cit.>.
§.§ Realized volatility measure
In the case of historical volatility measure, we use the realized volatility concept (<cit.>). It is based on summation of log returns during a given period of time and annualized in order to combine it later with IV. The formula used in this paper is as follows:
RV_t,i^1M = √(252/21∑_k=0^20 r_t-k,i^2) = RVSRI_i, r_t,i = log(P_t,i/P_t-1,i)
where RV_t,i^1M is the realized volatility for i-th equity index on day t with the memory of 1 calendar month (i.e. 21 trading days), while P_t,i is the price of i-th equity index on day t.
The memory of the realized volatility estimator was set to 21 days (trading days) in order to make it comparable with 30 calendar days in case of VIX.
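As an illustration, the 21-day annualized realized volatility defined above can be computed from a series of daily index levels as follows (a minimal sketch using pandas; the layout of the price data is an assumption).

```python
import numpy as np
import pandas as pd

def realized_vol_1m(prices: pd.Series, window: int = 21) -> pd.Series:
    """Annualized 1-month realized volatility RV_t^1M from daily closing levels."""
    log_ret = np.log(prices / prices.shift(1))                    # r_t = log(P_t / P_t-1)
    return np.sqrt(252.0 / window * (log_ret ** 2).rolling(window).sum())

# Example: rv = px.apply(realized_vol_1m)   # px: DataFrame with one column per equity index
```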
§.§ IVRVSRI - Implied Volatility Realized Volatility Systemic Risk Indicator
Our methodology has significant advantages compared to other approaches presented in the literature (<cit.>, <cit.>). First, IVRVSRI provides a systemic risk indication based on two simple risk measures (IV and RV) that are well grounded among market participants. Second, we analyze various financial market turmoils from 2000 until 2023, revealing the characteristics and severity of the major market crises during the last 23 years. Third, we construct a dynamic ranking (day by day) showing the current level of stress on the global level and additionally separately for the USA, Europe, Brazil, and Japan. Finally, our methodology can be easily extended by using high-frequency price data for the selected equity indices and the same frequency for volatility indices to track systemic risk in real time.
In order to accomplish this task we construct two component systemic risk indicators based on implied (IVSRI) and realized volatility measures (RVSRI) for each country separately and additionally on the aggregated level for all countries.
§.§.§ Implied Volatility SRI
IVSRI is based on the separate volatility index for each country, or group of countries, and its share in the total market capitalization. The formula for IVSRI is as follows:
IVSRI = ∑_i=1^N w_i * IV_i
where N is the number of analyzed countries, IV_i denotes the implied volatility index for the i-th country, and w_i is the weight of the given country in SRI, calculated according to:
w_i= MC_i/∑_j=1^NMC_j
where MC_i is the market capitalization of the given country.
Based on Table <ref> and Equation <ref>, we construct weights vector w = {77.7%, 8.1%, 12%, 2.2%} which will be used in calculations of our risk metrics.
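A minimal sketch of the weighting scheme is given below; the market capitalizations are assumed to be stored in a pandas Series whose index matches the columns of the implied volatility data (e.g. VIX, VSTOXX, VXEWZ, VNKY levels).

```python
import pandas as pd

def cap_weights(market_cap: pd.Series) -> pd.Series:
    """w_i = MC_i / sum_j MC_j."""
    return market_cap / market_cap.sum()

def ivsri(iv_indices: pd.DataFrame, w: pd.Series) -> pd.Series:
    """IVSRI_t = sum_i w_i * IV_{i,t}; one column of iv_indices per country/region."""
    return iv_indices[w.index].mul(w, axis=1).sum(axis=1)

# The RVSRI is obtained analogously by replacing the IV levels with the RV series.
```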
§.§.§ Realized Volatility SRI
RVSRI is based on a similar concept to the IVSRI (section <ref>):
RVSRI = ∑_i=1^N w_i * RV_i
where RV_i the realized volatility index for the i-th country.
§.§.§ IVRVSRI - Implied Volatility Realized Volatility Systemic Risk Indicator on the country level
IVRVSRI can be calculated on the country level (IVRVSRI_i) as the weighted sum of IV_i and RV_i measures for the given country and on the global level:
IVRVSRI_i = w_IV * IVSRI_i + w_RV * RVSRI_i
§.§.§ IVRVSRI - Implied Volatility Realized Volatility Systemic Risk Indicator on the global level (IVRVSRI)
IVRVSRI can be calculated in both ways, based on IVSRI and RVSRI on the global level (formulas <ref> and <ref>):
IVRVSRI = w_IV * IVSRI + w_RV * RVSRI, w_IV + w_RV = 1
where w_IV is the weight of IVSRI component in IVRVSRI (equal to 50%), and w_RV is the weight of RVSRI component in IVRVSRI (equal to 50%).
Alternatively, we can calculate IVRVSRI measure based on country specific IVRVSRI (i.e. IVRVSRI_i)
IVRVSRI = ∑_i=1^N w_i * IVRVSRI_i
The weights w_IV and w_RV are assumed to be equal; however, different weights may also be used.
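The combination step is a one-line weighted average; the sketch below assumes the IVSRI and RVSRI series (global or country-level) are already aligned on the same dates.

```python
def ivrvsri(ivsri_series, rvsri_series, w_iv: float = 0.5, w_rv: float = 0.5):
    """IVRVSRI = w_IV * IVSRI + w_RV * RVSRI with w_IV + w_RV = 1 (equal weights by default)."""
    if abs(w_iv + w_rv - 1.0) > 1e-12:
        raise ValueError("weights must sum to one")
    return w_iv * ivsri_series + w_rv * rvsri_series
```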
§.§ Dynamic quartile ranking based on IVRVSRI (DQR_IVRVSRI)
In the next step, we construct the dynamic quartile ranking (DQR_IVRVSRI) based on RVSRI, IVSRI, and IVRVSRI indications, both on the country and on the global level.
The DQR_IVRVSRI on the country level is constructed based on the following steps:
* We create quartile map chart based on IVRVSRI_i for each country under investigation,
* This map chart on the daily level shows colored systemic risk indicator,
* Colors indicate the following:
* RED, if IVRVSRI_i is in its 4th quartile based on historical indications → VERY HIGH country-systemic risk,
* ORANGE, if IVRVSRI_i is in its 3rd quartile based on historical indications → HIGH country-systemic risk,
* LIGHT GREEN, if IVRVSRI_i is in its 2nd quartile based on historical indications → LOW country-systemic risk,
* GREEN, if IVRVSRI_i is in its 1st quartile based on historical indications → VERY LOW country-systemic risk.
To construct the index at the global level, we follow the same approach as for the IVRVSRI_i for each country separately, however the map is constructed on the global level.
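The quartile map can be reproduced with a simple classification of each day's IVRVSRI value against the quartiles of its own history; the sketch below uses the full sample to define the quartiles, whereas an expanding window could be used instead to avoid look-ahead.

```python
import pandas as pd

def dqr(ivrvsri_series: pd.Series) -> pd.Series:
    """Map each observation of IVRVSRI (or IVRVSRI_i) to its quartile colour."""
    labels = ["GREEN", "LIGHT GREEN", "ORANGE", "RED"]   # quartiles 1 (very low) .. 4 (very high)
    return pd.qcut(ivrvsri_series, q=4, labels=labels)
```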
§.§ Benchmark systemic risk indicators
From an array of systemic risk measures described in Section <ref>, we select four indicators that will serve as benchmarks for the IVRVSRI. We choose the sRisk, the CATFIN, the Cleveland FED Systemic Risk Indicator, and the CISS. As this choice may seem arbitrary, it is partly based on the availability of the data. Moreover, as the selected measures use different methodologies, we want to check how their indications compare with each other and how accurate they are at predicting financial turmoils. Data for the sRisk is available for many countries, and at the global level, while the CATFIN and Cleveland FED's measures are available only for the US market, and the CISS indicator is only available for Europe.
§.§.§ SRISK
SRISK is defined as the expected capital shortfall of a financial entity conditional on a prolonged market decline. It is a function of the size of the firm, its degree of leverage, and its expected equity loss conditional on the market decline, which is called Long Run Marginal Expected Shortfall (LRMES). The SRISK calculation is analogous to the stress tests that are regularly applied to financial firms. It is done with only publicly available information, making the index widely applicable and relatively inexpensive to implement for a single entity. The measure can readily be computed using balance sheet information and an appropriate LRMES estimator. Firms with the highest SRISK are the largest contributors to the undercapitalization of the financial system in times of distress. The sum of SRISK across all firms is used as a measure of overall systemic risk in the entire financial system. It can be thought of as the total amount of capital that the government would have to provide to bail out the financial system in case of a crisis. SRISK combines market and balance sheet information in order to construct a market-based measure of financial distress, which is the expected capital shortfall of a financial firm conditional on a systemic event. SRISK depends not only on equity volatility and correlation (or other moments of the equity return distribution), but also explicitly on the size and the degree of leverage of a financial firm.
According to <cit.>, SRISK can be calculated based on the following formulas. They start from the definition of the capital shortfall of firm i on day t:
CS_it = kA_it-W_it = k(D_it+W_it)-W_it
where:
W_it - is the market value of equity,
D_it - is the book value of debt,
A_it - is the value of quasi asset,
k - is the prudential capital fraction (it is assumed on the level of 8%).
Then, they note that when the capital shortfall is negative, i.e., the firm has a capital surplus, the firm functions properly. However, when this quantity is positive, the firm experiences distress. They add that the main focus will be put on the prediction of the capital shortfall of a financial entity in case of a systemic event defined as a market decline below a threshold C over a time horizon h (<cit.>). Next, they define the multiperiod arithmetic market return between period t+1 and t+h as R_mt+1:t+h and the systemic event as R_mt+1:t+h<C. Additionally, they set the horizon h to 1 month (approx. 22 periods) and the threshold C to -10%. Then, the SRISK is defined as the expected capital shortfall conditional on a systemic event:
SRISK_it = E_t(CS_it+h|R_mt+1:t+h<C) = kE_t(D_it+h|R_mt+1:t+h<C)-(1-k)E_t(W_it+h|R_mt+1:t+h<C)
Additionally, they assume that in the case of a systemic event debt cannot be renegotiated, which implies that E_t(D_it+h|R_mt+1:t+h<C)=D_it, and finally they provide the SRISK definition using this assumption:
SRISK_it = kD_it-(1-k)W_it(1-LRMES_it) = W_it[kLVG_it+(1-k)LRMES_it-1]
where LVG_it is the quasi-leverage ratio (D_it+W_it)/W_it and LRMES_it is Long Run Marginal Expected Shortfall, i.e. Long Run MES, defined as:
LRMES_it = -E_t(R_it+1:t+h|R_mt+1:t+h<C)
where R_it+1:t+h is the multiperiod arithmetic firm equity return between period t+1 and t+h.
The authors claim that SRISK depends on the size of the firm, its degree of leverage, and its expected equity devaluation conditional on a market decline. SRISK increases when these variables increase. Finally, after pointing out that the SRISK measure of Equation <ref> provides a point prediction of the level of capital shortfall a financial entity would experience in case of a systemic event, they provide the formula for the SRISK_t measure across all firms to construct a system-wide measure of financial distress:
SRISK_t = ∑_i=1^N(SRISK_it)_+
where (x)_+ denotes max(x, 0).
At the end, they add that aggregate SRISK_t should be thought of as the total amount of capital that the government would have to provide to bail out the financial system conditional on the systemic event. At the same time, they admit that they ignore the contribution of negative capital shortfalls (that is capital surpluses) in the computation of aggregate SRISK.
The last, and probably the most time consuming step to calculate SRISK_t, requires specifying a model for the market and firm returns that can be used to obtain estimators of the LRMES. Nevertheless, a number of different specifications and estimation techniques can be used to obtain this prediction. To construct LRMES predictions, the GARCH-DCC model (<cit.>) is used.
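For comparison with the sketches above, the firm-level and aggregate SRISK formulas reduce to a few lines once an LRMES estimate (e.g. from a GARCH-DCC model) is available; the snippet below only reproduces the two equations above and is not the full SRISK estimation pipeline.

```python
import numpy as np

def srisk_firm(W, D, lrmes, k: float = 0.08):
    """SRISK_it = k*D_it - (1 - k)*W_it*(1 - LRMES_it).

    W : market value of equity, D : book value of debt,
    lrmes : estimated long-run marginal expected shortfall, k : prudential capital fraction.
    """
    return k * np.asarray(D) - (1.0 - k) * np.asarray(W) * (1.0 - np.asarray(lrmes))

def srisk_total(firm_srisk):
    """Aggregate SRISK_t = sum_i max(SRISK_it, 0)."""
    s = np.asarray(firm_srisk, dtype=float)
    return np.clip(s, 0.0, None).sum()
```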
§.§.§ Cleveland Fed's Systemic Risk Indicator
The Cleveland FED's indicator captures the risk of widespread stress in the US banking system (<cit.>). This method of computing the systemic risk indicator (cfSRI) is based on the difference, or spread, between two measures of insolvency risk. The first one is an average of default risk across individual banking institutions (average distance-to-default), while the other one is a measure of risk for a weighted portfolio of the same institutions (portfolio distance-to-default).
cfSRI_t = ADD_t-PDD_t
where ADD_t is the average distance-to-default, while PDD_t is the portfolio distance-to-default.
The narrowing of the spread, resulting from the rising insolvency risk of the banking system as a whole, reflects market perceptions of imminent systematic disruption of the banking system. Fragility in the banking system is indicated when falling PDD converges toward ADD (the narrowing of the spread), even when both PDD and ADD are well in positive territory. A spread that is lower than 0.1 for more than two days indicates major financial stress when the average insolvency risk is rising and major banks are stressed by a common factor. When it stays below 0.5 for an extended period of time, it indicates that the markets are signaling major stress about the banking system.
To gauge the level of systemic risk in the banking system, the component parts of cfSRI have to be calculated, i.e. the average distance-to-default, the portfolio distance-to-default, and then the spread between these two should be interpreted jointly. The average distance-to-default (ADD) reflects the market's perception of the average risk of insolvency among a sample of approximately 100 US banks[The constituents of an exchange-traded fund (ETF) that reflects the banking system in the aggregate: State Street Global Advisors’ SPDR S&P Bank ETF, commonly referred to as “KBE".]:
ADD_t = 1/N∑_i=1^NDD_i,t
where DD_i is a distance-to-default (DD) for an individual bank T periods ahead, which is calculated using the Merton model (<cit.>) for equity valuation as a European call option on the bank's assets A at maturity T.
The portfolio distance-to-default (PDD) is a similar measure that is based on options on a weighted portfolio of the same banks[For this case, it is calculated based on options on an exchange-traded fund (ETF): State Street Global Advisors’ SPDR S&P Bank ETF (KBE), instead its constituents like it was in the case of ADD.]. It is calculated using options on an exchange-traded fund that reflects the banking system in the aggregate. A decreasing ADD or decreasing PDD indicates the market's perception of rising average insolvency risk in the banking sector.
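A stylized sketch of the spread computation is shown below. The Merton-style distance-to-default is simplified here (asset values, drifts, and asset volatilities are assumed to be given), whereas in practice these quantities are backed out of equity and ETF option prices.

```python
import numpy as np

def distance_to_default(assets, debt, mu, sigma_a, T: float = 1.0):
    """Simplified Merton distance-to-default T periods ahead."""
    return (np.log(assets / debt) + (mu - 0.5 * sigma_a**2) * T) / (sigma_a * np.sqrt(T))

def cleveland_spread(dd_individual_banks, dd_bank_portfolio):
    """cfSRI_t = ADD_t - PDD_t: average bank DD minus the DD of the bank-ETF portfolio."""
    return np.mean(dd_individual_banks) - dd_bank_portfolio
```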
§.§.§ CATFIN
CATFIN measure (<cit.>) tries to capture the risk of catastrophic losses in the financial system and the statistical approaches to estimating VaR are used for modeling the losses. There are three methodologies used in the VaR estimation: a) direct estimate of the tail risk based on the extreme value distributions, specifically the generalized Pareto distribution (GPD), b) investigation of the shape of the entire return distribution, while providing flexibility of modeling tail thickness and skewness by applying the skewed generalized error distribution (SGED), and c) estimation of VaR based on the left tail of the actual empirical distribution without any assumptions about the underlying return distribution. The first two approaches are known as the parametric methods, whereas the latter one is considered a non-parametric one. In the CATFIN, VaR is estimated at the 99% confidence level using all three methodologies. Then the first principal component is extracted from the three measures. The CATFIN measure bases on the notion that banks are special for a country's economy and the excess monthly returns on all financial firms are used for the estimation. The extreme returns are defined as the 10% left tail of the cross-sectional distribution of excess returns on financial firms.
The final formula for CATFIN combines three VaR measures at a monthly level. Principal component analysis (PCA) is used to extract the common component of catastrophic risk embedded in the three proxies in a parsimonious manner, while suppressing potential measurement error associated with the individual VaR measures. This leads to the measure of catastrophic risk in the financial system as of month t, denoted CATFIN_t, as:
CATFIN_t = 0.570υ_GPD^STD + 0.5719υ_SGED^STD + 0.5889υ_NP^STD
where υ_GPD^STD, υ_SGED^STD, and υ_NP^STD correspond to the standardized VaR measures based on the GPD, the SGED, and the non-parametric methods, respectively.
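Two of the three VaR inputs can be sketched directly with standard Python libraries, as below; the SGED-based VaR is omitted because no off-the-shelf SciPy distribution implements it, and the principal-component weights would in practice be re-estimated rather than fixed at the values quoted above.

```python
import numpy as np
from scipy.stats import genpareto
from sklearn.decomposition import PCA

def var_nonparametric(excess_returns, q: float = 0.99) -> float:
    """99% VaR from the empirical left tail, reported as a positive loss."""
    return -np.quantile(excess_returns, 1.0 - q)

def var_gpd(excess_returns, q: float = 0.99, tail_frac: float = 0.10) -> float:
    """99% VaR from a generalized Pareto fit to the 10% left tail (peaks over threshold)."""
    losses = -np.asarray(excess_returns, dtype=float)
    u = np.quantile(losses, 1.0 - tail_frac)          # tail threshold
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0.0)     # shape and scale of the exceedances
    n, n_u = losses.size, exceed.size
    return u + beta / xi * (((1.0 - q) * n / n_u) ** (-xi) - 1.0)

def first_principal_component(var_matrix):
    """Common component of the standardized monthly VaR series (columns = methods).

    The sign of the first component is arbitrary and may need to be flipped so that
    higher values correspond to higher catastrophic risk.
    """
    X = (var_matrix - var_matrix.mean(axis=0)) / var_matrix.std(axis=0)
    return PCA(n_components=1).fit_transform(X).ravel()
```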
§.§.§ Composite indicator of systemic stress (CISS)
A Composite Indicator of Systemic Stress (CISS) is a broad measure of systemic risk in the financial system. The financial system can be divided into three main building blocks: markets, intermediaries, and infrastructures. Each of these building blocks can be split into specific segments. The financial markets segment can be separated into individual markets like money, equity, bond, currency, and derivatives. Within financial intermediaries, the most important are banks, and insurance companies. The market infrastructure is composed of payment settlement and clearing systems.
CISS aggregates financial stress at two levels: it first computes five segment-specific stress subindices, and then aggregates these five subindices into the final composite stress index. It mostly relies on realized asset return volatilities and on risk spreads to capture the main symptoms of financial stress in the various market segments. The subindices are aggregated analogously to the aggregation of individual asset risks into overall portfolio risk by taking into account the cross-correlations between all individual asset returns and not only their variances. It is essential for the purpose of constructing of the systemic risk indicator to allow for time-variation in the cross-correlation structure between subindices. In this case, the CISS puts more weight on situations in which high stress prevails in several market segments at the same time. The stronger financial stress is correlated across subindices, the more widespread is the state of financial instability according to the “horizontal view” of the definition of systemic stress. The second element of the aggregation scheme potentially featuring systemic risk is the fact that the subindex weights can be determined on the basis of their relative importance for real economic activity. This specific feature in the design of the CISS not only offers a way to capture the “vertical view” of systemic stress, but in doing so it also implicitly accounts for country differences in the structure of their financial systems as long as these actually matter for the transmission of financial stress to the real economy. For the Euro area the subindex weights are: money market 15%, bond market 15%, equity market 25%, financial intermediaries 30%, and foreign exchange market 15%.
§.§ Comparison and Forecasting ability of IVRVSRI
§.§.§ Correlation matrix and rolling correlation
In order to compare the existing SRIs with our IVRVSRI, we first visualize their levels, returns, and descriptive statistics of weekly returns, which are used in the regressions in the next step. We also calculate the correlation matrices for the four benchmark SRIs, the IVRVSRI, and the S&P 500: for weekly returns of all of them and between the returns of the S&P 500 index and the lagged weekly returns of the SRIs.
§.§.§ Forecasting ability
Simple Regression Model
The first model (Equation <ref>) is based on a simple regression of S&P 500 index weekly returns on the lagged weekly returns of the given SRI (either a benchmark one or the IVRVSRI):
r_t^w,SP500 = β_0 + ∑_k=1^pβ_k^ir_t-k^w,SRI_i+ε_t
where:
r_t^w,SP500 - weekly return of S&P 500 index on day t,
β_k^i - sensitivity of r_t^w, SP500 to lag k of the given SRI_i return,
r_t-k^w,SRI_i - lag k of weekly return of the given SRI_i on day t.
Additionally, forecasting abilities of each SRIs are investigated jointly:
r_t^w,SP500 = β_0 + ∑_k = 1^p∑_i = 1^Nβ_k^ir_t-k^w, SRI_i+ε_t
where:
β_k^i - sensitivity of r_t^w, SP500 to lag k of the given SRI_i return,
r_t-k^w,SRI_i - lag k of weekly return of the given SRI_i on day t.
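A sketch of the estimation for a single SRI is given below, using statsmodels OLS on weekly return series; the series names and the number of lags p are assumptions for illustration. The joint model is obtained by concatenating the lagged returns of all SRIs into the same design matrix.

```python
import pandas as pd
import statsmodels.api as sm

def lagged_ols(sp500_weekly: pd.Series, sri_weekly: pd.Series, p: int = 4):
    """Regress S&P 500 weekly returns on p lags of one SRI's weekly returns."""
    lags = pd.concat({f"lag_{k}": sri_weekly.shift(k) for k in range(1, p + 1)}, axis=1)
    data = pd.concat([sp500_weekly.rename("sp500"), lags], axis=1).dropna()
    X = sm.add_constant(data.drop(columns="sp500"))
    return sm.OLS(data["sp500"], X).fit()

# res = lagged_ols(ret_sp500, ret_ivrvsri); print(res.rsquared_adj)
```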
Quasi-quantile Regression Model
The quasi-quantile regression model is our innovative idea, in which we estimate the models described by Equation <ref> and Equation <ref> only for those values of r_t^w,SP500 which fulfill each of the following conditions in turn:
* r_t^w,SP500 ≤ r̅_SP500,t^w
* r_t^w,SP500 ≤ 1st quartile of r_t^w,SP500
* r_t^w,SP500 ≤ 1st decile of r_t^w,SP500
* r_t^w,SP500 ≤ 5th percentile of r_t^w,SP500
* r_t^w,SP500 ≤ 2.5th percentile of r_t^w,SP500
* r_t^w,SP500 ≤ 1st percentile of r_t^w,SP500
Hence, model specification for this approach is given by:
r_t^w,SP500(p) = β_0 + ∑_k = 1^pβ_k^ir_t-k^w, SRI_i+ε_t
where r_t^w,SP500(p) is weekly return of S&P 500 index on day t conditional on percentile p of its distribution.
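The quasi-quantile approach is thus an OLS fit on the sub-sample of weeks in which the S&P 500 return falls at or below a chosen percentile of its distribution; a sketch (with an assumed percentile and lag order) follows.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def quasi_quantile_ols(sp500_weekly: pd.Series, sri_weekly: pd.Series,
                       pct: float = 10.0, p: int = 4):
    """OLS restricted to weeks with S&P 500 returns at or below the pct-th percentile."""
    lags = pd.concat({f"lag_{k}": sri_weekly.shift(k) for k in range(1, p + 1)}, axis=1)
    data = pd.concat([sp500_weekly.rename("sp500"), lags], axis=1).dropna()
    cutoff = np.percentile(data["sp500"], pct)
    sub = data[data["sp500"] <= cutoff]
    return sm.OLS(sub["sp500"], sm.add_constant(sub.drop(columns="sp500"))).fit()
```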
Quantile Regression Model
Following <cit.>, we estimate the quantile regression model (Equation <ref>) in order to present a comprehensive picture of the forecasting ability of aggregated systemic risk indicators on the equity market conditioned on the location of the equity index return over its density.
Q_τ(r_t^w,SP500) = β_0(τ) + ∑_k = 1^pβ_k(τ)r_t-k^w, SRI_i+ε_t
where:
Q_τ(r_t^w,SP500) - τ quantile of S&P 500 returns on day t,
r_t-k^w,SRI_i - lag k of return of the given SRI_i,
β_k(τ) - percentage change in τ quantile of the S&P500 index returns produced by the change in the predictor k intervals earlier.
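The quantile regression can be estimated with the statsmodels formula interface; the sketch below fits a single τ for one SRI, with the lag order again an assumption. Estimating the model over a grid of τ values traces out the forecasting ability across the whole conditional return distribution.

```python
import pandas as pd
import statsmodels.formula.api as smf

def quantile_reg(sp500_weekly: pd.Series, sri_weekly: pd.Series,
                 tau: float = 0.05, p: int = 4):
    """Quantile regression of S&P 500 weekly returns on p lags of one SRI's weekly returns."""
    cols = {"sp500": sp500_weekly}
    cols.update({f"lag_{k}": sri_weekly.shift(k) for k in range(1, p + 1)})
    data = pd.concat(cols, axis=1).dropna()
    formula = "sp500 ~ " + " + ".join(f"lag_{k}" for k in range(1, p + 1))
    return smf.quantreg(formula, data).fit(q=tau)
```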
§ DATA
Our data set is based on daily data for volatility indices (VIX, VSTOXX, VNKY, and VXEWZ) and daily price and market cap data for equity indices (S&P500, EuroStoxx50, Nikkei 225, Bovespa) in the period between 2000 and 2023. Figure <ref> presents the fluctuations of the analyzed time series, while Figure <ref> presents the fluctuations of returns. Figure <ref> informs us about the different magnitudes of upward and downward movements in the analyzed markets, while Figure <ref> additionally visualizes volatility clustering, with high and low volatility periods indicating calm and more stressful periods of time.
Drawdowns of the analyzed equity indices, depicted in Figure <ref>, show the length of the most important turmoils and additionally visualize their speed and magnitude.
Descriptive statistics of returns, presented in Table <ref>, confirm the well-known facts about equity returns, i.e. high kurtosis, negative skewness, and the associated non-normality of returns.
In order to calculate the proper weights in IVSRI, RVSRI and IVRVSRI indicators, we decided to use market capitalization data for each of the equity indices used (Table <ref>).
§ RESULTS
Based on the logic “keep it simple”, we want to check if it is possible to create Systemic Risk Indicator based on widely available (most often publicly available and free of charge) volatility risk measures which can have similar properties as systemic risk indicators introduced in highly cited papers (<cit.>, <cit.>, <cit.> or <cit.>) or in the most recent study of <cit.>. In the Results section, we present Figures and map charts visualizing systemic risk indicators and theirs components.
§.§ IVRVSRIs on the country level
Figure <ref> shows the fluctuations of the IV indices for each country separately and highlights the most significant turmoils affecting the equity market in each country under investigation, i.e. the GFC (2007-2009), the COVID pandemic (March 2020), and a few of lower magnitude like the Eurozone debt crisis (2009-2014), and the turmoils in August 2015, February 2018 and November-December 2018.
On the other hand, Figure <ref> presents the RV indices for each country separately. Comparing Figure <ref> with Figure <ref> shows that the anticipated reaction (IV indices in Figure <ref>) to the current market stress is not always the same as the current reaction revealed in the realized volatility of returns (IV versus RV for Japan during the Covid pandemic in March 2020).
Overall, our results show that the magnitude of reactions to the risk events varies across countries. Analyzing the IVRVSRI indications on the country level presented in Figure <ref>, we observe a very weak reaction of the Japanese market to the COVID-19 pandemic in March 2020 in comparison to the USA and the Eurozone, and literally no reaction of the Japanese and Brazilian markets to the European sovereign debt crisis in 2009-2014. Only in the case of the GFC 2007-2009 did all analyzed markets react strongly, but the persistence of the crisis was not the same (Figure <ref>). Brazil and Japan recovered quickly with regard to the speed of the decrease of the IVRVSRI indications, while the USA and Europe struggled much longer.
Next, Figure <ref> shows the map chart with colored quartile levels of IVRVSRI indications on the country level. It shows that in the case of Eurozone, the GFC extended into the debt crisis and lasted with a small break in 2014 until 2016. In general, before the GFC the Eurozone, Japanese and Brazilian markets were more resilient than the American one to worldwide turmoils while the situation reversed after the Eurozone sovereign debt crisis, with Brazil and Japan being the least resilient in that period among all analyzed countries.
§.§ IVRVSRIs on the global level
Figure <ref> shows the aggregated results for IVRVSRI and its components (IVSRI and RVSRI) on the global level. We can see that after aggregation of the country-specific indices all the major financial crises are indicated and, additionally, we can observe their severity. The GFC and Covid were the most severe turmoils, but other ones, like the end of the downward trend after the Dotcom bubble (2002-2003) and the Eurozone debt crisis (2009-2014), are revealed as well. What is more, the reaction of the IVSRI and RVSRI components on the global level to the above-mentioned turmoils differs with regard to the magnitude of their reaction. Most often, the fear revealed in IVSRI (Panel (1) of Figure <ref>), especially in case of less severe turmoils (the Eurozone debt crisis or the bottom of the Dotcom bubble), was not realized in the same magnitude in RVSRI indications (Panel (2) of Figure <ref>).
Figure <ref> presents a colored map chart indicating quartiles of IVSRI, RVSRI, and IVRVSRI on the global level stressing the major turmoils on the aggregated level.
The IVSRI and RVSRI show slightly different risk levels in the “transition” periods when systemic risk changes. In general, we can state that the reaction of the implied-volatility-based metrics is faster than the realized volatility one, which is something we have expected. Moreover, the correlation between the IV-based indicator and the general systemic risk indicator (IVRVSRI) is higher than that of the RV-based ones. At the same time, the general systemic risk indicator (IVRVSRI) is a better indicator of systemic risk than any individual indicator based on only one measure of volatility (RVSRI or IVSRI), and this result is robust even after the change of the weights of the RVSRI and IVSRI in the general systemic risk measure.
Figure <ref> depicts the comparison of fluctuations of the S&P500 index and IVRVSRI on the global level. It clearly shows that each major financial turmoil was reflected in our IVRVSRI almost immediately, informing market participants about the increased level of stress.
§.§ Comparison between SRIs and S&P500 index
In this section, in order to see a broader picture for comparison purposes, we present fluctuations of S&P 500 and analyzed SRIs (Figure <ref>), their weekly returns (Figure <ref>) with descriptive statistics (Table <ref>) and correlations (Table <ref>), and finally correlation between weekly returns of S&P 500 index and lagged SRIs (Table <ref>).
§.§ Forecasting ability of SRIs
In this section we refer to the forecasting ability of the IVRVSRI and benchmark SRIs with regard to the weekly returns of the S&P 500 index. Therefore, we present two sets of six models for overlapping and non-overlapping weekly returns. In the tables for simple and quasi-quantile regression we report the adjusted R^2, while in the tables for quantile regression the pseudo R^2.
Adjusted R^2 was calculated according to the formula:
R^2_𝖺𝖽𝗃 = 1 - (1 - R^2)(n - 1)/(n - p - 1)
where n is the number of observations and p is the number of parameters to estimate.
To calculate pseudo R^2 for the quantile regressions, we follow the <cit.> approach, who propose R_𝗉𝗌𝖾𝗎𝖽𝗈^2 as a local measure of goodness of fit at the particular τ quantile. We assume that:
V(τ) = min_b ∑ρ_τ(y_i - x_i^'b)
Let β̂(τ) and β̃(τ) be the coefficient estimates for the full model and a restricted model, respectively. Similarly, V̂ and Ṽ are the corresponding V terms. The goodness of fit criterion is then defined as R_𝗉𝗌𝖾𝗎𝖽𝗈^2 = 1 - V̂ / Ṽ.
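Both goodness-of-fit measures are easy to compute once the residuals are available; the sketch below assumes that the residuals of the full and of the intercept-only quantile regressions at the same τ are given.

```python
import numpy as np

def adj_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

def check_loss(residuals, tau: float) -> float:
    """V = sum of rho_tau(u), with rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(residuals, dtype=float)
    return float(np.sum(u * (tau - (u < 0))))

def pseudo_r2(resid_full, resid_intercept_only, tau: float) -> float:
    """R^2_pseudo = 1 - V_hat / V_tilde at the chosen quantile tau."""
    return 1.0 - check_loss(resid_full, tau) / check_loss(resid_intercept_only, tau)
```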
The list of tested models is presented in the form of two sets of models for overlapping and non-overlapping weekly returns: Simple Regression Models (1-lag and p-lag), Quasi-Quantile Regression Models (1-lag and p-lag), and Quantile Regression Models (1-lag and p-lag).
§.§.§ Regression models based on the overlapping data
The results (Tables <ref>, <ref>, and <ref>) of the regressions based on the overlapping data (the simple linear regression, the quasi-quantile regression, and the quantile regression) indicate that the explanatory power of the SRIs increases when a larger number of lags is used in the model, and when the regression is performed in a deeper part of the tail of the distribution (quasi-quantile regressions for S&P 500 index returns in the lowest decile and quantile regressions for τ<0.1).
However, the most important conclusion from the presented regressions is that the lagged weekly returns of the IVRVSRI have the largest explanatory power of the weekly returns of the S&P 500 index for all presented regression models.
§.§.§ Regression models based on the non-overlapping data
The results of the regressions based on the non-overlapping data (Tables <ref>, <ref>, and <ref>) yield similar conclusions as for the overlapping data. The explanatory power of the lagged weekly returns of the IVRVSRI is still the highest among all analyzed models, as measured by the adjusted R^2 and pseudo-R^2 statistics. Moreover, in the case of the IVRVSRI, the predictive power increases when analyzing deeper and deeper parts of the tail distributions, which is crucial from the point of view of indicators that measure systemic risk.
§ CONCLUSIONS
In this study, we propose a robust Systemic Risk Indicator based on the well-known concepts of realized and implied volatility measures. The main contribution of this paper to the broad bulk of studies of systemic risk indicators is the simplicity of the metrics that we propose, which at the same time yield similar results as more complex tools, thus significantly reducing the model risk. At the same time, the proposed methodology enables calculation of IVRVSRI on high-frequency data (even on tick level) which significantly decreases the time of response of our indicator to the starting point of each major financial turmoil. Moreover, in the case of many metrics, it is also much less computationally demanding and does not rely on paid data sets or data that is available only for market regulators. The indication of this measure depends on the geographical location of a given equity market. As expected, the robustness of the proposed Systemic Risk Indicator depends on various parameters selected: the memory parameter for RV, time to expiration for IV, the percentile selected for the risk map, and the length of the history selected for the calculation of percentile in case of the risk map.
Referring to the first hypothesis (RH1), we were able to draw the following conclusions. We cannot reject RH1, as we show that it is possible to construct a robust Systemic Risk Indicator (IVRVSRI) based on the well-known concepts of realized and implied volatility measures. Moreover, we cannot reject RH2, as the indication of the proposed Systemic Risk Indicator (IVRVSRI_i) depends on the geographical location of a given equity market. As expected, the robustness of the proposed Systemic Risk Indicator depends on the various parameters selected in the process of its calculation: the memory parameter for RV, the time to expiration for IV, the percentile selected for the risk map, and the length of the history selected for the calculation of the percentile in the case of the risk map, which supports RH3. Finally, the forecasting ability of the IVRVSRI is undoubtedly the highest of all SRIs used in this study (the SRISK, the CATFIN, the Cleveland FED Systemic Risk Indicator, and the CISS). This conclusion is backed by three versions of our regression models (simple, quasi-quantile, quantile) for two types of weekly returns (overlapping and non-overlapping data).
This study can be extended by adding more countries to the analysis or other asset classes like currencies, commodities, real estate, cryptocurrencies, and hedge funds. Moreover, using high-frequency data would allow the construction of a real-time early implied volatility realized volatility systemic risk indicator (rteIVRVSRI) that would serve as an early warning indicator of systemic risk. We plan to design a website within the resources of QFRG in order to publish real-time data on the IVRVSRI and its components for theoretical and practical purposes. Finally, a detailed sensitivity analysis of the IVRVSRI with respect to all crucial parameters assumed in the process of its calculation still needs to be prepared.
|
http://arxiv.org/abs/2307.09307v1 | 20230714161633 | Ab initio derivation of generalised hydrodynamics from a gas of interacting wave packets | [
"Benjamin Doyon",
"Friedrich Hübner"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"cond-mat.quant-gas",
"math-ph",
"math.MP"
] |
|
http://arxiv.org/abs/2307.06184v1 | 20230712141728 | Integrated supervisory control and fixed path speed trajectory generation for hybrid electric ships via convex optimization | [
"Antti Ritari",
"Niklas Katzenburg",
"Fabricio Oliveira",
"Kari Tammi"
] | math.OC | [
"math.OC"
] |
Integrated supervisory control and fixed path speed trajectory generation for hybrid electric ships via convex optimization

Antti Ritari (ORCID 0000-0002-6883-1447), [email protected] (corresponding author, equal contribution)
Niklas Katzenburg (ORCID 0009-0001-7878-0866), [email protected] (equal contribution)
Fabricio Oliveira (ORCID 0000-0003-0300-9337), [email protected]
Kari Tammi (ORCID 0000-0001-9376-2386), [email protected]

Affiliation 1 (Ritari, Katzenburg, Tammi): School of Engineering, Department of Mechanical Engineering, Aalto University, Otakaari 4, 02150 Espoo, Finland
Affiliation 2 (Oliveira): School of Science, Department of Mathematics and Systems Analysis, Aalto University, Otakaari 1, 02150 Espoo, Finland
Battery-hybrid power source architectures can reduce fuel consumption and emissions for ships with diverse operation profiles.
However, conventional control strategies may fail to improve performance if the future operation profile is unknown to the controller.
This paper proposes a guidance, navigation, and control (GNC) function that integrates trajectory generation and hybrid power source supervisory control.
We focus on time and fuel optimal path-constrained trajectory planning. This problem is a nonlinear and nonconvex optimal control problem, which means that it is not readily amenable to efficient and reliable solution onboard.
We propose a nonlinear change of variables and constraint relaxations that transform the nonconvex planning problem into a convex optimal control problem.
The nonconvex three-degree-of-freedom dynamics, hydrodynamic forces, fixed pitch propeller, battery, and general energy converter (e.g., fuel cell or generating set) dissipation constraints are expressed in convex functional form.
A condition derived from Pontryagin's Minimum Principle guarantees that, when satisfied, the solution of the relaxed problem provides the solution to the original problem.
The validity and effectiveness of this approach are numerically illustrated for a battery-hybrid ship in model scale.
First, the convex hydrodynamic hull and rudder force models are validated with towing tank test data.
Second, optimal trajectories and supervisory control schemes are evaluated under varying mission requirements.
The convexification scheme in this work lays the path for the employment of mature, computationally robust convex optimization methods and creates a novel possibility for real-time optimization onboard future smart and unmanned surface vehicles.
Keywords: convex optimization, fixed pitch propeller, optimal control, energy management strategy, fixed path trajectory generation, fuel cell hybrid
§ INTRODUCTION
Emission abatement drivers. Rising environmental awareness and international regulations such as the Paris Agreement, which aims to limit “global average temperature to well below 2 °C above pre-industrial levels” <cit.>, require all industries to take serious efforts to decrease their ghg emissions.
Thus, the imo adopted the Initial imo Strategy on reduction of ghg emissions from ships in April 2018.
Within this resolution, the imo confirms the contribution of international shipping to the goals set in the Paris Agreement.
They strive for a ghg emission reduction between 50% and 70% by 2050 compared to 2008 <cit.>.
However, vessels emit not only ghg but also other pollutants, such as pm2.5, sox and nox, which impact climate and human health <cit.>.
According to <cit.>, international shipping causes approximately 12% of annual, global sox emissions based on the average values between 2007 and 2012.
Without taking any actions, the authors in <cit.> suggest that shipping emissions will lead to approximately 14 million childhood asthma cases – about 16% of the total estimated 86 million childhood asthma cases – and a total of 403,300 annual premature adult deaths.
Densely populated coastal regions – especially in developing and least developed countries – are the most affected <cit.>.
Hybrid architectures. Conventional direct-driven mechanical propulsion exhibits poor efficiency when the vessel operating point lies outside the optimized design point, which increases fuel consumption and emissions. Advanced powertrain configurations based on electrical propulsion, hybrid combination of mechanical and electrical propulsion, and power generation combined with electrical energy storage can deliver the needed adaptability and efficiency.
Electrical propulsion and azimuth thrusters are now the standard architecture in cruise ships that frequently maneuver in ports; a hybrid power source with li-ion batteries as an energy buffer can provide spinning reserve and zero-emission sailing for coastal ferries under strict safety requirements and sensitivity to emissions due to the route's proximity to habitation <cit.>. Similarly, offshore vessels alternating between transit and dynamic positioning operations benefit from battery spinning reserve; mechanical propulsion with power take-in and take-out function from the parallel electric machine has been shown to reduce emissions and increase the performance of naval vessels performing a combination of patrol and littoral operations <cit.>.
Fuel cell hybrid power source is a promising solution for countering vessel ghg emissions and pollutants, at least locally. Fuel cells convert the chemical energy of fuels to electricity by oxidation and reduction reactions. As fuel cells are not heat engines, they are not bounded by the thermodynamic constraints expressed by Carnot’s law <cit.>. The theoretical maximum conversion efficiency is higher compared to internal combustion engines, which lowers fuel consumption and emissions. Proton exchange membrane and solid oxide fuel cells have been identified as the most promising for shipping applications <cit.>. Load leveling and peak shaving functions provided by an energy buffer are essential in fuel cell power generation configuration for extending the useful lifetime and mitigating the limited power output ramp rate of the stacks.
Traditional energy management strategies. The hybrid configurations introduce a control task that consists of determining the setpoints of the various power converters that constituting the powertrain. The control task is called energy management, supervisory control, or tertiary control in relation to the hierarchical control stack, which also includes primary and secondary control <cit.>. The notion of optimal energy management strategy refers to the exploitation of the degrees of freedom in control to minimize a set of criteria that typically represents fuel consumption or pollutant emissions. An important and challenging characteristic in the charge-sustaining battery-hybrid energy management problem is the constraint that requires the State of Charge (SoC) at the end of the voyage to take a value close to its initial value.
All high-performing energy management strategies follow the principle that the internal combustion engine should be operated at favourable conditions at relatively high loads. The traditional strategies aim to achieve this goal by calculating the power converter set points from rules as a function of various measured vessel quantities. Recently, following the success in the automotive sector, optimal control theory and, in particular, Pontryagin's Minimum Principle (PMP) has been investigated in hybrid vessel control as well <cit.>.
Motion planning. Both rule-based and PMP energy management strategies aim to fulfill the operator's request that is unknown in advance to the controller. In unmanned surface vehicles the propulsion and power generation are considered distinct subsystems that receive actuator commands from the guidance, navigation, and control (GNC) system <cit.>. Regardless of manned or unmanned guidance, this function must continuously generate smooth and feasible trajectories for references to the power and propulsion controllers. Information provided by the navigation system, voyage plan, vessel capability, and environmental conditions defines the set of feasible trajectories. The role of path planning and trajectory generation in energy-efficient vessel operation is highlighted by imo <cit.>. The vessel dynamics are emphasized for short sea vessels due to the short overall voyage length and the acceleration and deceleration phases. These phases are especially important in areas with speed limits like archipelagos and close proximity to harbors.
Engine efficiency curve, propeller characteristics, and other power generation-related factors have been recognized to influence the fuel and voyage time optimal trajectory <cit.>. Nevertheless, the synthesis of GNC with hybrid power and propulsion architecture energy management has hardly been investigated yet despite the promise of improved performance. The reason behind this fact is associated with the challenging scale and computational complexity of the nonlinear optimal control problem that arises from the vessel dynamics and propulsor characteristics. Although the optimal solution can be approximated by the dynamic programming method <cit.>, it is unsuitable for real-time applications due to the “curse of dimensionality” – i.e., the computational effort rises exponentially with the number of states <cit.>. Thus, the requirement for real-time control and autonomous decision-making rule out dynamic programming from consideration.
The objective of this study is to develop a framework for synthesizing the trajectory simultaneously with the energy management strategy. The performance of this framework concerns not only optimality in terms of voyage time, fuel consumption, or pollutant emission, but also implementability. The latter criterion refers to the reliability, the memory footprint of the computational implementation, and the processor runtime.
We focus on a trajectory generation problem that takes a previously computed path as input, including spatial constraints (e.g., speed limits).
A higher-level guidance path planning algorithm may have generated a set of feasible alternative paths between waypoints. These alternatives must be evaluated by generating an optimal trajectory and energy management strategy for each alternative.
Convex optimization. To achieve the required efficiency and reliability for real-time application, we propose formulating the integrated control problem as a convex optimization problem. Convex optimization has become a mature technology, being applied in several fields of engineering and science during the last two decades <cit.>.
Examples include – but are not limited to - aircraft design <cit.>, hybrid electric vehicles <cit.>, Formula 1 cars <cit.> and planetary soft landing <cit.>.
Convex optimization problems exhibit a number of favourable properties. Any locally optimal solution is also globally optimal. Infeasibility can be detected unambiguously, i.e., the problem cannot be solved with the given constraints.
Iterative algorithms for solving convex optimization problems self-initialize, which means that they require neither initial values nor parameter tuning. There also exists a deterministic bound on the number of arithmetic operations needed to solve a convex optimization problem to within any desired accuracy <cit.>. These special properties motivate the use of convex optimization in applications that require fast and reliable autonomous decision making capability.
However, the benefits of convex optimization come at a price: the mathematical model that describes the physical relations must be expressed within the restricted functional forms that yield a feasible convex set.
The challenge arises from formulating the prevailing physics-based models of hull forces, propulsor thrust and torque, power converters and battery energy storage in this restricted functional form without sacrificing consistency with the first-principles high-fidelity nonconvex models.
In this work, the challenge of “convexifying” the original nonconvex problem is met and overcome by formulating a convex three-degree-of-freedom vessel dynamics model. The dynamics are convexified exactly, meaning that we provide a novel equivalent convex formulation. Convex expressions are introduced for hull forces, the fixed pitch propeller, the battery system and converters (internal combustion engine or fuel cell modules).
Contribution. To the authors' knowledge, this model formulation includes three major novelties for the maritime sector.
First, the model is formulated in the spatial domain to convexify the vessel dynamics and take the distance-dependent speed limits and zero-emission legs into account.
Second, a convex fixed-pitch propeller model is introduced, improving the drivetrain modeling.
Third, hydrodynamic forces, acting on the hull and rudder, are modeled in the convex optimization framework and compared against captive test measurements.
Therefore, the main contribution of this work is the convexification of a model for the simultaneous optimization of the trajectory and the energy management strategy for vessels equipped with a hybrid power source and electrical propulsion. This convexification lays the path for employing mature, computationally robust convex optimization methods and creates a novel possibility for real-time optimization of these systems. In particular, for the application of autonomous vessels, the onboard computers need to make the path-planning decisions rapidly and reliably without a human being in the loop. In this context, convex optimization is the ideal approach.
§ RELATED WORK
§.§ Advanced control strategies in the maritime sector
Due to increasingly demanding reduction goals of ghg emissions, the interest of the maritime industry in battery systems combined with a diesel engine as a hybrid energy supply system has increased over the past years.
This technique promises high fuel savings and ghg reductions on the one hand.
On the other hand, the costs for propulsion systems increase as the complexity rises. <cit.>
Additionally, these hybrid systems are commonly run with conventional heuristic rule-based control algorithms leading to marginal fuel savings only.
Higher savings could be achieved with more advanced controls such as ecms or mpc. <cit.>
mpc in the maritime context is for example studied in <cit.>.
The studies in <cit.> are especially interesting in the context of this work since they include the longitudinal vessel dynamics as well as propeller models for electric propulsion.
However, they do not focus on trajectory planning but on the fluctuations due to waves on the propeller and the load sharing, respectively.
The model in <cit.> targets the control of the propulsion system of a vessel with a controllable pitch propeller.
It takes the vessel design speed as an input and the proposed non-linear mpc is designed to follow that trajectory by adjusting the control inputs of the propeller - drive frequency, propeller pitch ratio and rate of change in propeller pitch ratio - to minimize the reference speed tracking error and the energy consumed by the electric motor.
Additionally, the control inputs and the tracking errors for reference drive frequency and reference propeller pitch ratio are minimized.
Each term in the cost function includes a weighting factor.
The prediction horizon of the mpc is limited to 90 s resolved in 0.1 s steps resulting in only 900 discretization steps in total.
The main goals of the authors in <cit.> are "to minimize the power tracking error [...] and reduce [hybrid energy storage system] losses to improve energy efficiency".
Their hybrid system consists of a battery system and an ultracapacitor.
Two different mpc variants with the vessel speed and motor shaft speed as inputs are developed and compared to achieve optimal load sharing for the aforementioned goals.
The prediction horizon for the receding mpc is limited between 10 and 20 steps.
However, the authors of <cit.> argue that the results are comparable to an offline mpc with a prediction horizon of 100 steps and show exemplary results from a real-time platform.
None of the mentioned mpc studies focuses on the simultaneous, fuel-minimal optimization of propeller shaft speed, load sharing and speed trajectory.
This simultaneous approach combined with a long prediction horizon could achieve even higher savings.
Additionally, the generation of a feasible speed trajectory considering vessel dynamics as well as propeller and energy supply system behavior is required for autonomous shipping.
A similar conclusion regarding the simultaneous approach is drawn in <cit.>, where an extensive literature review covering 57 journal articles published between 2008 and 2020 dealing with the synthesis, design and operational optimization of maritime energy systems is conducted.
As this work focuses on operational optimization, the findings in <cit.> regarding this field are discussed further.
Out of the 57 articles, eleven deal solely with the operational optimization of maritime energy systems.
Nine other articles consider the design as well, and another ten articles also take the synthesis into account.
However, only six of these 30 articles cover the dynamic operation of the vessel.
Two of the six, <cit.> and <cit.>, are of particular interest since they optimize the speed as a control input variable.
They both neglect vessel dynamics such as acceleration and deceleration though and consider the speed to be constant during each leg of the journey.
This is also common practice in maritime logistic problems as seen in <cit.>.
The assumption of constant speed per journey leg might be acceptable for ocean-going vessels because they travel at a constant speed – preferably the respective design speed – for most of the voyage.
Smaller vessels like ro-pax ferries traverse in waters close to coastal areas where speed limits might apply.
Thus, they cannot travel at a single constant speed for most of the journey.
As an additional aspect, the duration of respective acceleration and deceleration phases becomes more significant compared to the overall shorter voyage duration.
Therefore, vessel dynamics must be considered to achieve minimum energy consumption.
To the authors' knowledge, no study exists dealing with the combined approach of optimizing speed profile and energy management for a short sea vessel when taking the longitudinal dynamics into account.
§.§ Convex optimization in optimal control
Starting about a decade ago, convex optimization has become more popular in engineering applications dealing with optimal control problems due to the development of computationally efficient and reliable implementations of interior point methods for cone programming <cit.>.
Especially for integrated design and control of hybrid electric vehicles, the number of publications is rising <cit.>.
Other applications include the operation of battery systems for the frequency regulation market in power grids <cit.>, the energy management in a microgrid for a sustainable community <cit.> and the trajectory planning for robots <cit.>.
The latter, together with <cit.>, which deals with the speed optimization for a hybrid electric race car, are of particular interest because their problems are formulated in the spatial domain.
Modeling in the spatial domain is also applied in this work due to the speed limits.
In maritime applications, convex optimization is scarcely employed.
Some studies employ its subclass of linear programs <cit.> but, rather, more often milp or nlp are used <cit.>.
One example of the latter is the fuel cell hybrid electric vessel in <cit.>.
There, the energy management optimization problem is formulated as a convex minlp to include some piecewise functions.
It is optimized in the time domain and the total voyage duration of ten hours is discretized in steps of 1 h each.
In <cit.>, the minimization of the energy consumption of a tugboat with a hybrid system consisting of a diesel engine and a battery is presented.
The authors formulate a minlp to optimize the load sharing between the different power sources.
According to them, the optimization problem is still efficiently solvable because only three integer variables – limiting the problem size – exist and the sub-routines solve convex problems.
Seven different profiles for vessel speed and bollard pull are evaluated through simulation runs in the time domain.
Again, no holistic approach is implemented.
Instead, only the load sharing is optimized.
In <cit.> a linearized model for mpc of an autonomous ship is developed.
As before, the vessel studied is a tugboat with a hybrid system consisting of a diesel engine and a battery, and several operational profiles are simulated.
It is considered relevant for this literature review because the predicted power demand in the mpc is based on a propeller model and the dynamic vessel speed.
The vessel speed is determined by following a reference trajectory while respecting the vessel dynamics.
However, it is not precisely described how the hydrodynamic resistance is obtained.
§.§ Maneuvering models
The predominant mathematical models of conventional surface vessel maneuvering consider the motion in a horizontal plane in the surge (forward), sway (crossbody) and yaw (heading) directions.
The equations of motion are expressed by applying Newton's second law in a vessel fixed coordinate system.
The hydrodynamic forces and moments are assumed to be functions of the velocities and accelerations of all three degrees.
The unknown force and moment functions are approximated by terms in a Taylor series expansion, which is a concept originally introduced by Abkowitz <cit.>.
The coefficients in the expansion are called hydrodynamic derivatives, which are determined from captive tests for a given hull form.
Numerical integration of the nonlinear differential equations describing vessel motions is used in the design phase to predict trajectories and assess the maneuvering performance of a vessel with given hull form.
Maneuverability is described by a number of parameters, such as overshoot angle in zig-zag maneuver, which are required to receive certain numerical values as imposed by classification societies.
The predictive accuracy of Abkowitz-type models is considered sufficient for certification <cit.>. However, the terms with second and higher order render the differential equations nonlinear and non-convex. Thus, the Abkowitz-type models are incompatible with the convex optimization framework.
The surge force is controlled via the propeller, sway force via bow thrusters and yaw moment via the rudder.
The system is considered underactuated if bow thrusters are unavailable or not used, because only two inputs are available for controlling three degrees of freedom <cit.>.
In steering and tracking control, the equations of motions are typically linearized by dropping all nonlinear terms and assuming that the surge velocity deviates only slightly from the initial velocity.
The linearized equations for steering were introduced by Nomoto in 1957 <cit.>, and are still commonly used.
§.§ Propeller models
Independent from the specific design of a propeller, its performance can be described using its general open water characteristics.
These are based on the forces and moments of the propeller when operating in open water without any disturbances and are usually expressed using the non-dimensional thrust coefficient
kt = T_p / ( rho_sw D_p^4 n_p^2 ),
torque coefficient
kq = Q_p / ( rho_sw D_p^5 n_p^2 ),
open water efficiency
η_o = P_p,out/P_p,in = T_p v_a / ( 2π Q_p n_p ) = ( kt/kq ) ( J/2π ),
and advance coefficient
J = v_a / ( n_p D_p ) = (1 - f_w) v_s / ( n_p D_p ),
where rho_sw is the density of seawater, D_p the diameter of the propeller, n_p the propeller shaft speed, Q_p the propeller torque, P_p,out thrust power and P_p,in the shaft input power <cit.>.
The advance speed v_a describes the average speed of the water across the surface of the propeller.
Due to the interaction of the hull with the surrounding water, the advance speed is lower than the vessel speed.
This is mathematically expressed through the wake fraction coefficient f_w and the relation between advance and vessel speed is given in equation (<ref>).
The open water efficiency as well as kt and kq are often plotted over the advance coefficient in a so-called open water diagram.
An exemplary diagram is shown in Figure <ref>.
Wageningen B-Series.
One of the most fundamental and commonly cited propeller models is developed and published in <cit.>.
The model is based on a regression analysis of the Wageningen B-Series.
kt and kq are represented as polynomials of the advance coefficient,
K̂_T = ∑_n=1^N_T C_T,n J^S_T,n( P_p/D_p)^t_T,n( A_E/A_0)^u_T,n Z^v_T,n
and
K̂_Q = ∑_n=1^N_Q C_Q,nJ^S_Q,n( P_p/D_p)^t_Q,n( A_E/A_0)^u_Q,n Z^v_Q,n
depending on several propeller parameters.
These are the number of blades Z, the extended blade area ratio A_E/A_0 - the ratio of the expanded blade area to the disc area - and the pitch diameter ratio P_p/D_p.
They describe the specific design of the propeller and are explained in the related literature <cit.>.
The analysis in <cit.> is not only limited to the 1q open water diagram, but 4q data is also included, and the respective coefficients for thrust and torque are each represented as a Fourier series with 21 coefficients.
Models based on lift and drag.
Since the model in <cit.> is limited to the Wageningen B-Series, the authors in <cit.> introduce another 4q model based on the lift and drag forces of the propeller to achieve a more versatile model, which was used for an underwater vehicle.
They calculate lift and drag based on sinusoidal functions in order to derive thrust and torque with a rotational transformation.
The required inputs are the effective angle of attack and the total relative velocity.
These are calculated based on an additional fluid model.
The propeller model in <cit.> requires the axial flow velocity to be known and is estimated with a fluid model.
The authors in <cit.> simplify this estimation by calculating the axial flow velocity as the linear combination of the vehicle velocity and the propeller shaft speed for their steady-state model of an underwater vehicle.
In <cit.>, where a low-level thruster controller for a fixed pitch propeller is developed, the equations for thrust and torque are modified to include losses concerning mainly the propeller loading.
These losses are summarized in two different loss factors for thrust and torque, which is a function of thrust.
The thrust and torque equations are simply multiplied by the respective coefficient.
The same model is further investigated in the context of thruster control in <cit.>.
It is also employed in <cit.> in the context of motion prediction with machine learning for ship docking.
In <cit.>, the linear approximation of thrust and torque coefficients and the resulting quadratic models for thrust and torque are modified to account for differences in the axial water flow into the propeller.
The authors neglect the drag in the thrust equation, but they note it could be easily included in case the linear approximation does not yield sufficient accuracy.
They add that the thrust coefficient as a function of the advance coefficient would become a second-order polynomial when including the drag.
4q model for underwater vehicles.
The authors in <cit.> derive a 4q propeller model for the use of underwater vehicles.
They are interested in “developing algorithms for the computation of energy-optimal trajectories for multiple vehicles acting in cooperation” and have found the Wageningen B-Series 4q model to be insufficient due to numerical reasons.
They consider using the model from <cit.>, but discard it due to physical inconsistencies regarding the behaviour of thrust, torque and open water efficiency.
Therefore, they introduce another sinusoidal model based on lift and drag to receive a low-order approximation of the model in <cit.>.
Propeller models in optimization.
Most of the previously mentioned articles either deal with low-level thruster control or underwater vehicles and are included to give an overview over different propeller models.
Articles dealing with problems more closely related to this work include <cit.>.
4q models for controllable and fixed pitch propellers.
In <cit.> and <cit.>, models based on the 4q open water diagram are employed.
The first article focuses on fuel savings for diesel-mechanical propulsion and considers controllable pitch propellers of the Wageningen C- and D-series.
The influence of waves on the propeller performance and the engine loading is considered by including the wave speed in the calculation of the advance speed.
In <cit.>, the energy management for hybrid propulsion with a fixed-pitch propeller of the Wageningen B-series is examined.
1q models for fixed pitch propellers.
The authors in <cit.> use a 1q model based on the open water diagram of the Wageningen B-Series for maneuvering control and energy management of a hybrid vessel with electric propulsion.
Thrust and torque are expressed as quadratic functions of the propeller shaft speed and utilize kt and kq.
The coefficients are given as functions of propeller geometry and advance coefficient.
A similar approach is used in <cit.> for a mpc to optimize the energy efficiency of a vessel.
kt and kq are based on the 1q open water diagram of the Wageningen B-series, but neither thrust nor torque is explicitly calculated.
Instead, the coefficients are used to determine the open water efficiency.
The authors in <cit.> also present a 1q model based on the open water diagram of the Wageningen B-Series.
Their goal is to minimize the wear of the propulsion system and maximize the efficiency of the used hybrid energy supply system when considering propeller load fluctuations.
Therefore, they modify the presented model and suggest using a torque prediction model with online parameter estimation.
The thrust is not estimated.
1q model for controllable pitch propeller.
In <cit.> a propeller is modeled in the context of non-linear mpc for an optimal control problem to minimize the energy consumption of a bulk carrier equipped with electric propulsion.
The propeller torque is modeled as a function of shaft speed and kq.
kq is a second-order polynomial of the advance coefficient - the same is true for kt.
The coefficients for both, kt and kq, depend on the pitch of the propeller.
Comparison of propeller models.
The presented models illustrate the wide range of approaches used to model the complex behavior of propellers.
Whereas more detailed models rely on high order polynomials or sinusoidal functions, simplified models sufficiently accurate for the optimization of vessel operation are typically based on low order polynomials due to computational efficiency.
Throughout all kinds of studies about fixed pitch propeller models it is common to use the Wageningen B-Series.
§ MINIMUM TIME AND FUEL OPTIMAL CONTROL PROBLEM
This section first lays out the elementary minimum energy and time optimal control problem along a fixed path.
The nonconvexity of this problem is resolved via spatial domain reformulation and variable changes, which are discussed next.
Convex reformulations of hydrodynamic forces, propeller and energy system components are discussed immediately following the derivation of the equations governing the respective subsystem.
The final part of this section derives a condition that, when satisfied, ensures that the reformulated convex problem provides the same solution as the original nonconvex problem.
§.§ Minimum time and energy problem along a fixed path
Dynamics.
We consider a vessel with p degrees of freedom represented by the configuration vector q(t) ∈ ℝ^p and control input vector u(t) ∈ ℝ^r.
The equations of motion arising from Newton's second law are of the second-order form in the global frame
R(q) u = M q̈ + τ(q),
where q̈ is the elementwise second time derivative of the state vector, R(q) : ℝ^p → ℝ^(p × r) is the state-dependent rotation matrix, M ∈ ℝ^(p × p) is the symmetric, positive definite mass matrix and τ(q) ∈ ℝ^p is the disturbance force.
Path.
Let the function sigma : [0,T] → [0,1] denote a path coordinate, such that sigma(0) = 0, sigma(T) = 1 and the speed along the path is σ̇ = dsigma/dt > 0, where T is the terminal time <cit.>.
The vector-valued function s : [0,1] → ℝ^p defines the path of the vessel as a mapping from the path coordinate to the state vector of the vessel q at every point along the path.
The vessel moves along the path when
s(sigma(t)) = q(t)
is fulfilled for each t ∈ [0,T].
Optimal control problem.
We can represent the dynamics (<ref>) in terms of the path coordinate sigma using the relation (see Appendix <ref>)
q̈(t) = s''(sigma(t)) σ̇(t)^2 + s'(sigma(t)) σ̈(t).
By introducing the new function
b = σ̇^2
the minimum time and energy motion planning problem (P-CVX) along a fixed path is formulated as a convex optimization problem in the spatial domain:
(P-CVX):
minimize over b(sigma), u(sigma):   ∫_0^1 ( 1/√(b(sigma)) + ϕ(u(sigma)) ) dsigma
subject to   R(s(sigma)) u(sigma) = M s'(sigma) b'(sigma)/2 + M s''(sigma) b(sigma) + τ(s(sigma)),   sigma ∈ [0,1],
             ( s'(sigma)^2 b(sigma), u(sigma) ) ∈ Π(s(sigma)),   sigma ∈ [0,1],
where ϕ : ℝ^r → ℝ is a convex fuel use function and Π(q) ⊆ ℝ^p × ℝ^r is a set-valued mapping of convex sets.
The integral of the first term in the objective is T, the terminal time:
∫_0^1 dsigma/√(b(sigma)) = ∫_0^1 dsigma/σ̇ = ∫_sigma(0)^sigma(T) dsigma/σ̇ = ∫_0^T 1 dt = T.
See <cit.> for further discussion of the convexity of problem P-CVX, and its relation to the time domain formulation.
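To make the structure of P-CVX concrete, the sketch below discretizes a stripped-down single-degree-of-freedom instance with CVXPY: a straight path, the time term of the objective only, a box force limit and a speed limit. All numerical values, and the omission of the fuel term and of the sway and yaw dynamics, are simplifying assumptions for illustration; this is not the full model developed in the following sections.

import numpy as np
import cvxpy as cp

# Minimal spatial-domain discretization of P-CVX for a straight path.
N = 200
dsig = 1.0 / N
s12 = 400.0**2             # |s'|^2 for a 400 m straight path parametrized on [0, 1]
m = 1.0e5                  # mass incl. added mass [kg] (assumed)
F_max, v_max = 5.0e4, 8.0  # net surge force limit [N], speed limit [m/s] (assumed)

b = cp.Variable(N + 1, nonneg=True)   # b(sigma) = sigma_dot^2
u = cp.Variable(N)                    # net surge force on each interval

constraints = [
    # dynamics: m |s'| b'(sigma)/2 = u  (s'' = 0 and tau = 0 on a straight path)
    m * np.sqrt(s12) * (b[1:] - b[:-1]) / (2 * dsig) == u,
    s12 * b <= v_max**2,              # speed limit along the path
    cp.abs(u) <= F_max,               # thrust/braking force limit
    b[0] == 1e-6, b[-1] == 1e-6,      # start and end nearly at rest
]
# Travel time: integral of 1/sqrt(b) over sigma (left-endpoint rule; convex for b > 0).
travel_time = dsig * cp.sum(cp.power(b[:-1] + 1e-9, -0.5))
problem = cp.Problem(cp.Minimize(travel_time), constraints)
problem.solve()
print("minimum travel time [s]:", problem.value)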
Relaxation and change of variables.
In the following sections, vessel subsystem models that are compatible with the problem formulation P-CVX are discussed.
To support this discussion, auxiliary variables and constraints that will be applied throughout the following sections are introduced.
The explicit dependence of functions on sigma will be omitted in most cases.
§.§ Vessel dynamics and hydrodynamic forces
The vessel is regarded as a rigid body, and the maneuvering model restricts the motion to three degrees of freedom: surge, sway and yaw.
The heel is discarded, meaning the vessel always maintains an upright orientation.
Only still water condition is considered.
This in-plane motion approximation is commonly used for surface vessels <cit.>.
§.§.§ Three-degree-of-freedom vessel model
The navigational position of the vessel is given in an xy-plane fixed to the earth.
The origin (0,0) is located at the initial location of the vessel’s center of gravity.
Let q = (x,y,θ) denote the vessel configuration state vector, where elements (x,y) are the in-plane position and θ is the orientation with respect to x-axis.
The hydrodynamic forces consist mainly of resistance from moving the hull entrained in fluid, and forces generated by the propellers and rudders.
Let u⃗=(T_p, F_D, F_H, F_P, F_R)^⊤ denote the force input vector, where T_p is propeller thrust, F_D is the total drag, F_H is the hydrodynamic hull force from drift, F_P is the hydrodynamic damping force and F_R is rudder steering force.
The force inputs are represented in the frame of the vessel (Figure <ref>).
The rotation matrix maps the force inputs in the vessel frame into the global frame:
R =
[ cos(θ) -cos(θ) -sin(θ) -sin(θ) sin(θ); sin(θ) -sin(θ) cos(θ) cos(θ) -cos(θ); 0 0 L_H -L_P L_R ],
where L_H, L_P and L_R denote the lever arms of drift, damping and rudder forces, respectively.
The mass and inertia matrix is written as
M = diag(m_vessel (1+k_1),m_vessel (1+k_1),I+k_2 I_w),
where m_vessel is vessel mass, I is the moment of inertia of the vessel, I_w is the moment of inertia of displaced water and k_1 and k_2 are dimensionless coefficients of the added inertia <cit.>.
The added inertias account for the fact that accelerating the vessel body at a given rate in fluid requires larger force than in vacuum <cit.>.
The equations of motion arising from Newton’s second law are of the second order form in the global frame,
R(q) u = M q̈ + τ(q),
where q̈ is the elementwise second time derivative of the state vector q and τ is the vector of disturbance forces.
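The two matrices can be assembled directly from the expressions above; the short helper below (our own illustrative transcription, not code from the original work) does exactly that.

import numpy as np

def rotation_matrix(theta, L_H, L_P, L_R):
    # Maps body-frame inputs u = (T_p, F_D, F_H, F_P, F_R) to the global frame.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -c, -s, -s,  s],
                     [s, -s,  c,  c, -c],
                     [0.0, 0.0, L_H, -L_P, L_R]])

def mass_matrix(m_vessel, I, I_w, k_1, k_2):
    # Mass and inertia matrix with added-inertia coefficients k_1 and k_2.
    return np.diag([m_vessel * (1 + k_1), m_vessel * (1 + k_1), I + k_2 * I_w])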
The next paragraphs derive the expressions governing the input forces and couple them to the time derivative of the configuration states.
§.§.§ Drift
The nominal orientation of the vessel is assumed to be known, dependent on the position (x,y), and in the direction of traversal.
We allow the true orientation to vary by the angle β from the nominal orientation.
Any deviation from the nominal orientation implies that the vessel has nonzero speed in the sway direction, which is called drifting.
Drift generates a lift force that acts perpendicular to the hull.
The force acts through a point located at distance L_H from the center of gravity towards the bow.
The yawing moment at the center is the moment arm L_H multiplied by the force F_H.
In the presence of drift, the inflow to the hull resembles the flow over a low aspect ratio airfoil at an angle of attack β, which is the drift angle <cit.>.
The lift produced by the oblique flow is
F_H = 1/2rho_sw S C_Lv_s^2,
where S is the area of the lifting surface and v_s is the vessel speed in surge direction:
v_s = ‖ (ẋ,ẏ) ‖_2
= √(ẋ^2 + ẏ^2)
= √(q̇_1^2 + q̇_2^2).
Maximum drift.
The coefficient C_L can be approximated for small β as
C_L = a_L,0 + a_L,1β,
where a_L,0, a_L,1∈_++ are constants.
Then, the maximum lift that can be generated by drifting,
|F_H| ≤rho_sw/2 S C_L,maxv_s^2 ,
is limited by the largest permitted drift angle β_max or the largest permitted lift coefficient C_L,max = a_L,0 + a_L,1β_max, respectively.
Convexification of drift.
The inequality (<ref>) defines a nonconvex set because the function on the right-hand side is convex instead of affine or strictly concave <cit.>.
Thus, with (<ref>) (see Appendix <ref>) and the auxiliary variable b the vessel speed in surge direction is rewritten as
v_s = √( (s'_1 σ̇)^2 + (s'_2 σ̇)^2 ) = √( s_1'^2 + s_2'^2 ) σ̇ = √( s_1'^2 + s_2'^2 ) √(b).
The first term is hereafter abbreviated as
s'_12 = (s_1^' 2 + s_2^' 2)
and can be directly obtained from the definition of the path, and thus calculated prior to solving the optimization problem.
By inserting (<ref>) into (<ref>) the right-hand side of (<ref>) becomes affine in b and can be used to limit the drift in the convex optimization problem with the constraint
|F_H| - (rho_sw/2) S C_L,max s'_12 b ≤ 0,
which is given in the standard form of inequality constraints in optimization problems.
§.§.§ Drag
Induced drag from drift is incorporated to the total resistance via the total drag coefficient C_D.
The drag is given by
F_D = 1/2rho_sw C_D A_sv_s^2,
where A_s is the combined wetted surface area of the hull and the rudder. The drag coefficient is
C_D = C_L^2/πΩ + C_F + C_R,
where Ω is the aspect ratio of the lifting surface and C_F is the frictional resistance coefficient, which can be evaluated according to the ITTC-57 empirical formula
C_F = 0.075/(log_10(Rn)-2)^2 , Rn=v_s L/ν,
where L is vessel length and ν is kinematic viscosity of water <cit.>.
In (<ref>), the residual coefficient C_R accounts for wave-making and viscous pressure resistance.
Estimates based on the Froude number F_n=v_s/√(g L) are given in various sources, while a more accurate model is obtained by fitting a convex function to data generated by CFD simulation.
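A small helper for evaluating the ITTC-57 line and the Froude number is sketched below; the default kinematic viscosity is an assumed value for seawater at roughly 15 °C, and the example call uses made-up model-scale numbers.

import numpy as np

def ittc57_cf(v_s, L, nu=1.19e-6):
    # ITTC-57 frictional resistance coefficient; Rn is the Reynolds number.
    Rn = v_s * L / nu
    return 0.075 / (np.log10(Rn) - 2.0) ** 2

def froude_number(v_s, L, g=9.81):
    return v_s / np.sqrt(g * L)

# Example at model scale (assumed values): a 3 m hull at 1.2 m/s.
print(ittc57_cf(1.2, 3.0), froude_number(1.2, 3.0))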
By expressing the coefficient of lift C_L as a function of the drift force from (<ref>) and inserting the result into (<ref>), the drag
F_D = (rho_sw/2) (C_F + C_R) A_s v_s^2 + 2 F_H^2 A_s / ( rho_sw πΩ S^2 v_s^2 )
is obtained.
The added drag due to rudder deflection is discussed in Section <ref>.
Convexification of the drag.
The right-hand side of this formulation is not affine and cannot be included in the convex optimization model.
However, by inserting (<ref>), the right-hand side becomes convex in b and F_H and the relaxed form of (<ref>),
(rho_sw/2) (C_F + C_R) A_s s'_12 b + 2 F_H^2 A_s / ( rho_sw πΩ S^2 s'_12 b ) - F_D ≤ 0
becomes a lower bound for the drag.
§.§.§ Yaw damping
The rotation of the vessel gives rise to a hydrodynamic force and an associated moment which depend on the rate of turn and the surge speed.
The hydrodynamic force dampens turning because the moment acts opposite to the drift moment.
Slender-body theory has been observed to provide a good approximation of force and moment involved in turning <cit.>.
Slender-body theory models the hull as a stack of thin sections that are approximated by simple shapes whose added mass is given by an analytic expression.
The net force prediction from slender-body theory is calculated by integrating strip force at coordinate x over the entire hull <cit.>.
For a vessel with a pointed bow, the added mass at the bow is zero, resulting in net force
F_P = x_T m_a(x_T) v_sθ̇.
Here, x_T is the coordinate for stern.
Assuming draft T is constant along the hull length, and added mass approximated by its value for an ellipse, the strip added mass m_a(x), can be estimated as
m_a(x)=π/2rho_sw T^2,
where the factor 1/2 is applied to include only the lower half of the ellipse <cit.>.
Convexification of damping.
The right-hand side of (<ref>) involves the product v_s θ̇, which can be rewritten as
v_s θ̇ = s'_3 √(b) √(s'_12 b) = s'_3 √(s'_12) b,
which is linear in b.
Thus, (<ref>) obtains the form of a linear equation.
§.§.§ Rudder lift
A rudder set at a nonzero angle ω develops a lift force that acts sideways (sway direction) and creates a yaw moment at the center of gravity that turns the vessel.
An angled rudder also creates a drag force that acts in the surge direction.
Downstream velocity.
The velocity of the water flowing to the rudder is substantially higher than the speed of the vessel because the rudder is located downstream from the propeller.
The downstream velocity can be evaluated by means of the thrust loading coefficient,
C_th = T_p / ( (rho_sw/2) v_a^2 (π/4) D_p^2 )
describing the loading degree of the propeller.
The flow velocity downstream,
v_ds = v_a √(1 + C_th) = √( v_a^2 + T_p / ( (rho_sw/2) (π/4) D_p^2 ) )
is calculated according to potential flow theory <cit.>.
Due to the close proximity of the propeller and rudder and turbulent mixing of the water jet and surrounding flow, the velocity of the water entering the rudder is slightly lower than v_ds.
The corrected velocity is expressed as v_R = k_tm v_ds, where k_tm ≈ 0.9.
Boundaries of rudder lift force.
The lift force generated by the rudder is given as
L_rudder=rho_sw/2 C_K A_R v_R^2,
where A_R is the projected area of the side view of the rudder <cit.>.
The lift coefficient is
C_K=2 πΛ( Λ + 1 )/( Λ + 2 )^2sin(ω),
where Λ=b_R^2/A_R denotes aspect ratio with the average height b_R and ω is the angle of attack.
The generated rudder force
|F_R| ≤ (rho_sw/2) C_K(ω_max) A_R v_R^2
= (rho_sw/2) C_K(ω_max) A_R k_tm^2 ( v_a^2 + T_p / ( (rho_sw/2) (π/4) D_p^2 ) )
is limited by the total angle of attack, set as ω_max.
Convexification of boundaries of rudder lift force.
By inserting (<ref>) into (<ref>), an affine formulation of the boundaries of the rudder lift force is obtained
|F_R| - (rho_sw/2) C_K(ω_max) A_R k_tm^2 ( (1 - f_w)^2 s'_12 b + T_p / ( (rho_sw/2) (π/4) D_p^2 ) ) ≤ 0.
Here, T_p is given in (<ref>).
Rudder drag.
We only model the drag due to deflection, because the rudder area and thus the viscous drag is already included in the hull surface area. The drag force <cit.>
D_R = (rho_sw/2) · 1.1 · ( C_K^2 / (πΛ) ) A_R v_R^2
yields a convex inequality when the term C_K is expressed in terms of F_R
D_R ≥ 2.2 F_R^2 / ( A_R πΛ rho_sw k_tm^2 ( (1 - f_w)^2 s'_12 b + T_p / ( (rho_sw/2) (π/4) D_p^2 ) ) ).
§.§ Propeller model
Fixed-pitch propellers are considered for propulsive thrust generation.
The respective model needs to replicate thrust, torque and efficiency with sufficient accuracy, but it is not required to capture all hydrodynamic effects.
The latter can be achieved with more computationally expensive models during the propeller design process, which is not part of this work.
Thrust and torque.
The first step is to rearrange equations (<ref>) and (<ref>) as
T_p = rho_sw D_p^4 kt(J) n_p^2
and
Q_p = rho_sw D_p^5 kq(J) n_p^2 .
Neither of the equations forms a convex set.
The expressions for kt and kq are polynomials - see (<ref>) and (<ref>) - and hence not easily convexified.
Convexification of thrust and torque coefficients.
The curves in the open water diagram in fig:owd_example suggest the convex, second-order polynomial approximations
kt(J) = - a_T,2J^2 - a_T,1J + a_T,0
and
kq(J) = - a_Q,2J^2 - a_Q,1J + a_Q,0
depending on the advance coefficient.
This assumption is supported by the findings in <cit.>.
In equation (<ref>), a_T,2 ∈ ℝ_+, a_T,1 ∈ ℝ_+ and a_T,0 ∈ ℝ_++ are the coefficients for the approximation function of kt.
They are determined by fitting the second-order polynomial to the original open water diagram.
The same is true for the coefficients of the torque coefficient function, a_Q,2 ∈ ℝ_+, a_Q,1 ∈ ℝ_+ and a_Q,0 ∈ ℝ_++, in (<ref>).
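In practice the coefficients can be obtained with an ordinary least-squares fit to open-water data, as sketched below; the sample points are made up purely for illustration, and the final check verifies that the fitted curves are concave in J so that a_T,2 and a_Q,2 are non-negative as required.

import numpy as np

# Made-up open-water samples (J, KT, KQ), for illustration only.
J  = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
KT = np.array([0.45, 0.39, 0.31, 0.21, 0.10])
KQ = np.array([0.060, 0.054, 0.046, 0.035, 0.022])

# Fit K(J) ~ c2*J^2 + c1*J + c0 and map to the sign convention used above:
# K(J) = -a2*J^2 - a1*J + a0.
cT2, cT1, cT0 = np.polyfit(J, KT, 2)
aT2, aT1, aT0 = -cT2, -cT1, cT0
cQ2, cQ1, cQ0 = np.polyfit(J, KQ, 2)
aQ2, aQ1, aQ0 = -cQ2, -cQ1, cQ0

assert aT2 >= 0 and aQ2 >= 0, "fitted curves must be concave in J"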
Convexification of the thrust.
By inserting the thrust coefficient approximation from equation (<ref>) in equation (<ref>) and by replacing the advance coefficient with equation (<ref>), the thrust in equation (<ref>) is rewritten as
T_p = rho_sw D_p^4 ( - (a_T,2/D_p^2) v_a^2 - (a_T,1/D_p) v_a n_p + a_T,0 n_p^2 ).
As this equation must be represented as an equality constraint, the right-hand side needs to be formulated as an affine expression.
Thus, a new auxiliary variable
n̄_p = n_p^2,
substituting the squared shaft speed, is introduced (the overbar marks the auxiliary variable to distinguish it from the shaft speed n_p).
Since equation (<ref>) is still not convex due to the bilinear term of advance speed and shaft speed, another auxiliary variable
z = v_s n_p ≤ √(s'_12 b n̄_p) ⇔ z - √(s'_12 b n̄_p) ≤ 0
is proposed.
It expresses the aforementioned bilinear term of advance and shaft speed as the square root of the auxiliary variables from equations (<ref>) and (<ref>).
With the auxiliary variables in (<ref>), (<ref>), and (<ref>), the thrust is rewritten as
T_p = - a_T s'_12 b - a_T z + a_T n̄_p.
For the sake of readability, the auxiliary coefficients a_T, a_T and a_T are introduced.
They are given in app:T_p_coeffs.
The formulation (<ref>) is affine and can be included as an equality constraint in the convex optimization model.
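In a disciplined convex programming tool the relaxation of the bilinear term can be written with a geometric-mean atom; the one-point CVXPY sketch below is our own illustration, with the path data s'_12 fixed to an arbitrary constant. Whether the relaxation is tight at the optimum is exactly what the condition derived at the end of this section addresses.

import cvxpy as cp

# DCP form of the relaxed bilinear term z <= sqrt(s'_12 * b * n_bar_p) at one path point.
s12 = 1.0                          # s'_12, path data (assumed constant here)
b     = cp.Variable(pos=True)      # sigma_dot^2
n_bar = cp.Variable(pos=True)      # squared propeller shaft speed
z     = cp.Variable(nonneg=True)   # relaxation of the product v_s * n_p

relaxation = [z <= cp.geo_mean(cp.hstack([s12 * b, n_bar]))]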
Convexification of the torque.
An analogous approach leads to the affine formulation of the propeller torque
Q_p = - a_Q s'_12 b - a_Q z + a_Q n̄_p.
Once again, auxiliary coefficients a_Q, a_Q and a_Q, given in app:Q_p_coeffs, are used.
Convexification of the propulsive power.
To obtain the last missing value for the propeller model, namely the change in propeller energy input, the equation for the propulsive power is formulated as
P_p,in = 2πQ_p n_p.
Since the power is the derivative of the energy with regard to time, it cannot be included in the convex optimization model.
Therefore, a fictive propeller input force - the index “in” is omitted for the sake of readability -
F_dE_p = dE_p/dsigma = (dE_p/dt)(dt/dsigma) = P_p,in (dt/dsigma) = P_p,in/√(b)
is introduced.
By replacing the propulsive power with (<ref>), (<ref>) is rewritten and the change in propeller energy input
F_dE_p = 2πQ_p n_p/√(b)
is derived.
The resulting function cannot be included in a convex optimization model.
Therefore, the propulsive torque is replaced with equation (<ref>).
Additionally, equation (<ref>) is relaxed with the auxiliary variables from equations (<ref>), (<ref>) and (<ref>) and the right-hand side of
F_dE_p ≥ - k_dEp z - k_dEp n̄_p + k_dEp n̄_p^2 / z
is now convex.
Again, for the sake of readability the auxiliary coefficients k_dEp∈_+, k_dEp∈_+ and k_dEp∈_++ are introduced.
They are given in app:coeff_dE_p.
Convexification of propulsive power for reduced model.
The auxiliary variable z can be eliminated from the propeller model by assuming that a_T,1 = a_Q,1 = 0, at the cost of a larger fitting error.
In this case, the thrust, torque and energy input expressions retain their convexity.
Appendix <ref> provides a proof of the convexity of the energy input expression
F_dE_p ≥ 2π ( a_Q,0 n̄_p^2 D_p^5 / √(n̄_p s'_12 b) - a_Q,2 √(n̄_p s'_12 b) D_p^3 )
for the reduced model.
§.§ Subsystem models
§.§.§ Drivetrain model
The energy system under consideration is shown in fig:es_layout.
It consists of a general energy converter, e.g., a fuel cell or a generating set, a battery system, power electronic components and two electric motors with one gearbox each.
Depending on the actual type of the general energy converter, the respective DC/DC converter might need to be replaced by a three-phase rectifier circuit.
This would be the case for a generating set or a gas turbine.
Electric machine input power.
As seen in fig:es_layout, the propeller is drive by the electric machine via a gearbox.
Both components, the gearbox and the electric machine, are lossy.
Based on the assumption of constant efficiencies, the electrical input power per electric machine is
P_EM,in = P_EM,out/η_EM = P_p,in/η_EMη_g,
where η_EM and η_g are the efficiencies of electric machine and gearbox respectively, which are summarized in η̃_EM = η_EMη_g.
If the electric machine and its inverter are treated as one unit, the efficiency of the inverter η_inv can be included in this term η̃_EM = η_EMη_gη_inv.
To be compatible with equation (<ref>) the power needs to be transformed to a purely mathematical force representing the change in energy over distance.
This leads to the electric machine input force
F_em = F_dE_p/η̃_EM.
The index in is once again omitted for the sake of readability.
Electric machine speed limit.
The electric machine model incorporates torque, power and speed limitations.
The shaft speed of the electric machine
n_EM = i_g n_p
and its torque
Q_EM = Q_p/η_g i_g
are derived using the transmission ratio of the gearbox i_g.
As the propeller shaft speed is not included in the model as a variable, the auxiliary variable from equation (<ref>) is used in equation (<ref>) to formulate the upper limit for the shaft speed
n̄_p - ( n_EM,max/i_g )^2 ≤ 0,
where n_EM,max is the maximum electric machine shaft speed.
Electric machine torque limit.
With the maximum electric machine torque Q_EM,max the torque limitation
Q_p - Q_EM,maxη_g i_g≤ 0
is given.
This is only accurate for speeds below the nominal speed n_EM,n because for higher speeds in the field-weakening region, the maximum torque decreases proportionally to the inverse shaft speed.
This behavior is depicted in fig:EM_char, and it is common to both three-phase induction machines and synchronous permanent magnet machines, if they are controlled appropriately.
Electric machine power limit.
fig:EM_char and the mechanical power equation
P_EM,out = 2π Q_EM n_EM
show that limiting the electric machine output power to a constant value in the field weakening region ensures a proper torque limitation for speeds above nominal speed.
With the inclusion of a constant torque limit and a constant electric machine output power limitation, the electric machine limits can be modeled over the whole relevant speed range.
Recalling the relationship between fictive force, power and inverse speed squared, the limitation for the maximum electric machine power P_EM,max is approximated with a first-order Taylor polynomial
F_EM,max = P_EM,max/√(b_r) - ( P_EM,max / (2 b_r^1.5) ) ( b - b_r )
= ( P_EM,max / (2 √(b_r)) ) ( 3 - b/b_r )
= ( P_EM,max √(s'_12) / (2 v_s,r) ) ( 3 - s'_12 b / v_s,r^2 )
around the reference speed squared b_r = v_s,r^2 / s'_12.
With this, the upper boundary is formulated as
F_em - F_EM,max≤ 0.
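Because P_EM,max/√(b) is convex in b, its first-order Taylor expansion is a global under-estimator, so the linearized cap never admits more force than the true limit; it is simply conservative away from the reference speed and becomes overly restrictive for b > 3 b_r, where it turns negative. The short numerical check below, with assumed values, illustrates this.

import numpy as np

P_max, v_r, s12 = 50e3, 6.0, 1.0     # assumed rated power [W], reference speed [m/s], |s'|^2
b_r = v_r**2 / s12
b = np.linspace(0.25, 4.0, 16) * b_r
exact  = P_max / np.sqrt(b)
taylor = P_max / (2.0 * np.sqrt(b_r)) * (3.0 - b / b_r)
assert np.all(taylor <= exact + 1e-9)   # the tangent of a convex function lies below it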
§.§.§ Energy supply system model
The energy supply system does not only need to provide the propulsive energy but also the hotel load P_aux, which is considered constant during the whole trip.
Using this value, as well as the electric machine input force from equation (<ref>), the overall fictive force balance is
k_pF_dE_p/η̃_EM + P_auxy_t + F_batd =
k_cF_c + F_batη_DC/DC,batt
including the energy converter output force F_c, the battery output force F_bat and the force of battery losses F_batd. In (<ref>), a new auxiliary variable
y_t = 1/√(b)
is introduced.
Since 1/√(b) is a convex expression, the inequality
1/√(b) - y_t≤ 0
can be included in the convex optimization problem formulation.
Since the electric machine input force is given per machine respectively per propeller, the number of propellers k_p must be included to retrieve the overall demand for propulsion.
In addition, the efficiency for the DC/DC converter of the battery η_DC/DC,batt needs to be considered.
Due to the convex formulation of the fictive propulsive force in (<ref>), the fictive force balance (<ref>) must be relaxed to
k_pF_dE_p/η̃_EM + P_auxy_t + F_batd -
( k_cF_c + F_batη_DC/DC,batt) ≤ 0.
In Section <ref>, it is shown in (<ref>) that the expression for battery losses is also convex.
As the forces in equation (<ref>) correspond to the output power, the respective efficiency of each energy supply component needs to be taken into account to determine the change in the actual energy stored within the respective energy storage.
The battery combines energy supply and storage in one component whereas the general energy converter only converts and supplies the energy stored in some kind of tank.
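At a single path point, the relaxed force balance and the y_t relaxation translate into two DCP constraints, as sketched below with CVXPY; all efficiency and load values are placeholders rather than data from this study.

import cvxpy as cp

k_p, k_c = 2, 1                # propellers in use / converters online (assumed)
eta_EM, eta_dc = 0.90, 0.98    # combined EM+gearbox and battery DC/DC efficiencies (assumed)
P_aux = 20e3                   # hotel load [W] (assumed)

b      = cp.Variable(pos=True)
y_t    = cp.Variable(pos=True)      # relaxation of 1/sqrt(b)
F_dEp  = cp.Variable(nonneg=True)   # propeller energy "force" per propeller
F_c    = cp.Variable(nonneg=True)   # converter output force
F_bat  = cp.Variable()              # battery output force (negative when charging)
F_batd = cp.Variable(nonneg=True)   # battery loss force

supply_demand = [
    cp.power(b, -0.5) <= y_t,                      # y_t >= 1/sqrt(b)
    k_p * F_dEp / eta_EM + P_aux * y_t + F_batd
        <= k_c * F_c + eta_dc * F_bat,             # relaxed force balance
]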
§.§.§ General energy converter model
In addition to the battery, the energy supply system includes K general energy converters, e.g., fuel cells or generating sets comprising a generator and a diesel engine.
All K converters are assumed to be of the same making.
The required power input P_c,in, i.e., the power provided by the fuel, of one converter is calculated as a linear function of the output power P_c,
P_c,in = a_c + (a_c - 1) P_c,
which can be directly transferred into the fictive internal force
F_ci = a_cy_t + (a_c - 1) F_c.
The coefficients a_c_,i are determined by the part-load efficiency characteristics of the respective converter.
Operation of multiple converters.
With multiple converters of the same making, it is possible to switch off some of the converters for legs of the trip with low power demand to increase the efficiency of the converters running.
Including this decision in the optimization model would require integer decision variables, thereby destroying the beneficial structure of the convex optimization problem.
However, since zero-emissions must be modeled, the number of converters turned on k_c∈ is included in (<ref>).
Additionally, for the legs of the trip with a speed limit, it is possible to limit the number of converters turned on based on the sum of the minimum power required to propel the vessel at the speed limit.
As the location of both zero-emission zones and legs of the trip with a speed limit is fully determined by the path, k_c(σ) can be calculated prior to the optimization.
Energy converter power limits.
As for the electric machine, the power limit of the energy converter translates to a force limit.
The affine expression for the upper boundary on the converter force
F_c - P_c,max √(s'_12)/(2 v_s,r) ( 3 - s'_12 b/v_s,r^2) ≤ 0
is obtained by a first-order Taylor approximation around the reference speed v_s_,r.
The lower boundary is simply given by
-F_c≤ 0.
§.§.§ Battery system model
The operational behavior of a battery is determined by various electrochemical processes.
Different modeling approaches exist, ranging from detailed electrochemical to “black box” ones.
A typical approach in engineering applications is the equivalent circuit model because it combines sufficient accuracy with low computational cost <cit.>.
The equivalent circuit model consists of a voltage source representing the open-circuit voltage (OCV) of the battery, an internal resistance modeling the linear part of the losses and one or more RC networks for modeling the dynamic characteristics.
As this work focuses on energy consumption and its resulting costs, the battery dynamics are not as relevant, and the RC networks are not implemented to simplify the model.
The resulting equivalent circuit is depicted in Figure <ref>.
Another assumption regarding the battery model is the temperature independence of OCV and internal resistance.
This is justified if proper cooling is implemented, which we assume to be the case.
Losses.
The equivalent circuit is used to derive the load-dependent losses
P_batt,d = R_i I_batt^2 = R_i/U_0^2 P_batt^2,
which can be expressed as a function of the internal resistance R_i and the battery current I_batt as well as a function of the battery output power P_batt and the OCV U_0.
The right-hand side is derived by replacing the battery current with the quotient of battery output power and OCV.
This is only possible if the voltage drop across the internal resistance is neglected.
This is a common approach when deriving battery models for convex optimization problems <cit.>.
Other simplifications that are typical for convex battery modeling are the assumption of SOC-independent internal resistance and OCV <cit.>.
Convexification of the losses.
To add the battery model to the optimization model, equation (<ref>) has to be transformed into the respective fictive force
F_batd = R_i/U_0^2 F_bat^2 √(b) = R_i/U_0^2 F_bat^2/y_t.
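For illustration, this quadratic-over-linear loss term maps directly onto CVXPY's quad_over_lin atom. The sketch below uses made-up values for R_i and U_0 and imposes the loss as an epigraph-style inequality; it is not the full energy system model.

import cvxpy as cp

R_i, U_0 = 0.01, 700.0             # example internal resistance [Ohm] and OCV [V]
F_bat = cp.Variable()              # battery output force
y_t = cp.Variable(pos=True)        # 1/sqrt(b)
F_batd = cp.Variable(nonneg=True)  # fictive force of battery losses

# Epigraph form of the loss definition: F_batd bounds F_bat^2 / y_t from above.
loss_con = (R_i / U_0**2) * cp.quad_over_lin(F_bat, y_t) <= F_batd
print(loss_con.is_dcp())  # True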
SOC update and capacity limits.
In addition to the constraint regarding battery efficiency, several other constraints are required to model the battery operation accurately.
The first one is the battery energy E_bat at an arbitrary point l during the trip.
Using the initial battery energy at the beginning of the trip E_bat,0, the change in battery energy content after travelling a certain distance is expressed as
dE_bat(l) = E_bat(l) - E_bat,0
resulting in the dynamics
dE_bat'(σ) = - F_bat√(s'_12).
The energy content of the battery, or the change in battery energy, respectively, is limited due to physical boundaries.
Stricter boundaries,
dE_bat_,min - dE_bat ≤ 0,
dE_bat - dE_bat_,max ≤ 0
can be applied based on the minimum and maximum SOC of the battery, SOC_min and SOC_max, respectively.
These are used to improve the battery lifetime and justify the constant voltage assumption in equation (<ref>).
Battery power limits.
Besides restricting the total battery energy, the battery output force also has to be limited.
This represents the power limit in the time domain.
With the maximum charge power P_cha, max and the maximum discharge power P_dis, max of the battery – both values are assumed to be non-negative – the force boundaries are approximated analogous to the one of the electric machine in (<ref>),
F_cha,max = P_cha,max/2√(s'_12)/v_s_,r( 3 - s'_12b/v_s_,r^2),
F_dis,max = P_dis,max/2√(s'_12)/v_s_,r( 3 - s'_12b/v_s_,r^2)
resulting in the constraints
-F_cha,max - F_bat ≤ 0,
F_bat - F_dis,max ≤ 0
for limiting the output power of the battery in the convex optimization model.
SOC-sustainability.
Another constraint might be implemented depending on the desired type of operation regarding SOC-sustainability.
If the SOC of the battery should be the same at the end and at the start of the trip, the battery is operated in a SOC-sustaining manner.
In this case, the constraint
dE_bat(1) = 0
has to be added.
§.§ Minimum time and energy problem in spatial domain
Let the vector-valued function with the left-hand side of the given equations
g(b,u) =
[ (<ref>) (<ref>) (<ref>) (<ref>) (<ref>) (<ref>) (<ref>).
. (<ref>) (<ref>) (<ref>) (<ref>) (<ref>) (<ref>) (<ref>)]^⊤
represent all the inequality functions in standard form.
With the state variables x = [ b; dE_bat ] and the extended input vector
u = [ F_D F_H F_R y_t D_R z n_p F_c F_bat ]^⊤
the minimum time and fuel motion planning problem is
min. ∫_0^1[ (k_c a_c + w_t) y_t + k_c a_c F_c ] dσ
s.t. R [ T_p - F_D - D_R; F_H + F_P - F_R; F_H + F_P - F_R; ]
= M [ s'b'/2 + s” b] + τ,
dE_bat' = -F_bat√(s'_12),
g≤0⃗,
b(0) = v_init^2/s'_12, b(1) = v_final^2/s'_12,
dE_bat(0) = dE_bat(1),
where T_p and F_P are given in (<ref>) and (<ref>), respectively. The first three constraints are enforced for each σ∈ [0,1].
§.§ Optimality conditions
It is not obvious that the relaxations of (<ref>), (<ref>) and (<ref>) are valid, i.e., that they hold with equality at the optimum and the solution of the relaxed convex problem is the same as the solution of the original nonconvex problem.
The relaxed inequalities (<ref>), (<ref>) and (<ref>) hold with equality at the optimum, i.e.,
y_t^* = 1/√(b^*),
z^* = √(s'_12 b^* n_p^*),
k_c F_c^* + F_bat^* η_DC/DC,batt =
F_dE_p^* k_p/η̃_EM + P_aux y_t^*
for σ∈ [0,1], if the propeller parameters satisfy the condition
1 < a_Q1 a_T0/(a_Q0 a_T1) - a_Q2 a_T0^2/(a_Q0 a_T1^2)
and the battery dissipation power satisfies the condition
P_batt,d(P_batt) ≤P_aux
and the electric machine operates below maximum speed limit
n_EM < i_g n_p
and below maximum torque limit
Q_EM < Q_p/η_g i_g
and the converters deliver positive output power, which translates to the condition on force
F_c > 0
for each σ∈ [0,1].
Let the scalar function f_obj represent the objective function of the problem and the vector function f_dyn represent the dynamics b' and dE_bat'.
The augmented Hamiltonian of the problem is given by
H = f_obj + Ψ^⊤ f_dyn + λ^⊤ g,
where Ψ is a vector of costates and λ is a vector of nonnegative Lagrange multipliers.
The necessary conditions for optimality given in <cit.> are stated.
* Adjoint system:
Ψ' = - ∂ H/∂x.
* Minimum Principle:
∂ H/∂u= 0.
* Complementary slackness:
λ g = 0.
Force balance.
The Minimum Principle states that
∂ H/∂ F_c = k_c a_c + λ_F_c,max - k_c λ_ESS = 0,
where λ_ESS and λ_F_c,max are the elements of the vector λ corresponding to the force balance and the converter maximum force limit. Since λ_ESS = a_c + λ_F_c,max/k_c > 0, complementary slackness implies that the inequality (<ref>) holds with equality at the optimum.
Inverse speed squared.
Similarly expanding the Minimum Principle gives
∂ H/∂ y_t = k_c a_c + w_t - λ_y_t + λ_ESS( P_aux - R_i/U_0^2 F_bat^2/y_t^2) = 0,
where λ_y_t is the Lagrange multiplier corresponding to the inequality (<ref>). It follows that λ_y_t > 0, if
k_c a_c + w_t + ( a_c + λ_F_c,max/k_c) ( P_aux - R_i/U_0^2 F_bat^2/y_t^2) > 0
holds. The condition reduces to
P_batt,d(P_batt) ≤ P_aux < P_aux + ( w_t + k_c a_c)/( a_c + λ_F_c,max/k_c)
since the last term on the right-hand side is positive.
Propeller.
Expanding ∂ H/∂z and ∂ H/∂n_p yields
∂ H/∂ z = - k_xy a_T + λ_z
+ λ_ESS k_p/η̃_EM( - k_dEp - n_p^2/z^2) = 0,
∂ H/∂ n_p = k_xy a_T - λ_z s'_12 b/(2√(s'_12 b n_p))
+ λ_ESS k_p/η̃_EM( - k_dEp + k_dEp n_p/z) = 0,
where
k_xy = ( Ψ_x2/s'_1 m_vesselcosθ + Ψ_y2/s'_2 m_vesselsinθ).
Let λ_z^* = 0, which implies z^* < √(s'_12 b^* n_p^*) according to complementary slackness. By solving the expressions above with respect to k_xy η̃_EM/(λ_ESS k_p) and equating them, we obtain
k_dEp/a_Tn_p^2/z^2 -
2 k_dEp/a_Tn_p/z
+ k_dEp/a_T + k_dEp/a_T = 0,
which is a quadratic equation in n_p/z.
Real valued roots only exist, if
4 ( k_dEp/a_T)^2 - 4 k_dEp/a_T(k_dEp/a_T + k_dEp/a_T) ≥ 0
holds, which simplifies to
k_dEpa_T/a_T^2 - k_dEp/a_T - k_dEp/a_T≥ 0,
since k_dEp > 0.
If this condition does not hold, z = 0 is infeasible, and (<ref>) holds with equality at the optimum.
In instances where the converter is off or idling, F_c=0 and the Lagrange multiplier λ_F_c,min, corresponding to the converter minimum force limit, enters into the term ∂H/∂F_c
∂ H/∂ F_c = k_c a_c - λ_F_c,min - k_c λ_ESS = 0
which does not necessitate λ_ESS > 0. This case can be treated by the approach introduced in <cit.>, which splits the objective function integral into instances under electrical operation and instances with positive converter power.
§ PATH
§.§ Path parametrization as a parametric polynomial curves
We employ Bézier curves, a type of parametric polynomials, to parametrize the path function s through a finite number of decision variables. Bézier curves are commonly used in higher-level algorithms for planning obstacle-avoiding trajectories. The curves exhibit favourable properties <cit.>:
* every point on the curve is a convex combination of the control points,
* every point on the curve is contained within the convex hull of the control points.
The first property reduces the collision avoidance problem to a tractable convex optimization problem for placing the control points inside convex safe regions, that is, outside of obstacles. The second property ensures that the path is always feasible given that the control points are contained in safe regions.
The curve B is defined as a convex combination of points P_0,...,P_n, called control points
B(σ) = ∑_i=0^n Θ_i,n(σ)P_i,
where σ∈ [0,1]. The coefficients of the convex combination are polynomials of degree n:
Θ_i,n(σ)= [ n; i ]σ^i (1-σ)^n-i, i=0,...,n.
The first and second derivatives of the path are needed for computing the magnitudes of forces due to translational motion, and for computing the orientation angle and angular speed. Moreover, the third derivative of the path is required for computing angular acceleration. To this end, we will make use of the property that the derivatives of B are themselves Bézier curves, which allows us to compute the derivatives exactly at the discretization points σ_0,...,σ_N in the problem implementation.
The kth derivative is
B^(k)(σ)=n(n-1)(n-2)⋯ (n-k+1) ∑_i=0^n-kΘ_i,n-k(σ)D_i^k,
where D_i^k is the finite difference
D_i^k = D_i+1^k-1 - D_i^k-1, i=0,...,n-k,
D_i^1 = P_i+1 - P_i, i=0,...,n-1.
We will require that the curve has at least degree four, which ensures that it is at least three times differentiable.
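A small illustrative helper, not taken from the paper's implementation, can evaluate a Bézier curve and its k-th derivative directly from the control points using Bernstein polynomials and repeated forward differences of the control points, as described above.

import numpy as np
from math import comb

def bernstein(i, n, sigma):
    return comb(n, i) * sigma**i * (1.0 - sigma)**(n - i)

def bezier_derivative(P, sigma, k=0):
    """P: (n+1, d) control points; sigma: scalar or array in [0, 1]; k: derivative order."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0] - 1
    D = np.diff(P, k, axis=0)                 # k-th forward differences D_i^k (P itself for k=0)
    scale = np.prod(np.arange(n, n - k, -1))  # n (n-1) ... (n-k+1); equals 1 for k=0
    sigma = np.atleast_1d(sigma)
    basis = np.array([[bernstein(i, n - k, s) for i in range(n - k + 1)] for s in sigma])
    return scale * basis @ D

# Example: a degree-4 planar curve and its first two derivatives at sigma = 0.5
P = [[0, 0], [1, 2], [3, 3], [5, 2], [6, 0]]
for k in range(3):
    print(k, bezier_derivative(P, 0.5, k))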
§.§ Orientation and its derivatives
The orientation of the vessel is expressed with respect to the positive x-axis as a function of the path coordinate σ:
s_3(σ) = θ(σ) = atan2(s'_2, s'_1).
The derivative of θ with respect to σ depends on σ via both s'_2 and s'_1. Using the formula for total derivative gives
s'_3 = dθ/dσ =( ∂/∂ s'_2atan2(s'_2, s'_1) ) d s'_2/dσ
+ ( ∂/∂ s'_1atan2(s'_2, s'_1) ) d s'_1/dσ
= s'_1/s_1^' 2 + s_2^' 2 s_2” - s'_2/s_1^' 2 + s_2^' 2 s_1”.
The second derivative is obtained by computing the total derivative of the above expression with respect to σ:
s”_3 = s_2's_1” k_1/k_2^2 - s_1's_2” k_1/k_2^2 - s_2' s_1”'/k_2 + s_1' s_2”'/k_2,
where
k_1 = 2s_1's_1” + 2s_2's_2”,
k_2 = s_1^' 2 + s_2^' 2.
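As a quick sanity check on an arbitrary sample path (chosen here purely for illustration), the expression for s'_3 can be compared symbolically against the direct derivative of atan2(s'_2, s'_1).

import sympy as sp

sigma = sp.symbols('sigma', real=True)
s1 = sp.cos(sigma) + 2 * sigma     # example path components, not from the paper
s2 = sp.sin(2 * sigma)

s1p, s2p = sp.diff(s1, sigma), sp.diff(s2, sigma)
s1pp, s2pp = sp.diff(s1p, sigma), sp.diff(s2p, sigma)

theta = sp.atan2(s2p, s1p)
lhs = sp.diff(theta, sigma)                           # direct derivative
rhs = (s1p * s2pp - s2p * s1pp) / (s1p**2 + s2p**2)   # formula in the text

assert sp.simplify(lhs - rhs) == 0
print("Orientation-rate formula verified on the sample path.")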
§ VALIDATION
§.§ Comparison of propeller models
In this section, the poly2 fitting of kt and kq is evaluated for all 180 propellers of the Wageningen B-Series.
The configuration combinations in terms of blade number and expanded area ratio are given in <cit.>.
The pitch diameter ratio is varied between 0.6 and 1.4 in steps of 0.1.
The poly2 of kt and kq is compared with a linear fit and a poly3, which are typically used in marine vessel simulation models <cit.>.
The average relative errors of kt, kq and the open water efficiency compared to the original curves based on equations (<ref>) and (<ref>) are calculated for all propellers and all approximation variants.
The histogram of average errors (Figure <ref>) indicates that the poly2 achieves a lower average relative error for most propellers of the Wageningen B-Series than the linear fit.
The best results are attained with the poly3 which achieves an average relative error of less than 1% in all three categories.
However, the poly3 cannot be integrated into the presented convex optimization model, and the poly2 delivers acceptable results.
Approximately 77% of the poly2 fits achieve an average relative error of less than 5% in all three categories.
This is deemed tolerable since the motion planning model includes more prominent sources of error originating from the approximation of the hydrodynamic forces.
In addition, the average relative error for the poly2 is driven to these comparatively high values by the large deviation at high advance coefficients.
The operating points of a properly selected propeller are concentrated at the apex of the open water efficiency curve. Thus, the poly2 and the linear fit do not deviate as much from the poly3 for most advance coefficients as the average relative error suggests.
The relative error of poly2 around the most common operating point could be decreased further by approximating the kt and kq coefficients at the point of the most frequent operation since the resulting larger relative errors in other regions are acceptable.
§.§ Hull form
The 5415 modern surface vessel hull form is considered as our test case (Figure <ref>). The 5415 hull form is an open-source design established for benchmarking maneuvering simulation methods. Two open-water propellers, driven by strut-supported shafts, provide propulsion. The hull geometry and relevant loading conditions and speeds are described in Table <ref>.
Captive and free model tests have been conducted for the 5415 hull in the towing tank of the Maritime Research Institute Netherlands (MARIN). Captive tests target accurate measurement of hull forces and moments, while the IMO standard zig-zag maneuvers, intended for assessing vessel maneuverability, are conducted with a free-sailing model. Test specifications are reported in <cit.>. We will employ the physical model test data as a benchmark for the convexified equations for yaw, drift and rudder forces and moments.
§.§ Captive measurements
Figure <ref> compares predictions of the mathematical model parameterized according to Table <ref> to captive test measurements. Test results for hull hydrodynamic forces and moments are available for two speeds: 1.53 (9.26) m/s and 0.93 (5.56) m/s in model (full) scale. Test results for the rudder are available only for the higher speed at a number of discrete points.
The measurements indicate that the rudder stalls when the angle of attack reaches approximately 20°, developing less lift thereafter. Since the mathematical model does not capture the behaviour of a stalled rudder, the maximum angle of attack is limited to 20°, and only values up to this angle are reported.
The comparison shows good agreement of crossbody force and moment due to drift at low and moderate drift angles, while an underestimation of both force and moment is observed at a high angle and high speed. In contrast, the agreement of longitudinal force is good for high angles and poor for low angles. An underestimation of crossbody force is also observed in yawing at a low speed. Otherwise, the yaw force and moment are captured accurately by the mathematical model.
The measured rudder force and moment exhibit near-linear response within the nominal operating range. Thus, the linear rudder model can be deemed accurate.
§.§ Zig-zag maneuvering test
The zig-zag maneuver is a typical maneuvering test that is performed to assess yaw response characteristics. The test begins at a steady speed with zero rudder angle. The rudder is then deflected to 20°. Once the vessel has turned 20°, the rudder is deflected to -20° and held until the vessel has turned to -20° with respect to the initial heading. The steps are repeated until steady oscillation is reached.
The motion of the vessel, described by the convex hydrodynamic force elements, in the 20/20 zig-zag maneuver is predicted by numerical simulation and compared to tank test data of the same maneuver. In this case, the equations of motion (<ref>) in the time domain are numerically integrated using second-degree Runge-Kutta method with step size 0.01.
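A generic second-order Runge-Kutta (Heun) stepper of the kind used here is sketched below; the toy right-hand side and state stand in for the vessel's equations of motion and are not the model from this paper.

import numpy as np

def rk2_step(f, t, x, h):
    # Heun's method: one explicit second-order Runge-Kutta step
    k1 = f(t, x)
    k2 = f(t + h, x + h * k1)
    return x + 0.5 * h * (k1 + k2)

def simulate(f, x0, t_end, h=0.01):
    ts = np.arange(0.0, t_end + h, h)
    xs = np.empty((len(ts), len(x0)))
    xs[0] = x0
    for i in range(1, len(ts)):
        xs[i] = rk2_step(f, ts[i - 1], xs[i - 1], h)
    return ts, xs

# Toy example only: damped turning dynamics x = [heading, turn rate]
f = lambda t, x: np.array([x[1], -0.5 * x[1] + 0.1 * np.sign(np.sin(0.05 * t))])
ts, xs = simulate(f, np.array([0.0, 0.0]), t_end=200.0)
print(xs[-1])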
The time histories of orientation and rate of turn show that the turning characteristics of the vessel are captured accurately by the convex mathematical model (Figure <ref>). However, the model underestimates drag due to drift, which is observed as a lag of the simulated vessel in the x-y-position subplot and as a smaller speed reduction.
§ NUMERICAL EXAMPLES
To make use of a numerical optimization algorithm, we create a finite dimensional problem by discretizing x(σ) and u(σ) at n+1 evenly spaced points such that
σ_i+1 - σ_i = dσ
for all i. The finite difference approximation of the derivative is
x'(σ_i) ≈x(σ_i+1)-x(σ_i) /dσ, i=1,...,n.
The path derivatives s'(σ) and s”(σ) are evaluated at the discretization points according to (<ref>).
The finite-dimensional problem is formulated with the CVXPY <cit.> interface and solved with the primal-dual interior point algorithm implemented in the solver ECOS <cit.>. All the problem instances are solved in less than 0.5 s with n=399.
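A stripped-down sketch of such a discretized problem is shown below. It keeps only a speed-squared variable, the y_t relaxation and a crude rate limit, all with made-up bounds and boundary values, so it illustrates the CVXPY/ECOS workflow rather than the full vessel model (depending on the CVXPY version, the ECOS solver may need to be installed separately).

import cvxpy as cp

n = 399
dsigma = 1.0 / n
b = cp.Variable(n + 1)        # squared speed along the path
y_t = cp.Variable(n + 1)      # relaxation of 1/sqrt(b)

constraints = [
    b >= 0.01, b <= 4.0,                 # illustrative speed-squared limits
    cp.power(b, -0.5) <= y_t,            # lossless relaxation of y_t = 1/sqrt(b)
    cp.abs(cp.diff(b)) <= 0.05,          # crude limit on b' (acceleration proxy)
    b[0] == 0.25, b[-1] == 0.25,         # boundary conditions on speed
]
# Travel-time proxy: integral of 1/sqrt(b) over the path coordinate.
problem = cp.Problem(cp.Minimize(cp.sum(y_t) * dsigma), constraints)
problem.solve(solver=cp.ECOS)
print(problem.status, problem.value)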
Figure <ref> (top left) depicts the polynomial curve that characterizes the vessel path in all problem instances. The polynomial is defined by 40 randomly generated control points. Although the discretization points of the path coordinate σ are evenly spaced (σ_i+1 - σ_i = dσ), the points on the path s(σ_i) need not be. This is evident in Figure <ref> that shows 11 path coordinate points along the path. Finally, Table <ref> lists the fixed parameter values applied in all problem instances.
The optimal solution with initial speed 1.0 m/s is observed in Figure <ref>, which shows the time histories of vessel speed and forces acting on the hull and rudder.
The rest of this section investigates how the results vary for different requirements and input parameters.
§.§ Tradeoff between voyage duration and fuel consumption
The objective function of the hybrid vessel problem is a scalar objective: the sum of the fuel consumption objective and the voyage time objective multiplied by the weight ω_T. We aim to delimit a trade-off surface between these competing objectives, that is, the Pareto optimal points. Regardless of the prioritization (value of ω_T) of each objective, if a given point is an optimal point for the problem, then the point is also Pareto optimal due to the convexity of the problem <cit.>. This insight enables one to construct the Pareto surface by solving a sequence of problems with varying weights.
Figure <ref> illustrates the trade-off curve for two vessels with converter configurations P_G,max=25 and P_G,max=50, but otherwise identical. The speed of modern convex program solvers allows generating such trade-off curves in only a few seconds.
The parameters a_c0 and a_c1 for the smaller converter were obtained by scaling the parameters of the larger converter such that both exhibit the same efficiency at the design point, i.e., the maximum power. Both vessels are equipped with two converters of the same size. The trade-off curves were generated by sampling the weight of the sailing time term in the objective from the set
ω_T ∈{ 10, 3.33, 2, 1.25, 0.83}.
Both trade-off curves exhibit increasing marginal fuel consumption for marginal reduction in sailing time. The low power configuration attains lower fuel consumption for any given voyage time, because the converters operate at higher efficiency closer to the design point. However, the fuel efficiency comes at the cost of maximum speed and minimum possible voyage time.
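The sweep itself can be organized around a CVXPY Parameter so that only the weight changes between solves. The bi-objective toy problem below is a placeholder for the actual vessel problem; the fuel and time expressions are invented stand-ins.

import cvxpy as cp

x = cp.Variable(nonneg=True)          # e.g., a scalar "speed" decision (placeholder)
omega_T = cp.Parameter(nonneg=True)   # weight on the voyage-time objective

f_fuel = cp.square(x)                 # fuel grows with speed (placeholder expression)
f_time = cp.inv_pos(x)                # time shrinks with speed (placeholder expression)
problem = cp.Problem(cp.Minimize(f_fuel + omega_T * f_time))

pareto = []
for w in [10, 3.33, 2, 1.25, 0.83]:
    omega_T.value = w
    problem.solve(solver=cp.ECOS)
    pareto.append((f_time.value, f_fuel.value))
print(pareto)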
§.§ Battery powered legs
The hybrid power source of the vessel enables emission, noise and exhaust free operation on some subset of the voyage. This section implements two fully battery-powered voyage segments on path coordinate intervals [0.2,0.4] and [0.8,0.9]. The requirement for battery charge sustainability for the complete voyage is preserved. Thus the investigation focuses on the optimal charging of the battery.
From Figure <ref> it can be observed that the optimization algorithm ensures that the battery is charged with sufficient energy for the battery-powered legs while ensuring charge sustainability. In this case the battery discharging power limit does not impose a constraint on the speed of the battery-powered legs.
§ CONCLUSION
Modern marine transportation is moving in the direction of increased autonomy. Motivated by the need for fast and reliable decision making capability onboard, this work implements a physics-based convex optimization model for integrated hybrid power source supervisory control and dynamically feasible speed trajectory generation.
In the model formulation, the three-degrees-of-freedom vessel dynamics under a fixed pitch propeller model are convexified in the spatial domain by constraint relaxations and changes of variables.
The energy supply system consists of a fuel-to-electricity converter, e.g., a fuel cell or a generating set, and a battery to achieve local zero-emission and zero-noise operation. The whole energy system is represented by simplified loss models for each component.
The presented fixed pitch propeller model based on a physically reasonable second-order polynomial fitting of kt and kq is compared to other modeling variants, a linear fit and a poly3.
The results indicate that the chosen approach yields sufficiently accurate results around the most important operating points.
Reducing the error in the design point even further, e.g., by means of a Taylor expansion to second order about the propeller design point, could be explored in future work.
The analysis of 180 propellers of the Wageningen B-Series shows that the chosen model can be applied to a range of different propellers.
The velocity signal calculated by the planning algorithm is purely feedforward. A feedback controller needs to be implemented for tracking the signal. The optimal state and control signals respect the equations of motion of the vessel, i.e., they are dynamically feasible, which should leave only a minor tracking error for the feedback controller to clean up. Nevertheless, it is recommended to impose bounds on the control inputs which are lower than true allowable bounds. These conservative bounds account for modeling errors and prevent the feedback controller from becoming unstable <cit.>.
The generated trajectory - and the power demand prediction for the energy supply system - could be used in autonomous shipping as input for lower-level control.
This application requires the underlying optimization problem to be solvable reliably in real-time, which is achieved by the proposed convex optimization model.
In fact, it can be solved sufficiently fast to consider the whole voyage as the power demand prediction horizon if a specialized algorithm is implemented.
The special structure of the problem can be exploited to design custom algorithms <cit.>.
Future research could focus on the implementation of this algorithm in combination with realizing the convex optimization model on a real-time platform to validate the simulations with data from a case study vessel.
Further developments of the presented optimization problem could cover a controllable pitch propeller model and the inclusion of additional resistance sources like shallow water, sea currents and the state of the sea depending on weather data.
§ FUNDING
This research was funded by Business Finland’s Clean Propulsion Technologies project (ref. 38485/31/2020).
§ ACKNOWLEDGMENT
The authors thank Dr. Janne Huotari for his contribution to the development of the methodology presented in this manuscript.
§ THRUST AND TORQUE COEFFICIENT FITTING
§.§ Coefficients of the propulsive thrust function
a_T = a_T,2 ρ_sw D_p^2 (1 - f_w)^2,
a_T = a_T,1 ρ_sw D_p^3 (1 - f_w),
a_T = a_T,0 ρ_sw D_p^4.
§.§ Coefficients of the propulsive torque function
a_Q = a_Q,2 ρ_sw D_p^3 (1 - f_w)^2,
a_Q = a_Q,1 ρ_sw D_p^4 (1 - f_w),
a_Q = a_Q,0 ρ_sw D_p^5.
§.§ Coefficients of the change in propulsive energy function
k_dEp = 2π ρ_sw D_p^3 a_Q,2 (1-f_w)^2,
k_dEp = 2π ρ_sw D_p^4 a_Q,1 (1-f_w),
k_dEp = 2π ρ_sw D_p^5 a_Q,0.
§ CONVEXITY OF REDUCED PROPULSIVE POWER
Let a_1 = 2π a_Q,0 D_p^5, a_2 = 2π√(s'_12) a_Q,2 D_p^3, x_1 = n_p and x_2 = b. Using this notation, we define the function f as
f(x_1, x_2)=a_1 x_1^2/√(x_1 x_2) - a_2√(x_1 x_2), (a_1,a_2) ∈ℝ_++.
Convexity of f for (x_1,x_2) ∈ℝ_++ is shown as follows.
Let y_1 ∈ℝ, y_2 >0 and g(y_1,y_2)=y_1^2/y_2.
The function g is a standard quadratic over linear convex function that is increasing in y_1 for y_1≥0 and decreasing in y_2. Let h_1(x_1)=x_1 and h_2(x_1,x_2)=√(x_1 x_2). The function h_1 is linear, i.e., both convex and concave, and h_2 is a standard geometric mean concave function.
The function f can be expressed as
f(x_1,x_2)=a_1 g(h_1(x_1),h_2(x_1,x_2)) - a_2 h_2(x_1,x_2).
According to the general composition theorem, the composition g(h_1(·),h_2(·)) is convex when g is convex, g is increasing in its first argument and h_1 is convex, and g is decreasing in its second argument and h_2 is concave <cit.>.
The negative of a concave function is convex, so the composition -h_2 is convex.
Positive scalar multiplication and addition of convex functions are convexity-preserving operations.
Therefore, f is convex.
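A crude numerical spot check of this conclusion, with arbitrary positive constants a_1 and a_2, is to test midpoint convexity on random positive points.

import numpy as np

rng = np.random.default_rng(0)
a1, a2 = 2.0, 0.7  # arbitrary positive constants, for illustration only

def f(x1, x2):
    return a1 * x1**2 / np.sqrt(x1 * x2) - a2 * np.sqrt(x1 * x2)

x = rng.uniform(0.1, 10.0, size=(100000, 2))
y = rng.uniform(0.1, 10.0, size=(100000, 2))
mid = 0.5 * (x + y)
gap = 0.5 * (f(x[:, 0], x[:, 1]) + f(y[:, 0], y[:, 1])) - f(mid[:, 0], mid[:, 1])
print("min midpoint-convexity gap:", gap.min())  # should be >= 0 (up to rounding)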
§ AUGMENTED HAMILTONIAN
The augmented Hamiltonian of the problem is given by
H = ( k_c a_c + w_t) y_t + k_c a_c F_c +
[ ψ_x; ψ_y; ψ_θ ]^⊤ 2 diag(s')^-1( M^-1R [ T_p - F_D; F_H + F_P - F_R; F_H + F_P - F_R; ] - s”b ) +
λ_F_H,1( F_H - ρ_sw/2 S C_L,max s'_12 b) -
λ_F_H,2( F_H + ρ_sw/2 S C_L,max s'_12 b) +
λ_F_D( ρ_sw/2 (C_F + C_R) A_s s'_12 b +
2 F_H^2 A_s/(ρ_sw π Ω S^2 s'_12 b) - F_D) +
λ_F_R,1( F_R - ρ_sw/2 C_K(ω_max) A_R k_tm^2
( (1 - f_w)^2 s'_12 b + T_p/(ρ_sw/2 · π/4 · D_p^2) ) ) -
λ_F_R,2( F_R + ρ_sw/2 C_K(ω_max) A_R k_tm^2
( (1 - f_w)^2 s'_12 b + T_p/(ρ_sw/2 · π/4 · D_p^2) ) ) +
λ_z( z - √(s'_12 b n_p)) +
λ_ESS( k_p F_dE_p/η̃_EM + P_aux y_t + F_batd -
k_c F_c - F_bat η_DC/DC,batt) +
λ_n_EM,max( n_p - ( n_EM,max/i_g)^2 ) +
λ_Q_p( Q_p - Q_EM,max η_g i_g) +
λ_F_EM,max( F_em - F_EM,max) +
λ_F_c,min( -F_c) +
λ_F_c,max( F_c - P_c,max √(s'_12)/(2 v_s,r) ( 3 - s'_12 b/v_s,r^2) ) +
λ_y_t( 1/√(b) - y_t) +
ψ_dE_bat( -F_bat√(s'_12)) +
λ_dE_bat,min( dE_bat,min - dE_bat) +
λ_dE_bat,max( dE_bat - dE_bat,max) +
λ_F_cha,max( -F_cha,max - F_bat) +
λ_F_dis,max( F_bat - F_dis,max).
§ DERIVATION OF DYNAMICS IN THE SPATIAL DOMAIN
First, we rewrite the second time derivative of the configuration vector q in (<ref>) using the path coordinate σ and the fixed path s. The first time derivative is
q̇(t) = dq(t)/dt = ds(σ(t))/dt = ds(σ(t))/dσ(t) · dσ(t)/dt
where the last expression is obtained by applying the chain rule from calculus.
Using the shorthand notation ' to represent derivatives with respect to σ, the result above can be expressed concisely as
q̇(t) = s'(σ(t)) σ̇(t).
The second derivative is (we drop the time dependency of σ(t) and the path dependency of s(σ) for clarity)
q̈(t) = d/dt( ds/dσ · dσ/dt) = d/dt( ds/dσ) dσ/dt + ds/dσ · d^2σ/dt^2
= d/dσ( ds/dσ) dσ/dt · dσ/dt + ds/dσ · d^2σ/dt^2
= d^2s/dσ^2 ( dσ/dt)^2 + ds/dσ · d^2σ/dt^2
= s”(σ(t)) σ̇(t)^2 + s'(σ(t)) σ̈(t),
where the third expression is obtained by applying the product rule from calculus. The result (<ref>) is the same as the one derived in <cit.>.
By applying (<ref>) and (<ref>) to the dynamics (<ref>), we obtain a representation with respect to σ:
R(s(σ)) u = M( s”(σ) σ̇^2 + s'(σ) σ̈ ).
Here, σ̇^2 is the square of the speed of the vessel along the path, and σ̈ is the acceleration along the path. Note that the terms s”(σ) and s'(σ) are obtained directly from the definition of the path.
The nonlinear squared speed term in (<ref>) renders the equality constraints non-convex. We will now introduce a new function that transforms (<ref>) to a convex form in a lossless manner.
Let
b(σ) = σ̇^2.
We observe that
ḃ(σ) = db(σ)/dt = db(σ)/dσ · dσ/dt = b'(σ) σ̇.
Also directly from the definition (<ref>) it follows that
ḃ(σ) = d(σ̇)^2/dt = d/dt( dσ/dt · dσ/dt) = d^2σ/dt^2 · dσ/dt + dσ/dt · d^2σ/dt^2 = 2 σ̈ σ̇.
From the equivalence of (<ref>) and (<ref>) follows the relation
b' = 2 σ̈.
Since the derivative is a linear operator, the dynamics constraint expressed as
2 R(s(σ)) u = M( 2 s”(σ) b(σ) + s'(σ) b'(σ) )
is affine in the decision variables b and u.
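The chain-rule identities used above can be verified symbolically on an arbitrary test expression for the path component (the concrete function below is only an example, not a path from the paper).

import sympy as sp

t = sp.symbols('t')
sigma = sp.Function('sigma')(t)
s_expr = sigma**3 + sp.sin(sigma)          # arbitrary test expression in sigma(t)

# d^2/dt^2 s(sigma(t)) = s'' sigma_dot^2 + s' sigma_ddot
q_ddot = sp.diff(s_expr, t, 2)
expected = sp.diff(s_expr, sigma, 2) * sp.diff(sigma, t)**2 \
         + sp.diff(s_expr, sigma) * sp.diff(sigma, t, 2)
assert sp.simplify(q_ddot - expected) == 0

# b_dot = 2 sigma_ddot sigma_dot for b = sigma_dot^2
b = sp.diff(sigma, t)**2
assert sp.simplify(sp.diff(b, t) - 2 * sp.diff(sigma, t, 2) * sp.diff(sigma, t)) == 0
print("Chain-rule identities verified.")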
|
http://arxiv.org/abs/2307.04316v3 | 20230710030524 | Accelerating Secure and Verifiable Data Deletion in Cloud Storage via SGX and Blockchain | [
"Xiangman Li",
"Jianbing Ni"
] | cs.CR | [
"cs.CR"
] |
Accelerating Secure and Verifiable Data Deletion in Cloud Storage via SGX and Blockchain
Xiangman Li and Jianbing Ni X. Li and J. Ni are with the Department of Electrical and Computer Engineering and Ingenuity Labs Research Institute, Queen's University, Kingston, Ontario, Canada K7L 3N6. Email: [email protected].
==============================================================================================================================================================================================================================================
Secure data deletion enables data owners to fully control the erasure of their data stored on local or cloud data centers and is essential for preventing data leakage, especially for cloud storage. However, traditional data deletion approaches based on unlinking, overwriting, and cryptographic key management are either ineffective in cloud storage or rely on impractical assumptions. In this paper, we present SevDel, a secure and verifiable data deletion scheme, which leverages zero-knowledge proofs to achieve the verification of the encryption of the outsourced data without retrieving the ciphertexts, while the deletion of the encryption keys is guaranteed based on Intel SGX. SevDel implements secure interfaces to perform data encryption and decryption for secure cloud storage. It also utilizes a smart contract to enforce that the operations of the cloud service provider follow the service level agreements with data owners and to impose penalties on a service provider that discloses the cloud data on its servers. Evaluation on real-world workloads demonstrates that SevDel achieves efficient data deletion verification and maintains high bandwidth savings.
Cloud storage, secure data deletion, Intel SGX, data outsourcing, verifiability.
§ INTRODUCTION
Outsourcing data to cloud storage is a common practice for data owners to avoid the burden of self-managing massive data <cit.>. The data owners can rent storage space on demand from the cloud service providers. Outsourced data management enables the data owners to access their data at any time and from anywhere. Due to these appealing features, cloud storage services such as Amazon S3 <cit.>, Google Drive <cit.>, Dropbox <cit.>, Apple iCloud <cit.>, and Microsoft OneDrive <cit.> have attracted a large number of stable and loyal users. Data security is one of the primary concerns for data owners. After data owners outsource their data to the cloud data centers, they lose physical control over their data. Thus, the data owners have no choice but to rely on the cloud service providers to protect their data. Unfortunately, due to frequently occurring data leakage and breach incidents, security is always ranked among the top threats in cloud storage <cit.>, although the cloud service providers have made great efforts to guarantee the confidentiality, integrity, and availability of the outsourced data on their cloud servers.
Data deletion <cit.>, as one of the important security technologies, has not received sufficient attention from either data owners or cloud service providers in cloud storage. It provides methods to securely erase data from local storage media or remote cloud servers, which can significantly reduce the probability of data leakage. Moreover, privacy regulations, such as GDPR <cit.>, CPPA <cit.>, and PIPL <cit.>, have clearly defined the principle of data deletion, known as the right to be forgotten. The data centers should delete personal data if it is no longer necessary for the purpose for which it was collected, and the data owners have the right to request the data centers to delete their personal data stored there. Therefore, data deletion becomes increasingly critical for the service providers, which should provide effective ways to guarantee secure data deletion.
§.§ Related Work
It is not a trivial problem to securely delete data. It is well recognized that there is no existing software-based solution that can provide complete data removal from storage media. Existing
deletion methods can be summarized in the following categories:
Deletion by unlinking. This method is widely deployed in the file management systems of operating systems, such as Windows, iOS, and Linux. When the user would like to delete a file (i.e., presses the "delete" button), the operating system deletes the link of the file from the file system and returns "success" to the user. The file is no longer accessible because the link of the file is removed. Nevertheless, this is not real file deletion, as the file content still remains on the disk. An adversary can simply use a file recovery tool to access the deleted file by scanning the disk <cit.>.
Deletion by block erasure. This method is utilized by storage media, such as solid-state drives (SSDs), to securely clean the data. It applies a voltage spike to all available flash memory blocks in unison. Each block is altered with a vendor-specific value and the SSD becomes "clean" <cit.>. However, this method erases all the data on the drive and causes a small amount of wear.
Deletion by overwriting. Overwriting is an important tool to delete data by overwriting it with new, insensitive data, e.g., all zeros. There are multiple tools that perform as many as 35 overwrite passes. However, one inherent limitation of the overwriting methods is that they cannot guarantee the complete removal of data. It is effectively impossible to sanitize storage locations by simply overwriting them, no matter how many overwrite passes are made or what data patterns are written <cit.>. The conclusion holds not only for magnetic drives, but also for tapes, optical disks, and flash-based solid state drives. In all these cases, an attacker, equipped with advanced microscopy tools, may recover overwritten data based on the physical remanence of the deleted data left on the storage medium. Therefore, although overwriting data makes the recovery harder, it does not change the basic one-bit-return protocol.
Deletion by encryption. Boneh and Lipton <cit.> proposed the first cryptography-based method for secure data deletion by encrypting data before saving it to the disk, and deleting the data by discarding the decryption key. This method is desirable when duplicate copies of data are backed up in distributed storage. It essentially changes the problem of deleting a large amount of data into the problem of deleting a short key. However, forgetting a decryption key is non-trivial. A key stored on a hard disk is not easy to permanently delete, i.e., to make unrecoverable for an adversary even if it obtains the storage medium <cit.>.
However, the problem of key deletion becomes dramatically more difficult when the cloud server performs the encryption of the outsourced data, as in traditional secure cloud storage. Although some cloud storage services enable user-side encryption, i.e., the data owners can also encrypt their data before outsourcing, server-side encryption is more general. The cloud server encrypts the data after receiving them from the data owners with an encryption key and decrypts the data that the owners would like to access before returning them to the data owners. In this model, the encryption is fully controlled by the cloud servers, which raises the data owners' concerns about the encryption of their outsourced data and the secure deletion of the decryption keys.
§.§ Contributions
In this paper, we propose a novel secure and verifiable data deletion scheme, named SevDel, for cloud storage. To reduce the concern about whether the cloud server honestly encrypts the outsourced data, we utilize random sampling and zero-knowledge proofs <cit.> to verify the encryption without retrieving the ciphertexts of the outsourced data. The encryption is also performed inside Intel SGX <cit.> to prevent possible data leakage. An enclave is created for each file for the encryption and the management of the keys. Thus, the deletion of the decryption key becomes the destruction of the enclave. In addition, to enforce that the cloud servers protect the outsourced data, a smart contract is designed based on the service-level agreements between the data owners and the cloud service providers. We demonstrate the properties of confidentiality, verifiability, erasability, and auditability of SevDel through security analysis and show that the proposed SevDel has outstanding performance for deployment.
§ SYSTEM AND SECURITY MODELS
In this section, we introduce the system model and security model of our SevDel.
§.§ System Model
We present the system model of SevDel, which comprises three kinds of entities: 1) a data owner that outsources the data to the cloud and requests to delete them after the data is processed or used; 2) a cloud service provider that offers secure cloud storage services (i.e., the outsourced data of data owners are encrypted by the service provider with its chosen secret keys or by the data owners before outsourcing) to data owners with its storage servers in the cloud data center, where each server has high-performance hard disks for data storage and an Intel Core processor that supports SGX <cit.>; and 3) a blockchain node <cit.> that participates in the blockchain network to maintain transactions that happen between two parties. The blockchain can be a public blockchain, e.g., the Bitcoin blockchain, the Ethereum blockchain, or Hyperledger. It maintains an automatically executable smart contract that enforces the penalty on the cloud service provider if it leaks the outsourced data of users.
Intel SGX <cit.>, a suite of security-related instructions built into modern Intel CPUs, can create a hardware-protected environment, enclave, for shielding the execution of code and data. An enclave resides in a hardware-guarded memory region called the enclave page cache (EPC) for hosting any protected code and data.
In enclave, SGX performs the encryption of the outsourced data with a secret key stored on the EPC. The deletion of the encrypted data for the data owner is the deletion of the secret key in enclave. More specifically, the secret in enclave is erased after the enclave is destroyed.
§.§ Threat Model
The security threats mainly come from outside attackers or data thieves. An outside attacker or a data thief may compromise the cloud server to steal the data on the hard disks. Frequent data leakage incidents in the cloud have demonstrated the risks of cloud storage services. This risk is high because of potential code vulnerabilities, and the damage is severe as data leakage incidents significantly affect reputation. Moreover, employees of the cloud service may steal the data on cloud servers. We have witnessed many data corruption or leakage incidents that occur due to operational errors or misbehavior of employees. The main security objective is to protect the cloud data of users against data leakage incidents.
A cloud service provider is the legitimate operator of the Intel SGX-enabled servers and holds the service level agreements with the data owners for maintaining outsourced data. It is expected that the cloud service provider stores the encrypted outsourced data of data owners on the hard disks of cloud servers and deletes the data upon the requests of data owners or based on the principles of privacy regulations, such as GDPR, CPPA, and PIPEDA. It is assumed that the cloud service provider does not deviate from this expectation due to the agreement with data owners, that is, the cloud service provider is rational. It follows the service level agreements to honestly offer data storage services. Undoubtedly, regulating the implementation of the agreement between the users and the cloud service provider becomes necessary.
A data owner is an honest party that rents storage space from the cloud storage services and outsources the data to the cloud servers in the data center. The data owner chooses reliable service providers for data outsourcing. According to the modes for protecting cloud data in cloud storage, e.g., Amazon S3 of Amazon Web Services, the owners can determine whether to encrypt their data before outsourcing. The data owners can use secret keys to encrypt their data before outsourcing. If the owners do not encrypt the data, the cloud server chooses the secret keys for data encryption. In this paper, we study secure data erasure for the latter case because it is trivial to achieve data deletion if the data owners encrypt their data by themselves, as they can delete their keys and then no one can read the cloud data.
§ PROPOSED SEVDEL
In this section, we propose the overview and the detailed construction of our SevDel.
§ OVERVIEW
Our SevDel accelerates the security and verifiability of cloud data erasure in cloud storage. It can serve as the central element of secure cloud storage and erasure in cloud storage services, such as Amazon S3 Find and Forget, the solution to selectively erase records from data lakes stored on Amazon S3. To prevent data leakage, the file received from the data owner is encrypted by the cloud server with a randomly selected private key using additive homomorphic encryption, such as lifted ElGamal encryption <cit.>. The encryption operation is performed in the enclave of Intel SGX. The encryption of the file is audited by the data owner to ensure that the file is correctly encrypted as claimed by the cloud service provider. Random sampling is utilized to enable probabilistic auditing of the encrypted data and the ciphertexts are aggregated to compress auditing messages. The cloud server proves to the data owner that the entire file is encrypted with lifted ElGamal encryption by a randomly chosen key with a large probability, without retrieving the encrypted file. The challenge here is to ensure that the proved ciphertext is really the encryption of the correct outsourced file. To blind the file and its ciphertext during auditing, the cloud server should prove that the plaintext of the ciphertext is the outsourced file in the homomorphic authentication tags, which are produced by the data owner and outsourced along with the file. Meanwhile, they can also be used to verify the integrity of the outsourced file based on provable data possession <cit.> or proof of retrievability <cit.>.
The deletion of the outsourced file on the cloud server is enabled by the deletion of the secret key of the file. If the secret key is permanently deleted, no one is able to decrypt the ciphertext. The secret key deletion is realized by Intel SGX. An enclave is created when the cloud server receives the file and the encryption is performed in the enclave. Also, the encryption key is stored in the enclave. To permanently forget the key, the simplest way is to destroy the corresponding enclave.
To ensure that the cloud service provider honestly maintains and encrypts the outsourced files of data owners, a smart contract is created based on the service level agreement between the service provider and data owners. The deposits of the service provider are made when the cloud storage service is bootstrapped. The deposits are paid to the data owner if the file of the data owner is found on the Internet, which means that the file has been leaked during storage. The condition to trigger the payment is the key point of the smart contract. We convert this data leakage problem into a provable data possession problem. If a data owner succeeds in giving a proof that she possesses the encrypted version of her outsourced files, the penalty is imposed on the service provider and a certain amount of the deposits is transferred to the data owner. The conversion is valid because only the cloud server has the encrypted version of the outsourced file of the data owner. The cloud server performs encryption after receiving the outsourced file and decryption before returning it to the data owner. The ciphertext of the outsourced file should be known only by the cloud server. Although the data owner knows the cleartext of the file, the data owner cannot obtain the same ciphertext, as the data encryption on the side of the cloud server is probabilistic.
Our SevDel consists of the following algorithms.
Setup: This algorithm is run by the cloud service provider to bootstrap the cloud storage systems. With the input of the security parameter, the algorithm outputs the system parameters and the public-private key pairs of the cloud servers.
Contract: This algorithm is run by the cloud service provider to initialize a smart contract that implements the service level agreement with the data owners. The smart contract is maintained by the blockchain nodes.
KeyGen: This algorithm is run by the data owner. With the input of the system parameters, the algorithm generates the public-private key pair of the data owner for data outsourcing.
Outsource: This algorithm is run by the data owner to outsource the file to the cloud server. With the input of the security parameters, the private key of the data owner, and the file to be outsourced, the algorithm produces the homomorphic authentication tags of the data blocks of the file and outsources the file along with the generated tags.
Encrypt: This algorithm is run by the cloud server that encrypts the received file with a randomly chosen private key. With the input of the file, the private key of the cloud server, and the chosen private key, the algorithm outputs the encrypted file, the corresponding public key, and the homomorphic authentication tags of the data blocks of the encrypted file.
Verify: This is an interactive protocol between the cloud server and the data owner to audit the encryption of the outsourced file. The data owner randomly samples the data blocks, and the cloud server generates a proof that proves the encryption of the sampled data blocks. The data owner finally verifies the proof to learn whether the file has been encrypted by the cloud server.
Delete: This algorithm is run by the cloud server, which deletes the file under the request of the data owner or when the data is no longer needed for data analysis.
Audit: This is an interactive protocol between the data owner and the blockchain nodes. The blockchain nodes randomly sample the data blocks owned by the data owners and the data owner responds with a proof that proves the ownership of the encrypted data blocks. Then, the blockchain nodes verify the proof to learn whether the file has been disclosed. If the proof is valid, the smart contract is executed to impose a penalty on the cloud service provider.
The correctness of SevDel covers the following aspects: 1) the encrypted outsourced file should be correctly recovered by the cloud server with the corresponding secret key; 2) the data owner can identify that the cloud server does not encrypt the outsourced file on hard disks as agreed in the service level agreement; 3) the deleted outsourced file can no longer be recovered; and 4) the blockchain node can execute the penalty if the data owners find the leaked outsourced data.
§.§ Detailed SevDel
Setup: Let q be a large prime and 𝔾_1, 𝔾_2 and 𝔾_T be three multiplicative cyclic groups of the same prime order p. g_1 and g_2 are the generators of 𝔾_1 and 𝔾_2, respectively. e:𝔾_1 ×𝔾_2 →𝔾_T denotes an admissible bilinear pairing.
The file M to be outsourced is divided into n blocks and each block is further split into s sectors. Thus, the file is denoted as M={m_ij}_i ∈ [1,n],j ∈ [1,s] and the abstract information of M is denoted as 𝕀_M.
H:{0,1 }^* →𝔾_1 is a cryptographic hash function that maps the 𝕀_M to a point in 𝔾_1.
The cloud service provider chooses a random number a ∈ℤ_p and calculates A=g_2^a∈𝔾_2. The private key of the cloud service provider is a, and the corresponding public key is A.
Contract: The service provider creates the smart contract CS-SevDel to provide cloud storage services to data owners. To provide the service, the service provider initiates CS-SevDel.Init to set up the smart contract and deposits an amount of money on the blockchain as insurance in CS-SevDel.Service. A part of the deposit would be sent to the data owner if the outsourced data is leaked and the remainder would be refunded to the service provider.
KeyGen: An data owner chooses a random number w ∈ℤ_p and calculates W=g_2^w∈𝔾_2. The private key of the data owner is w, and the corresponding public key is W.
Outsource: The data owner chooses s random values x_1,⋯, x_s ∈ℤ_p and
computes u_j=g_1^x_j∈𝔾_1 for j ∈ [1,s].
Then, for each block m_i (i ∈ [1,n]), it computes a tag t_i as
ϕ_i=(H(𝕀_M||i)·∏_j=1^su_j^m_ij)^w.
The data owner outputs the set of homomorphic authentication tags Φ={ϕ_i}_i ∈ [1,n]. The tag set Φ, the file index 𝕀_M, and the file M are sent to the cloud server.
Encrypt: After receiving (𝕀_M,M,Φ) from a data owner, the cloud server first randomly selects a private key v ∈ℤ_p and computes V=g_1^v ∈𝔾_1. The cloud server uses the random private key v to encrypt each data block of the received file m_ij as E_ij=(E'_ij,E”_ij)=(g_1^m_ijV^r_ij, g_1^r_ij), where r_ij is a random number chosen from ℤ_p. The set of the encrypted blocks is denoted as E={E_i}_i ∈ [1,n]. Then, for each encrypted block E_i (i ∈ [1,n]), the cloud server computes a homomorphic authentication tag σ_i for the encrypted block as
σ_i=(H(𝕀_M||i)·∏_j=1^su_j^E'_ijv_j^E”_ij)^a.
The set of the tags of the encrypted blocks is denoted as Σ={σ_i}_i ∈ [1,n]. Finally, the cloud server stores (𝕀_M,E, Σ) on the hard disks and uploads (𝕀_M, Σ) to the blockchain.
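For illustration only, the lifted ElGamal operations used in Encrypt can be sketched over a toy multiplicative group modulo a prime. SevDel itself works in the pairing-friendly groups 𝔾_1 and 𝔾_2 defined in Setup; the modulus, generator, and key sizes below are made up and are not secure choices.

import secrets

p = 2 ** 127 - 1                  # toy prime modulus (illustrative, not a secure parameter)
q = p - 1                         # toy exponent range
g = 3                             # assumed generator for illustration

def keygen():
    v = secrets.randbelow(q - 1) + 1
    return v, pow(g, v, p)        # private key v, public key V = g^v

def encrypt(V, m):
    r = secrets.randbelow(q - 1) + 1
    return (pow(g, m, p) * pow(V, r, p)) % p, pow(g, r, p)   # (g^m * V^r, g^r)

def decrypt(v, c):
    c1, c2 = c
    return (c1 * pow(c2, -v, p)) % p   # recovers g^m; m itself would need a small-message table

v, V = keygen()
m = 42
assert decrypt(v, encrypt(V, m)) == pow(g, m, p)
print("lifted ElGamal round-trip (up to discrete log) OK")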
Verify: To verify the encryption of the outsourced file M, the data owner takes the abstract information 𝕀_M as inputs. It selects some data blocks to construct a challenge set Q and picks a random l_i ∈ℤ_p^* for each m_i (i ∈ Q). The challenge (i, l_i)_i ∈ Q is sent to the cloud server.
To respond the challenge, the cloud server generates P_1 as
P_1=∏_i ∈ Q g_1^l_im_ijV^l_ir_ij.
The cloud server computes Q_j=∑_i ∈ Ql_i·m_ij for each j ∈ [1,s].
Then, it computes Q_2 as
P_2=∏_j=1^s ϕ_i^l_i.
π← NIZK {(Q_j, r_ij): P_1=∏_i ∈ Q g_1^l_im_ijV^l_ir_ij, P_2=∏_j=1^s ϕ_i^l_i}.
The data owner verifies the validity of the zero-knowledge proof π to determine whether the outsourced file has been encrypted or not.
Delete: The cloud server deletes the random private key v that is used to encrypt the file F by destroying the enclave that used to store v. The cloud server creates an enclave for each file received and use the enclave to maintain the private key.
Audit: If the data owner obtains the leaked encrypted file F, the data owner can prove to the blockchain nodes that the cloud server has data leakage. The blockchain node selects some data blocks to construct a challenge set R and picks a random γ_i ∈ℤ_p^* for each E_i (i ∈ R). The challenge (i, γ_i)_i ∈ R is sent to the data owner.
To respond the challenge, the data owner generates Q_1 as
Q_1=∏_i ∈ Rγ_i E_ij.
Then, it computes Q_2 as
Q_2=∏_j=1^s σ_i^γ_i.
The data owner returns (Q_1, Q_2) to the blockchain node. The blockchain node verifies (Q_1, Q_2) to determine whether the cloud server has disclosed the file F. If yes, the blockchain node performs CS-SevDel.Penalty to give penalty to the cloud service provider.
The correctness of SevDel can be checked in the following aspects: 1) the encrypted outsourced file can be correctly recovered; 2) the verification equations pass; 3) the security of Intel SGX holds; and 4) the blockchain node can execute the penalty.
§ SECURITY OF SEVDEL
The security of SevDel should capture the properties of confidentiality, verifiability, erasability, and auditability.
The confidentiality of the outsourced file relies on the semantic security of the data encryption scheme used by the cloud server. SevDel utilizes the lifted ElGamal encryption scheme to encrypt each block of the outsourced file M. Here, each block is independently encrypted with the key V. As the lifted ElGamal encryption scheme can be proved semantically secure under the Decisional Diffie-Hellman (DDH) assumption, the confidentiality of the outsourced file is achieved as long as the DDH assumption holds.
Smart Contract CS-SevDel
Init: Set state:=INIT, File:={}, Owner:={}, RU:={}, Tags:={}, Param:=Setup(1^λ).
Service: Upon receiving (“Create", N, file, A, Deposit, T_1,T_2,T_3,T_4) from a service provider 𝒮:
    Assert state=INIT.
    Assert current time T≤ T_1.
    Assert ledger|𝒮|≥ $Deposit.
    ledger|𝒮|:=ledger|𝒮|–$Deposit.
    Set state:=CREATED.
    Set Accept:=0.
    File:=File∪{𝒮,N,A,Deposit,Accept,T_j=1-4}.
Agree: Upon receiving (“Accept", 𝒰_i,N,R_i) from a data owner 𝒰_i:
    Assert state=CREATED.
    Assert T_1 ≤ T ≤ T_2.
    Assert $R_i >0.
    Assert ledger|𝒰_i|≥ $R_i.
    ledger|𝒰_i|:=ledger|𝒰_i|–$R_i.
    Set Accept:=Accept+1.
    Set state_i:=ACCEPTED.
    Owner_N:=Owner_N∪{𝒰_i}.
Claim: At current time T=T_2:
    Assert state_i=ACCEPTED.
    Assert the data outsourcing N.
    Set state:=CLAIMED.
Audit: Upon receiving (“Audit", 𝒰_i,N,c_i,d_i,σ_i,e_i,rk_i,𝒫𝒦_i) from 𝒰_i:
    Assert state=CLAIMED.
    Assert T_2≤ T≤ T_3.
    Assert 𝒰_i∈AU_N.
    Assert 𝒫𝒦_i=1.
    Set state_i:=UPLOADED.
    Set ledger|𝒰_i|:=ledger|𝒰_i|+$R_i.
    Owner_N:=Owner_N∪{𝒰_i}.
    File_N:=File_N∪{(𝒰_i,N,σ_i,e_i,rk_i)}.
Refund: If T_3≤ T≤ T_4 and Owner_N=File_N:
    Set state:=FULFILLED.
    Set ledger|𝒰_i|:=ledger|𝒰_i|+$Deposit_i.
    Assert $Deposit=∑_i=1^n$Deposit_i.
    Set state:=FINISHED.
Penalty: If T_3≤ T≤ T_4 and AU_N⊃RU_N:
    Set state:=UNFULFILLED.
    ledger|𝒰_i|:=ledger|𝒰_i|+$R^*_i, for 𝒰_i ∈RU_N.
    Assert ∑_i∈{_N-_N}$R_i =∑_i∈{_N}$R^*_i.
    Set state:=ABORTED.
Timer: If state=ABORTED and T>T_4:
    Set ledger|𝒮|:=ledger|𝒮|+$Deposit.
    Set state:=ABORTED.
Alg. 1. Smart Contract CS-SevDel
The verifiability of the data encryption is achieved based on provable data possession and zero-knowledge proofs. The data owners are able to audit the encrypted data by randomly sampling the encrypted blocks. The homomorphic authentication tags guarantee the authentication of data blocks in an aggregated way. First, the homomorphic authentication tags are created in the manner of digital signatures. They are not forgeable under the computational Diffie-Hellman assumption. Second, it is impossible to generate a proof if the cloud server does not encrypt the sampled data blocks because the proof is a linear aggregation of the tags. Therefore, the verifiability of the data encryption is realized.
The erasability of the data is achieved based on Intel SGX. An enclave is created for the file when the cloud server receives the file. The enclave is used to maintain the decryption key. The deletion of the data is achieved when the enclave is destroyed. Destroying the enclave permanently erases the information in the enclave. According to this feature, the decryption key is lost after the destruction of the enclave. Thus, the encrypted file can never be decrypted, so the file is permanently deleted.
The auditability of data leakage is achieved based on the smart contract. The smart contract ensures the automatic execution of the service-level agreement between the data owners and the cloud service providers. The condition that triggers the penalty is a data leakage incident, so the data owner needs to prove to the blockchain nodes that they hold the leaked data. This proof-generation method is the same as the one used for the data encryption proof, so both rely on the same assumption.
§ CONCLUSION
In this paper, we present a secure and verifiable data deletion scheme that leverages zero-knowledge proofs to verify the encryption of the outsourced data without retrieving the ciphertexts. The deletion of the encryption keys is guaranteed based on Intel SGX. The proposed scheme implements secure interfaces to perform data encryption and decryption for secure cloud storage, and it utilizes a smart contract to enforce that the operations of the cloud service provider follow the service-level agreements with data owners and to impose a penalty on a service provider that discloses the cloud data on its servers.
The proposed scheme enables the cloud server to handle the service-side encryption, which makes it particularly suitable for popular secure cloud storage services.
|
http://arxiv.org/abs/2307.05013v1 | 20230711050736 | Anomalies in Weak Decays of Hadrons Containing a b Quark | [
"Aidos Issadykov",
"Mikhail A. Ivanov"
] | hep-ph | [
"hep-ph"
] |
|
http://arxiv.org/abs/2307.05807v1 | 20230711211121 | Can a Chatbot Support Exploratory Software Testing? Preliminary Results | [
"Rubens Copche",
"Yohan Duarte Pessanha",
"Vinicius Durelli",
"Marcelo Medeiros Eler",
"Andre Takeshi Endo"
] | cs.SE | [
"cs.SE",
"cs.HC"
] |
Excitements and Concerns in the Post-ChatGPT Era: Deciphering Public Perception of AI through Social Media Analysis
Weihong Qi
Department of Political Science
University of Rochester
Rochester, USA
[email protected]
Jinsheng Pan, Hanjia Lyu, Jiebo Luo
Department of Computer Science
University of Rochester
Rochester, USA
{jpan24, hlyu5}@ur.rochester.edu, [email protected]
============================================================================================================================================================================================================================================================================================
Tests executed by human testers are still widespread in practice and fill the gap left by limitations of automated approaches. Among the human-centered approaches, exploratory testing is the de facto approach in agile teams. Although it is focused on the expertise and creativity of the tester, the activity of exploratory testing may benefit from support provided by an automated agent that interacts with the human testers.
This paper presents a chatbot, called BotExpTest, designed to support testers while performing exploratory tests of software applications.
We implemented BotExpTest on top of the instant messaging social platform Discord; this version includes functionalities to report bugs and issues, time management of test sessions, guidelines for app testing, and presentation of exploratory testing strategies.
To assess BotExpTest, we conducted a user study with six software engineering professionals. They carried out two sessions performing exploratory tests along with BotExpTest.
Participants were capable of revealing bugs and found the experience to interact with the chatbot positive.
Preliminary analyses indicate that chatbot-enabled exploratory testing may be as effective as similar approaches and help testers to uncover different bugs.
Bots are shown to be valuable resources for Software Engineering, and initiatives like BotExpTest may help to improve the effectiveness of testing activities like exploratory testing.
§ INTRODUCTION
Due to their practical use and broad applicability,
a myriad of bots that vary in complexity have been designed, developed, and deployed in widely varying contexts.
Over the last decade,
technological advancements have enabled bots to play an ever increasingly important role in many areas,
particularly in software development.
This emerging technology has garnered the interest of both software development researchers and practitioners,
as bots can serve as human assistants for a variety of software development-related tasks.
In this particular context,
bots that provide support for specific aspects of software development,
such as keeping project-related dependencies up-to-date,
are referred to as devbots <cit.>.
Recent developments in machine learning algorithms and natural language processing have led to
the creation of bots that provide more user-friendly experiences.
Bots that harness
natural language processing capabilities to provide more intuitive and user-friendly experiences are commonly referred to as chatbots.
As their name implies,
chatbots are software programs designed to replicate human-like conversations or interactions with users <cit.>.
As mentioned,
bots have been utilized to support various software engineering tasks <cit.>.
We set out to examine how chatbots can be leveraged to assist testers throughout the testing process.
Specifically,
we posit that chatbots are well-suited for providing assistance to testers throughout the execution of Exploratory Testing (ET) tasks.
ET is an approach to software testing that entails carrying out a series of undocumented testing sessions to
uncover faults.
ET leverages the skills and creativity of testers
while they explore the system under test (SUT),
and
the knowledge gained during ET sessions is then used to further refine the exploration.
Hence,
ET is a goal-focused, streamlined approach to testing that allows for flexibility in test design
and keeps testers engaged throughout the testing process <cit.>.
Owing to these benefits,
ET has been gaining traction as a complement to fully scripted testing strategies <cit.>:
when combined with automated testing,
ET has the potential to increase test coverage and uncover edge cases.
In fact,
there is evidence suggesting that ET can be equally or even more effective than scripted testing in practical situations <cit.>.
In practice,
before ET sessions, testers engage with other testers and developers to gather project-related information.
However,
due to the complexity of most software projects,
it becomes impractical to collect all relevant information beforehand.
As a result,
interruptions that arise during ET sessions for the purpose of gathering additional information can disrupt the flow.
One potential solution to overcome this issue is to employ a chatbot that assists testers during ET sessions,
providing guidance on the selection of input data for achieving different levels of exploration.
Furthermore,
the chatbot can encourage critical thinking and enable testers to make informed decisions.
To the best of our knowledge,
this research is the first foray into the potential of a
chatbot in maximizing the effectiveness of ET.
This paper introduces BotExpTest,
a chatbot designed to assist testers during ET sessions.
BotExpTest was built on top of the Discord platform and includes features tailored to managing ET sessions and reporting bugs and issues.
Additionally,
it incorporates features aimed at enhancing testers' ability to gain further insights that can be utilized to delve further into the exploration of the SUT.[The development and evaluation of the current version of BotExpTest took place prior to the release of ChatGPT
and other large language models (LLMs). However, in future work, we delve into the potential integration of these advanced technologies.]
To evaluate how BotExpTest performs “in the wild”,
we conducted a user study with six practitioners.
The results from the user study indicate that BotExpTest was able to help the participants to uncover several bugs.
Moreover,
the participants expressed a positive opinion about the experience and held an optimistic view regarding the potential future adoption of the tool.
§ RELATED WORK
Chatbots are becoming increasingly popular in the software development domain because they can be very versatile.
In this context, bots are frequently classified based on their capacity of supporting different activities such as code review, tests, bug fixing, verification and deployment <cit.>.
Storey et al. <cit.> surveyed developers and researchers to identify in which situations they use bots to support software engineering activities. Here is what they found:
to search and to share information,
to extract and to analyze data,
to detect and to monitor events,
to communicate in social media,
to connect stakeholders and developers,
to provide feedback, and
to recommend individual or collaborative tasks associated with software development.
Many studies have proposed bots to support software development activities.
Performobot is a chatbot-based application that helps in planning, executing and reporting the results of tasks related to load and performance testing <cit.>.
Smart Advisor is an intelligence augmentation bot that helps developers with project specifics by employing domain and knowledge modeling and in-process analytics to automatically provide important insights and answer queries using a conversational and interactive user interface <cit.>.
Repairnator is a program repair bot that creates software patches and provides an explanation for each bug fixed using natural language as a human collaborator would do <cit.>.
Tutorbot uses machine learning to retrieve relevant content, guiding software engineers in their learning journey and helping them keep pace with technology changes <cit.>.
To the best of our knowledge,
there is no specific chatbot devised to help testers to conduct exploratory testing.
In fact,
a study conducted in Estonia and Finland found out that only 25% of the software testing professionals apply ET with some tool support.
Mind mapping tools are the most frequently used software,
but testers also use text editors, spreadsheets and even actual pen and paper, in addition to checklists and paper notes (e.g. post-its) <cit.>.
In this context,
Copche et al. <cit.> introduced a specific kind of mind map called opportunity map (OM) as a way to improve the ET of mobile apps. The authors conducted a study that compares OM-based ET with a traditional session-based approach (baseline).
There are some tools to directly support activities involved in ET.
For instance, Leveau et al. <cit.> designed a tool called Test Tracker to prevent testers from running tests that have already been executed so they can run more diversified test sessions to further explore the SUT.
There have been some attempts to integrate ET with automated approaches.
For instance, Shah et al. <cit.> proposed an hybrid approach that combines the strengths of ET and scripted testing (ST). Broadly, they identified the weaknesses of ST and proposed to use the strengths of ET as a solution. Similarly, they identified the weaknesses of ET and proposed to use the strengths of ST as a solution. When it comes to ST, for example, test case design quality depends on the test designer skills.
Considering the strengths of ET,
which includes the application of domain knowledge and the observation of the system behavior for rapid feedback, the hybrid process allows the testers to explore the SUT freely and to utilize their intuitions and experience in identifying defects before writing test scripts.
§ BOTEXPTEST
This section presents the design and main features of a chatbot we developed to support ET. We set out by investigating existing work about ET,
its core practices and envisioned how a chatbot would help the tester to conduct more effective ET sessions.
To validate these ideas,
we implemented BotExpTest.
Figure <ref> shows the example of an interaction with BotExpTest:
tester Beth types ?commands and BotExpTest then shows all the commands it accepts.
§.§ Implementation
As instant messaging platforms are widely adopted and are today an essential part of software projects,
we opted to develop BotExpTest on top of them.
For this first release, we settled on using the Discord platform[<https://discord.com>].
Discord provides an open source platform with highly configurable features for users and bots.
BotExpTest is implemented as a Node.js project; it has 22 classes and around 1.3K lines of JavaScript code.
It takes advantage of the Discord API to capture interactions from testers in the chat,
as well as to generate its own messages.
To make the ET process auditable, all messages exchanged between the chatbot and the testers are recorded in a MongoDB database.
BotExpTest is available as an open source project at:
<https://github.com/rcopche/BotExpTest>
Figure <ref> presents an overview of the architecture.
The interaction starts with the tester writing a message (command) to BotExpTest via Discord (step 1).
The message passes through the Discord Developer Portal, which is then accessible by means of an API (steps 2-3).
At this point, BotExpTest interprets the message typed by the tester and reacts by sending a reply (steps 4-5).
Finally, BotExpTest's response is shown to the tester and new interactions may occur.
BotExpTest may also be the one that starts the interaction.
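As an illustration of this message flow, the sketch below wires a minimal command loop to Discord and logs every exchanged message, mirroring steps 1-5. BotExpTest itself is a Node.js/JavaScript project, so this Python version (using discord.py and pymongo) is only a rough stand-in; the database name, collection name, replies, and token are placeholders.

import discord
from pymongo import MongoClient

intents = discord.Intents.default()
intents.message_content = True                 # required to read message text
client = discord.Client(intents=intents)

# every exchanged message is stored to keep the ET process auditable
log = MongoClient("mongodb://localhost:27017")["botexptest_sketch"]["messages"]

@client.event
async def on_message(message):
    if message.author == client.user:          # ignore the bot's own replies
        return
    log.insert_one({"author": str(message.author), "content": message.content})
    # testers address the bot with messages that start with '?'
    if message.content.startswith("?commands"):
        await message.channel.send(
            "Available commands: ?manual ?charter ?start ?report ?help")
    elif message.content.startswith("?report"):
        await message.channel.send(
            "Is it a bug or an issue? Please describe it and attach screenshots.")

client.run("DISCORD_BOT_TOKEN")                # placeholder token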
§.§ Main Features
During the first interaction between the tester and BotExpTest,
the first reaction of the chatbot is to present itself,
giving some pieces of information about how to perform the next steps in the ET session.
As a convention,
testers begin an interaction with BotExpTest using a message that starts with '?'.
Adhering to this convention can be beneficial in scenarios where several testers are communicating with each other in a chat,
as it indicates when testers intend to engage with the bot.
For this version,
the features of BotExpTest were elicited, prioritized, and implemented; they are described next.
Description of the test procedure:
Using the command ?manual, BotExpTest shows a step-by-step description of how the test sessions are organized and should be conducted, as well as the main features provided to the tester by the chatbot. This is illustrated in Figure <ref>.
Charters:
In ET, charters are used to organize the tests and represent the goals that are supposed to be achieved in a test session.
BotExpTest provides an interface to set up the test charters, which are available to testers by using the message ?charter.
Besides the charter name, app name, and the goals description, it is also possible to attach images and other files related to the charter.
Time management of testing sessions:
In ET, testing sessions are conducted within a limited time frame; usually, testers need to keep track of time.
As the tester is constantly interacting in the chat, BotExpTest reminds her, from time to time, about the remaining time in the session.
To signal the start of a session, the tester should type the command ?start.
BotExpTest then asks for the time limit and starts to monitor the time elapsed in the session.
During the session, BotExpTest keeps track of all interactions that occur. Figure <ref> shows how the time alerts are presented to the tester.
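A minimal sketch of this time-management logic is given next; the 5-minute reminder interval and the function names are assumptions for illustration, not details taken from BotExpTest's actual implementation.

import asyncio

async def run_session(minutes, send):
    # `send` is any coroutine that posts a message to the tester's channel.
    remaining = minutes
    while remaining > 5:
        await asyncio.sleep(5 * 60)            # assumed 5-minute reminder interval
        remaining -= 5
        await send(f"{remaining} minutes left in this test session.")
    await asyncio.sleep(remaining * 60)
    await send("Time is up: please finish reporting your last bug or issue.")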
Bug and issue reporting:
The identification of bugs and issues is the main outcome of a test session. To avoid the need for other tools in this task, BotExpTest registers occurrences of bugs or issues.
To do so, the tester types command ?report; this task is exemplified in Figure <ref>.
The tester interacts with BotExpTest so that the charter, the type (bug or issue), a detailed description, and potential attachments (e.g., screenshots of the bug) are provided.
The current version only stores the bug report, but it is possible in the future to integrate it with external tools like GitHub, Jira or Azure DevOps.
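The sketch below illustrates how a completed ?report interaction could be persisted to MongoDB, which is where BotExpTest stores its data; the field names and database/collection names are hypothetical, and Python is used only for illustration.

from datetime import datetime, timezone
from pymongo import MongoClient

reports = MongoClient("mongodb://localhost:27017")["botexptest_sketch"]["reports"]

def save_report(tester, charter, kind, description, attachments=None):
    # kind follows the two report types supported by the chatbot
    assert kind in ("bug", "issue")
    doc = {
        "tester": tester,
        "charter": charter,
        "type": kind,
        "description": description,
        "attachments": attachments or [],      # e.g. screenshot URLs
        "reported_at": datetime.now(timezone.utc),
    }
    return reports.insert_one(doc).inserted_id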
Curated knowledge about exploratory testing:
The idea of this feature is for BotExpTest to have a curated list of resources about the use of ET techniques.
In the future, BotExpTest could be fine-tuned for specific projects so that testers are better equipped to conduct exploratory tests.
Figure <ref> shows the available resources after the tester types command ?help.
After the tester types one of the presented options, BotExpTest shows a detailed explanation of the concept and how to apply it during the tests.
For some options, the chatbot replies with questions.
We anticipate that this feature may help the testers to gain more insights and execute more effective tests.
Currently, BotExpTest provides resources related to three main groups.
Group (i) brings well-known testing criteria that may help the tester to design black-box tests. For example, classical criteria like equivalence partition and boundary-value analysis are presented.
Group (ii) is composed of strategies for ET that are well-known and have been adopted in the literature <cit.>. For example, Bad Neighborhood Tour instructs the tester to revisit buggy parts of software since bugs tend to cluster together.
Finally, Group (iii) contains guidelines for mobile app testing. For example, there are several test scenarios related to specific characteristics of mobile apps like network connections, geolocation, Bluetooth, camera, and UI events (scrolling, swipe, etc).
Active suggestions:
During a test session, BotExpTest can actively start an interaction with the tester.
The chatbot can present some piece of information obtained from the curated knowledge about exploratory testing.
We believe that actively interacting with the tester would increase the engagement with the tests.
For the current version, the decision about the timing and the information provided is made randomly. In the future, we expect that BotExpTest could be evolved to make a more informed decision about the interaction needed at a specific point in time.
§ USER STUDY
This section describes an empirical study conducted with the aim of providing an initial evaluation of BotExpTest.
To this end,
we posed the following research questions (RQs):
* RQ1: How is the interaction with BotExpTest during exploratory testing?
* RQ2: How do the participants perform with respect to the detection of bugs?
* RQ3: How do the participants perceive BotExpTest?
To answer these questions,
we conducted a user study with six participants that were asked to use BotExpTest to support the ET of a mobile app named Reminders[<https://play.google.com/store/apps/details?id=com.chegal.alarm>].
All participants work in the industry and have experience with software development and testing.
We adopted the app and related charters that were openly available from Copche et al. <cit.>; the rationale here is to make some analyses concerning similar approaches[We used the same app version, charters and length of test sessions.].
To collect the needed data and observe the participants, we set up a computer with screen recording, and mirroring the mobile device running the app under test.
Each participant was invited to use this computer in order to perform the tasks of the study.
Figure <ref> illustrates the testing environment used by the participants; the reminders app is shown on the left, while the Discord UI (along with ) is presented on the right.
Initially,
we provided the participants with detailed instructions about how to perform the testing tasks.
They were instructed to strictly follow the charter,
report any bug or issue identified, and the time management of the session was supported by BotExpTest.
All test sessions were recorded and the participants could think aloud about the tasks being carried out.
After the introduction,
the study was divided into two parts:
* Training:
this session lasted approximately 15 minutes and served as an introduction to the usage of the chatbot.
This allowed participants to become acquainted with BotExpTest and Discord,
allowing them to experiment with possible interactions and explore the supported commands.
* Test sessions: this iteration took approximately half an hour,
in which two test sessions with 15 minutes each occurred. Over the course of these two test sessions,
the chatbot-enabled ET took place.
All data was then retrieved and analyzed.
To answer RQ1,
we looked at the interactions that occurred (messages exchanged) between the participant and BotExpTest.
We called Active Interactions the ones started by BotExpTest,
while Reactive Interactions are responses to the tester's inquiries.
As for RQ2,
we analyzed and cataloged the bugs reported by participants.
In particular,
we cross-checked the bugs herein reported with the ones uncovered in Copche et al. <cit.>.
As an initial analysis,
we intended to assess whether a chatbot-enabled ET can detect a different set of bugs with respect to similar approaches like baseline and OM (see Section <ref>).
We answered RQ3 with a Likert-scale survey that intends to understand the perception of participants.
The questions are divided into (i) how easy is to interact and use , (ii) whether the user interface is adequate for the proposed functionalities, and (iii) understanding the participants' perception about the effectiveness of chatbot-enabled ET.
There is also an open question for comments and suggestions.
§.§ Analysis of Results
*RQ1 - Interactions with BotExpTest.
Table <ref> shows the number of active and reactive interactions of both the chatbot and the participants; the values are also divided between training and test sessions. Overall, BotExpTest produced 581 interactions (121 in training and 460 in the test sessions), while the six participants had 496 interactions (144 in training and 352 in the test sessions).
During training, BotExpTest was proportionally more reactive (reactive/active – 107/14) and the participants were more active (37/100); they also typed 7 invalid commands.
The most typed commands were related to the chatbot usage (?charter, ?commands, ?help, ?manual) and software testing resources.
We observed that the participants were more active in the interactions because they were focused on figuring out how to use BotExpTest.
As for the test sessions, BotExpTest was still more reactive (340/120) but proportionally had more active interactions.
This occurred due to the reporting of bugs and issues (it asks actively for more pieces of information).
This fact also impacted the participants' interactions: they were more reactive (262/90) and did not type any invalid command.
The most typed commands were related to bug and issue reporting (e.g., ?report) and management of the test sessions (e.g., ?start, ?charter).
We also observed that participants were focused on testing the app and this also limited the interactions with the chatbot.
Answer to RQ1: We observed reasonable interactions between the participants and BotExpTest. As the participants were exploring the chatbot in training, they had more active interactions. On the other hand, the interactions were more reactive and limited in the test sessions due to the time spent on exploratory testing of the app itself and on reporting bugs and issues.
*RQ2 - Bugs.
The six participants reported 31 bugs.
The most effective participant uncovered nine bugs, while one of them reported three bugs. On average, the participants detected 5.2 bugs (median: 5). The distribution of bugs detected per participant can be seen in Figure <ref>, the boxplot/violin in the middle.
Figure <ref> also shows the results for the baseline and OM approaches <cit.>.
Observe that the average and median values of BotExpTest are slightly greater than those of the baseline (avg: 2.7, median: 3) and OM (avg: 4.3, median: 4).
The range of values is also smaller, so BotExpTest produced a more uniform performance across participants.
Due to the differences of samples and participants' experience, these results may not be generalized.
One may argue that the bug detection capability of BotExpTest is at least comparable to that of approaches without any support (i.e., baseline) or with some supporting artifact (i.e., OM).
Figure <ref> shows a Venn diagram with the unique bugs detected by the approaches,
numbers between parentheses are bugs tracked to specific suggestions of the approach.
Out of the 31 bugs reported by the participants using BotExpTest,
21 were unique since some reported the same bug.
Eight bugs have been uncovered in the Copche et al. study (1 by baseline, 3 by OM and 4 by both),
but 13 yet-unknown bugs were detected in this study.
We were able to map three out of these 13 bugs to specific insights provided by the chatbot.
Answer to RQ2:
The participants were capable of uncovering an average of 5.2 bugs using BotExpTest.
Their performance is comparable to similar approaches evaluated in the literature.
Furthermore,
BotExpTest supported the participants in detecting 13 previously unknown bugs.
*RQ3 - Participants' perception.
For part (i) of the questions, we asked about the easiness of finding information, whether proper instructions are provided, and usability in general. All participants agreed or strongly agreed that BotExpTest is easy to use and interact with.
As for part (ii), the questions asked about specific features like bug/issue reporting and time management of test sessions. Most participants strongly agreed or agreed that the user interface for those features is adequate. In particular, the bug/issue reporting was unanimously well-evaluated (all strongly agreed).
For the last part of the questions, the responses indicated that participants would use a chatbot in similar tasks, and they perceived more organized test sessions. They also thought that BotExpTest helped them to find more bugs.
We also asked if the chatbot helped them to understand new concepts of software testing, and whether the suggestions made by BotExpTest were helpful; all participants strongly agreed or agreed with those statements.
From the open question and our observations, we draw the following thoughts.
Participants believed that the chatbot helped to shorten the time spent with process tasks, saving more time to test the app.
BotExpTest worked as a rich and centralized source of testing information; participants sometimes used the message history to revisit decisions and bugs detected.
One suggested that it could support novice programmers testing their software, and another mentioned that it could help teams without QAs.
Finally, there were suggestions to add support for other testing tasks, like managing test scripts, tracking the status of test executions, and communication with stakeholders.
Answer to RQ3: The participants perceived BotExpTest as a valuable resource while performing exploratory testing. Overall, the participants' perceptions were positive concerning the features, the ease of interaction, and testing resources.
§ CONCLUDING REMARKS
This paper presents an initial effort on using chatbots to support exploratory software testing. We implemented the first version of BotExpTest and evaluated it with six software development professionals. The results gave evidence that chatbot-enabled ET has the potential to be as effective as similar approaches, and it received positive feedback from the participants.
We recognize the limitations of this study, yet we intend to draw two main future initiatives from the preliminary findings obtained.
First, future replications and extended controlled experiments are needed to better assess the impact of chatbots in software testing.
Then, BotExpTest could be evolved with modern technologies so that it improves its testing support and interaction capabilities.
One direction is to include the ability of observing the SUT (using e.g. monitoring or dynamic analyses as in <cit.>). This ability would be used to feed the chatbot and provide more educated insights.
On the human-bot interaction side, LLMs (like ChatGPT, Bard) and other similar technologies could be adopted to make the conversations more human-like, and provide a broader (and personalized) access to software testing knowledge.
§ ACKNOWLEDGMENTS
Andre T. Endo is partially supported by grant #2023/00577-8, São Paulo Research Foundation (FAPESP).
Yohan Pessanha is supported by grant #2022/13469-6, São Paulo Research Foundation (FAPESP) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) grant 88887.801592/2023-00.
|
http://arxiv.org/abs/2307.04640v1 | 20230710153623 | Properties of the $η_q$ leading-twist distribution amplitude and its effects to the $B/D^+ \toη^{(\prime)}\ell^+ ν_\ell$ decays | [
"Dan-Dan Hu",
"Xing-Gang Wu",
"Hai-Bing Fu",
"Tao Zhong",
"Zai-Hui Wu",
"Long Zeng"
] | hep-ph | [
"hep-ph"
] |
[email protected]
[email protected]
Department of Physics, Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, P.R. China
[email protected]
[email protected]
[email protected]
Department of Physics, Guizhou Minzu University, Guiyang 550025, P.R. China
[email protected]
Department of Physics, Chongqing Key Laboratory for Strongly Coupled Physics, Chongqing University, Chongqing 401331, P.R. China
The η^(')-mesons in the quark-flavor basis are mixtures of two mesonic states |η_q⟩=|u̅ u+d̅ d⟩/√(2) and |η_s⟩=|s̅ s⟩. In the previous work, we have made a detailed study on the η_s leading-twist distribution amplitude. As a sequential work, in the present paper, we fix the η_q leading-twist distribution amplitude by using the light-cone harmonic oscillator model for its wave function and by using the QCD sum rules within the QCD background field to calculate its moments. The input parameters of η_q leading-twist distribution amplitude ϕ_2;η_q at an initial scale μ_0∼ 1 GeV are then fixed by using those moments. The sum rules for the 0_ th-order moment can also be used to fix the magnitude of η_q decay constant, which gives f_η_q=0.141±0.005 GeV. As an application of the present derived ϕ_2;η_q, we calculate the transition form factors B(D)^+ →η^(') by using the QCD light-cone sum rules up to twist-4 accuracy and by including the next-to-leading order QCD corrections to the twist-2 part, and then fix the related CKM matrix element and the decay width for the semi-leptonic decays B(D)^+ →η^(')ℓ^+ ν_ℓ.
13.25.Hw, 11.55.Hx, 12.38.Aw, 14.40.Be
Properties of the η_q leading-twist distribution amplitude and its effects to the B/D^+ →η^(')ℓ^+ ν_ℓ decays
Long Zeng
August 12, 2023
============================================================================================================
§ INTRODUCTION
The mixing of η and η' mesons is essential to disentangle the standard model (SM) hadronic uncertainties with the new physics beyond the SM. It involves the dynamics and structure of the pseudoscalar mesons that has two mixing modes η-η' and η-η'-G, both of which have important theoretical significance. These mixings are caused by the QCD anomalies and are related to the breaking of chiral symmetry. However, since the matrix element of the exception operator is mainly non-perturbative, it still has not been calculated reliably. One may turn to phenomenological studies to obtain useful information on the non-perturbative QCD theory <cit.>. At present, the η-η'-G mixing mode has been studied in detail in Refs <cit.>. As for the η-η' mixing model, one can investigate it by using two distinct schemes, namely the singlet-octet (SO) scheme and the quark-flavor (QF) scheme. These two schemes reflect different understandings of the essential physics and they are related with a proper rotation of an ideal mixing angle <cit.>. Practically, a dramatic simplification can be achieved by adopting the QF scheme <cit.>, especially, the decay constants in the quark-flavor basis simply follow the same pattern of the state mixing due to the OZI-rule. In QF scheme, the physical meson states |η⟩ and |η'⟩ are related to the QF basis |η_q⟩=|u̅u+d̅ d⟩/√(2) and |η _s⟩=|s̅ s⟩ by an orthogonal transformation <cit.>,
[ |η⟩; |η'⟩ ] = [ cosϕ  -sinϕ; sinϕ  cosϕ ][ |η_q⟩; |η_s⟩ ],
where ϕ is the mixing angle. In the present paper, we shall adopt the QF scheme to do our analysis and to achieve a better understanding of the mixing mechanism between η and η'.
The B(D)→η^(') transitions are important, since they involve b→ u and c→ d transitions and are sensitive to the CKM matrix elements |V_ ub| and |V_ cd|. A more accurate determination of |V_ ub| and |V_ cd| would improve the stringency of unitarity constraints on the CKM matrix elements and provides an improved test of standard model (SM). Many measurements on |V_ ub| and |V_ cd| have been done according to various decay channels of B(D)-mesons <cit.>. Compared with the non-leptonic B(D)-meson decays, the semi-leptonic decays D^+ →η^(')ℓ^+ ν_ℓ <cit.> and B^+ →η^(')ℓ^+ ν_ℓ <cit.> are much simpler with less non-perturbative effects and can serve as helpful platforms for exploring the differences among various mechanisms.
As key components of the B(D)→η^(') semileptonic decays, the B(D)→η^(') transition form factors (TFFs) need to be precisely calculated, whose main contribution comes from the |η_q⟩-component (the |η_s⟩-component gives negligible contribution here, but will have sizable contribution for B_s (D_s) decays <cit.>). By further assuming SU_ F(3) symmetry, the TFFs f_+^B(D)→η^(') satisfy the following relation <cit.>
f_+^B(D)→η = cosϕ f_+^B(D)→η_q,
f_+^B(D)→η' = sinϕ f_+^B(D)→η_q.
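These two relations amount to a simple rotation of the quark-flavor TFF by the mixing angle ϕ. As a minimal numerical illustration (a sketch only: the default ϕ=41.2° is the value adopted later in this paper, and the input TFF value below is purely illustrative):

import math

def mix_tff(f_plus_eta_q, phi_deg=41.2):
    # rotate the quark-flavor TFF into the physical eta / eta' TFFs
    phi = math.radians(phi_deg)
    return math.cos(phi) * f_plus_eta_q, math.sin(phi) * f_plus_eta_q

f_eta, f_eta_prime = mix_tff(0.4)              # 0.4 is an arbitrary illustrative input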
The TFFs of the heavy-to-light transitions at large and intermediate momentum transfers are among the most important applications of the light-cone sum rules (LCSR) approach. Using the LCSR approach, a two-point correlation function will be introduced and expanded near the light cone x^2 → 0, whose transition matrix elements are then parameterized as the light meson's light-cone distribution amplitudes (LCDAs) of increasing twists <cit.>. It is thus important to know the properties of the LCDAs.
In the present paper, we will adopt the light cone harmonic oscillator (LCHO) model for the η_q leading-twist LCDA ϕ_2;η_q. The LCHO model is based on the Brodsky-Huang-Lepage (BHL) prescription <cit.> [The BHL-prescription is obtained in this way by connecting the equal-time wavefunction in the rest frame and the wavefunction in the infinite momentum frame, which indicates that the LCWF should be a function of the meson's off-shell energy.] for the light-cone wavefunction (LCWF), which is composed of the spin-space LCWF and the spatial one. The LCDA can be obtained by integrating over the transverse momentum from the LCWF. The parameters of ϕ_2;η_q at an initial scale will be fixed by using the derived moments of the LCDA, which will then be run to any scale region via proper evolution equation. Its moments will be calculated by using the QCD sum rules within the framework of the background field theory (BFTSR) <cit.>. The QCD sum rules method suggests to use the non-vanishing vacuum condensates to represent the non-perturbative effects <cit.>. The QCD background field approach provides a description for those vacuum condensates from the viewpoint of field theory <cit.>. It assumes that the quark and gluon fields are composed of the background fields and the quantum fluctuations around them. And the vacuum expectation values of those background fields describe the non-perturbative effects, while the quantum fluctuations represent the calculable perturbative effects. As a combination, the BFTSR approach provides a clean physical picture for separating the perturbative and non-perturbative properties of the QCD theory and provides a systematic way to derive the QCD sum rules for hadron phenomenology. At the present, the BFTSR approach has been successfully applied for dealing with the LCDAs of various mesons, some recent examples can be found in Refs.<cit.>.
The remaining parts of the paper are organized as follows. In Sec. <ref>, we give the calculation technology for the moments of the η_q leading-twist LCDA ϕ_2;η_q by using the BFTSR approach, give a brief introduction of the LCHO model of ϕ_2;η_q, and then give the LCSR to the semi-leptonic decay B(D)^+ →η_qℓ^+ ν_ℓ. In Sec. <ref>, we first determine the parameters of ϕ_2;η_q. Finally, the TFF, the decay width and the CKM matrix element of the semi-leptonic decay B(D)^+→η^(')ℓ^+ ν_ℓ will be discussed. We will also compare our results with the experimental data and other theoretical predictions. Sec. <ref> is reserved for a summary.
§ CALCULATION TECHNOLOGY
§.§ Determination of the moments ⟨ξ _2;η_q^n⟩ of the η_q twist-2 LCDA using the BFTSR
To determine the distribution amplitude, one can calculate firstly the moment of the distribution amplitude. The η^(') meson twist-2 LCDA is defined as <cit.>
⟨ 0|Ψ̅(z) C_i[z, - z] z γ _5Ψ ( - z)|η^(') (q)⟩
= i(z · q)f_η∫_0^1 dxe^i(2x - 1)(z · q)ϕ _2;η^(')(x,μ)
where Ψ=(u,d,s) represents the triplet of the light-quark fields in the flavour space, [z,-z] is the path-ordered gauge connection which ensures the gauge invariance of the operator, and ϕ _2;η^(')(x,μ) is the twist-2 LCDA of the η meson with respect to the current whose flavour content is given by C_i(i= q,s). And we have C_q=(√(2) C_1+ C_8)/√(3) and C_s=( C_1-√(2) C_8)/√(3) with C_1=1/√(3) and C_8=λ_8/√(2) which are derived in singlet-octet scheme <cit.>, where λ_8 is the standard Gell-Mann matrix and 1 is 3×3 unit matrix. The η^(')-meson twist-2 two-quark LCDAs are symmetric in the QF basis <cit.>.
In line with the implementation of the QF scheme for the η_q twist-2 LCDA, an approximation is implicitly adopted, i.e. ⟨ 0|Ψ̅(z) C_q [z, - z] zγ_5 Ψ(-z)|η_q(q)⟩ = ⟨ 0|u̅(z)[z, - z] zγ_5 d( - z)|π ^ - (q)⟩ <cit.>. That is, the definition of the η_q meson is the same as that of the π^0 meson. According to the definition, we have
C_q/√(2)⟨ 0|[u̅(0) zγ_5 (iz · D)^nu(0) + d̅(0) zγ_5 (iz · D )^nd(0)]|η_q(q)⟩
= i(z· q)^n+1f_η_q⟨ξ _2;η_q^n⟩|_μ ,
where μ is an initial scale. The η_q twist-2 LCDA ϕ _2;η_q and the n_ th-order moment satisfy the equation,
⟨ξ _2;η_q^n⟩|_μ = ∫_0^1 dx (2x-1)^n ϕ _2;η_q (x,μ).
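As a quick numerical illustration of this definition, the sketch below evaluates ⟨ξ^n⟩ for a given model DA by direct integration; the asymptotic DA 6x(1-x) is used only as a stand-in for ϕ_2;η_q.

from scipy.integrate import quad

def moment(n, phi):
    # <xi^n> = int_0^1 dx (2x-1)^n phi(x)
    value, _ = quad(lambda x: (2.0 * x - 1.0) ** n * phi(x), 0.0, 1.0)
    return value

phi_asymptotic = lambda x: 6.0 * x * (1.0 - x)  # stand-in DA
print(moment(0, phi_asymptotic))                # 1.0, the normalization
print(moment(2, phi_asymptotic))                # 0.2, the asymptotic <xi^2>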
Once the insert current is determined, the first step is to construct the correlation function (correlator)
Π _2;η_q^(n,0) = i∫d^4xe^iq · x⟨ 0|T{J_n(x),J_0^† (0)} |0⟩
= (z · q)^n + 2Π _2;η_q^(n,0)(q^2),
For the QF basis, one may have two independent axial vector currents J_μ5^q (q=u,d) and J_μ5^s. We have discussed J_μ5^s in our previous work <cit.> for the case of η_s, and in this paper, we will focus on J_μ5^q (q=u,d) for the present case of η_q. Then the required currents in the correlator can be defined as J_n(x) = C_q/√(2) [u̅(x) zγ_5 (iz · D )^nu(x) +d̅(x) zγ_5 (iz · D )^nd(x)]=u̅(x) zγ_5 (iz · D )^nd(x) <cit.>, where z^2=0. It is found that even moments are non-zero and the odd moments of the LCDA are zero because of the G-parity, then only the n=(0,2,4,…) will be considered.
For the second step, the correlator can be calculated by inserting a complete set of intermediate hadronic states in the physical region. Based on the quark-hadron duality, the hadronic expression can be obtained:
Im I_2;η_q, Had^(n,0)(q^2) = πδ (q^2 - m̃_η_q^2)f_η_q^2⟨ξ _2;η_q^n⟩|_μ⟨ξ _2;η_q^0⟩|_μ
+ π 3/[4π^2(n + 1)(n + 3)] θ(q^2 - s_η_q).
Because of the SU(3) flavour symmetry, m̃_η_q here is the η_q effective mass <cit.>, f_η_q is the decay constant of η_q and s_η_q stands for the continuum threshold.
For the third step, one can apply the operator product expansion (OPE) to deal with the correlator in the deep Euclidean region. It is calculable and can be carried out within the framework of BFTSR. Detailed calculation processes can be found in Ref. <cit.>. The fourth step is to match the hadron expression corresponding to the correlator and the results obtained by OPE using the dispersion relation. After applying the Borel transformation for both sides so as to suppress the unwanted contributions from the even higher-order dimensional condensates, the sum rules for the moments of the η_q leading-twist LCDA ϕ _2;η_q(x,μ ) can be finally obtained, which takes the following form
⟨ξ _2;η_q^n⟩ |_μ⟨ξ _2;η_q^0 ⟩|_μ = M^2/f_η_q^2 e^m̃_η_q^2/M^2{3/4π^2(n+1)(n+3)(1-e^-s_η_q/M^2) + (m_u + m_d)⟨q̅q⟩/M^4 + ⟨α_sG^2⟩/12πM^41 + nθ (n - 2)/n + 1
- (m_u + m_d)⟨ g_sq̅σ TGq⟩/M^68n + 1/18 + ⟨ g_sq̅q⟩^2/M^6 4(2n + 1)/18 - ⟨ g_s^3fG^3⟩/M^6 nθ (n - 2)/48π ^2 + ⟨ g_s^2q̅q⟩^2/M^6 2 + κ ^2/486π ^2
×{ - 2 (51n + 25)( - lnM^2/μ^2) + 3 (17n + 35) + θ (n - 2)[2n( - lnM^2/μ^2)+ 49n^2 + 100n + 56/n
- 25(2n + 1)[ψ(n + 1/2) - ψ(n/2)+ ln 4]]}}.
It has been shown that, since the anomalous dimension of the n_ th-order moment grows with the increment of n, the contributions of the higher moments at large momentum transfer are highly suppressed. Thus one only needs to calculate the first few of them. Specifically, the sum rule for the 0_ th-order moment is
(⟨ξ_2;η_q^0⟩|_μ )^2= M^2/f_η_q^2e^m̃_η_q^2/M^2{1/4 π^2(1 - e^-s_η_q/M^2)
+ (m_u + m_d)⟨q̅q⟩/M^4 - (m_u + m_d)⟨ g_sq̅σ TGq⟩/18 M^6
+ ⟨α_s G^2 ⟩/12π M^4 +4⟨ g_sq̅q⟩^2/18M^6 + ⟨ g_s^2q̅q⟩^2/M^6 2+κ^2/486π ^2
×[-50(-lnM^2/μ^2)+105]}.
Due to the particular quark composition of the η-meson, we take the η_q mass appearing in Eqs. (<ref>) and (<ref>) as its effective mass of 370 MeV <cit.>. We use the relation ⟨ξ _2;η_q^n⟩ |_μ =⟨ξ_2;η_q^n⟩ |_μ⟨ξ _2;η_q^0⟩|_μ/√((⟨ξ _2;η_q^0⟩ |_μ )^2) to calculate the moments <cit.>. The decay constant is an important input for the B(D)→η^(') TFFs, and it has been calculated with different methods such as the LCSR <cit.>, the QCD sum rules (QCD SR) <cit.>, the light-front quark model (LFQM) <cit.>, the lattice QCD (LQCD) <cit.>, the Bethe-Salpeter (BS) model <cit.>, the relativistic quark model (RQM) <cit.>, the non-relativistic quark model (NRQM) <cit.>, etc. Those studies show that f_η_q lies within a broad range, [0.130,0.168] GeV. In the present work, the sum rule for the η_q decay constant can be obtained inversely by using Eq.(<ref>). The ⟨ξ _2;η_q ^0⟩ |_μ should be normalized in a suitable Borel window, which will be treated as an important criterion for determining the η_q decay constant.
§.§ The LCHO model for η_q twist-2 LCDA
The meson's LCDA can be derived from its light-cone wave-function (LCWF) by integrating its transverse components. It is helpful to construct the η_q leading-twist LCWF and then get its LCDA <cit.>. Practically, the η_q wave-function can be constructed by using the BHL prescription, and the LCHO model takes the form <cit.>:
ψ_2;η_q(x,𝐤_) = χ _2;η_q (x,𝐤_)ψ _2;η^R(x,𝐤_),
where 𝐤_ is the η_q transverse momentum, χ _2;η_q (x,𝐤_) stands for the spin-space WF that comes from the Wigner-Melosh rotation, and the spatial WF ψ _2;η_q ^R(x,𝐤_) comes from the approximate bound-state solution in the quark model for η_q, whose detailed expressions can be found in Ref. <cit.>.
ϕ _2;η_q(x,μ) = 2√(6)/f_η_q∫_0^|𝐤_|^2 ≤μ^2d^2 𝐤_/16π^3ψ _2;η_q(x,𝐤_),
and by integrating over the transverse momentum 𝐤_, one can get the twist-2 LCDA ϕ _2;η_q(x,μ ), which can be read off,
ϕ _2;η_q(x,μ) = [√(3) A_2;η_q m_q β _2;η_q/(2√(2)π^3/2 f_η_q)] √(xx̅) φ _2;η_q(x)
×{ Erf[√((m_q^2 + μ^2)/(8β _2;η_q^2xx̅))] - Erf[√(m_q^2/(8β _2;η_q^2xx̅))]},
where q=(u, d) and m_q is the constituent quark mass. The main difference among the model parameters in the literature is the constituent quark mass, i.e. m_u=m_d=250 MeV in the spin-averaged meson mass scheme <cit.>, m_u=m_d=330 MeV in the invariant meson mass scheme <cit.>, and the simplest choice m_u=m_d=300 MeV in Refs. <cit.>. In principle, the hadronic wavefunction determines all properties of the hadron. From the relations between the wavefunction and measurable quantities, one can obtain some constraints on its general properties. We will constrain the parameters A_2;η_q and β _2;η_q according to the following two constraints.
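For completeness, a small sketch evaluating the above expression is given here. It assumes the Gegenbauer-type longitudinal part φ_2;η_q(x) introduced below, and the model parameters A_2;η_q, β_2;η_q, B_2 and B_4 must be supplied from Table <ref>; they are left as arguments rather than hard-coded.

import numpy as np
from scipy.special import erf, eval_gegenbauer

def phi_2_eta_q(x, mu, A, beta, B2, B4, mq=0.30, f_eta_q=0.141):
    # longitudinal part: 1 + B2*C_2^{3/2}(2x-1) + B4*C_4^{3/2}(2x-1)
    xi = 2.0 * x - 1.0
    varphi = 1.0 + B2 * eval_gegenbauer(2, 1.5, xi) + B4 * eval_gegenbauer(4, 1.5, xi)
    xbar = 1.0 - x
    prefac = (np.sqrt(3.0) * A * mq * beta
              / (2.0 * np.sqrt(2.0) * np.pi ** 1.5 * f_eta_q)) * np.sqrt(x * xbar)
    bracket = (erf(np.sqrt((mq ** 2 + mu ** 2) / (8.0 * beta ** 2 * x * xbar)))
               - erf(np.sqrt(mq ** 2 / (8.0 * beta ** 2 * x * xbar))))
    return prefac * varphi * bracket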
For both pseudoscalar and vector mesons, one constraint on the wavefunction comes from the leptonic decay processes. The WF normalization condition is provided by the process η_q→μν:
∫_0^1 dx∫d^2 𝐤_/16π^3ψ_2;η_q(x,𝐤_) = f_η_q/2√(6).
The second constraint is the most natural one: the probability of finding the qq̅ Fock state in a meson should be not larger than 1,
P_η_q =∫_0^1 dx∫d^2 𝐤_/(16π^3) |ψ_2;η_q(x,𝐤_)|^2
= A_2;η_q^2 m_q^2/(32π^2) ∫_0^1 dx [φ _2;η_q(x)]^2 Γ[0,m_q^2/(4β_2;η_q^2xx̅)].
Since the pionic twist-2 wavefunction corresponds to the probability P_π≈ 0.3 <cit.>, we adopt P_η_q≈0.3 to carry out the following calculation. Equivalently, one can replace the constraint (<ref>) by the quark transverse momentum ⟨𝐤_ ^2⟩ _η_q, which is measurable and is defined as <cit.>
⟨𝐤_ ^2⟩ _η_q = ∫_0^1 dx ∫d^2 𝐤_/(16π^3) |𝐤_|^2 |ψ_2;η_q^R(x,𝐤_)|^2/P_η_q
=∫_0^1 dx 4exp[-m_q^2/(4xx̅β _2;η_q^2)] xx̅β_2;η_q^2/Γ[0,m_q^2/(4xx̅β_2;η_q^2)] - m_q^2,
where the incomplete gamma function Γ[s,x] = ∫_x^∞ t^(s-1) e^-t dt.
The function φ _2;η_q(x) determines the dominant longitudinal behavior of ϕ _2;η_q(x,μ^2), and it can be expanded as a Gegenbauer series:
φ _2;η_q(x) =[1 + ∑_n B_n × C_n^3/2(2x - 1) ],
For self-consistency, it has been found that the parameters B_n are close to their corresponding Gegenbauer moment, i.e. B_n ∼ a_n, especially for the first few ones <cit.>. The η_q meson Gegenbauer moments can be calculated by the following way
a_2;η_q^n(μ)=∫_0^1 dxϕ _2;η_q(x,μ)C_n^3/2(2x-1)/∫_0^1 dx6x(1-x)[C_n^3/2(2x-1)]^2
The Gegenbauer moments a_2;η_q^n(μ) and the DA moments ⟨ξ _2;η_q^n⟩ |_μ satisfy the following relations
⟨ξ _2;η_q^2⟩ |_μ =1/5+12/35a_2;η_q^2(μ)
⟨ξ _2;η_q^4⟩ |_μ =3/35+8/35a_2;η_q^2(μ)+8/77a_2;η_q^4(μ)
···
By using the sum rules (<ref>) of ⟨ξ _2;η_q^n⟩ |_μ, one can determine the values of a_2;η_q^n(μ), which then can be used to fix the values of B_n. In the following we will adopt the given two Gegenbauer moments a^2,4_2;η_q to fix the parameters B_2,4.
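Since these relations are linear, the first two Gegenbauer moments follow immediately from the measured moments; the short sketch below performs this inversion (for the moments quoted below, ⟨ξ^2⟩≈0.253 and ⟨ξ^4⟩≈0.127, it reproduces a_2≈0.16 and a_4≈0.06 up to rounding).

def gegenbauer_from_moments(xi2, xi4):
    # invert <xi^2> = 1/5 + 12/35 a_2 and <xi^4> = 3/35 + 8/35 a_2 + 8/77 a_4
    a2 = (xi2 - 1.0 / 5.0) * 35.0 / 12.0
    a4 = (xi4 - 3.0 / 35.0 - 8.0 / 35.0 * a2) * 77.0 / 8.0
    return a2, a4

print(gegenbauer_from_moments(0.253, 0.127))   # roughly (0.16, 0.06)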
§.§ The B(D)^+→η_q ℓ^+ ν_ℓ TFFs using the LCSR
The LCSR approach is an effective tool in determining the non-perturbative properties of hadronic states. Here and after, we use the symbol “H” to indicate the B(D)-meson for convenience. Following the LCSR approach, one should first construct a correlator with the weak current and a current with the quantum numbers of the H meson that are sandwiched between the vacuum and η_q state. More explicitly, for H→η_q, we need to calculate the correlator
Π_μ(p,q) =i∫d^4 xe^iqx⟨η_q (p)|T{u̅(x)γ _μQ(x), j_H(0)} |0⟩
= Π[q^2, (p+q)^2] p_μ + Π̃[q^2, (p+q)^2] q_μ.
where j_H=(m_Q Q̅ iγ_5 d) with Q = (b,c)-quark for (B,D) meson, respectively. The LCSR calculation for the B(D)^+ →η_q TFFs is similar to the case of B_s(D_s)→η_s, which has been done in Ref.<cit.>. In the following, we will give the main procedures for self-consistency, and the interesting reader may turn to Ref.<cit.> for more detail.
The dual property of the correlator (<ref>) is used to connect the two different representations in different momentum transfer regions. In the time-like region, one can insert a complete set of the intermediate hadronic states in the correlator and obtain its hadronic representation by isolating out the pole term of the lowest meson state, i.e.
Π_μ^ had(p,q)=⟨η_q (p)|u̅γ_μ Q|H(p+q)⟩⟨ H(p+q)|Q̅iγ_5q|0⟩/m_H^2-(p+q)^2
+∑_ H⟨η_q (p)|u̅γ_μ Q|H^ H(p+q)⟩⟨ H^ H(p+q)|Q̅ iγ_5q|0⟩/m_H^ H^2-(p+q)^2
= Π^ had[q^2,(p+q)^2]p_μ+Π^ had[q^2,(p+q)^2]q_μ,
where the superscript “had" and “H" stand for the hadronic expression of the correlator and the continuum states of heavy meson, respectively. Here, the decay constant of B(D)-meson is defined via the equation, ⟨ H|Q̅iγ_5q|0⟩ = m_H^2 f_H/m_Q, and by using the hadronic dispersion relations in the virtuality (p+q)^2 of the current in the B(D) channel, we can relate the correlator to the H→η_q matrix element <cit.>
⟨η_q (p)|u̅γ_μ Q| H(p+q)⟩ = 2p_μ f^H→η_q_+(q^2)
+ q_μ( f^H→η_q_+(q^2) + f^H→η_q_- (q^2)).
Due to chiral suppression, only the first term contributes to the semileptonic decay of H→η_q with massless leptons in the final state. Then, the hadronic expression for the invariant amplitude can be written as
Π[q^2,(p+q)^2] = 2m_H^2 f_H f_+^H→η_q (q^2)/ [m_H^2 - (p+q)^2]
+ ∫_s_0^∞ ds ρ^ H (q^2,s)/[s - (p+q)^2],
where s_0 is the continuum threshold parameter and ρ^ H is the hadronic spectral density.
In the space-like region, the correlator can be calculated by using the operator production expansion (OPE). The OPE near the light cone x^2 ≈ 0 leads to a convolution of perturbatively calculable hard-scattering amplitudes and universal soft LCDAs. Since the contributions of the three-particle part is small <cit.>, we only calculate the two-particle part here, and the corresponding matrix element is <cit.>
⟨η_q (p)|u̅_α ^i(x)d_β ^j(0)|0⟩ = iδ ^ij/12f_η_q∫_0^1 due^iup · x{[ pγ _5]_βαϕ _2;η_q
(u)-[γ _5]_βαμ _η_qϕ _3;η_q ^p(u) + 1/6[σ _ντγ _5]_βαp_νx_τμ _η_qϕ _3;η_q ^σ (u)
+ 1/16 [ pγ _5]_βαx^2ϕ _4;η_q (u) - i/2[ xγ _5]_βα∫_0^u ψ _4;η_q (v)dv}
In the light-cone expansion for q^2, (p+q)^2 ≪ m_b^2 (or m_c^2), the correlator Π^ OPE can be written in the general form
Π^ OPE[q^2,(p+q)^2] = F_0(q^2,(p+q)^2)
+ α_s C_F/4π F_1(q^2,(p+q)^2).
In the above equation, the first term is the leading-order (LO) for all the LCDAs' contributions, and the second term stands for the gluon radiative corrections to the dominant twist-2 parts.
After an analytic continuation of the light-cone expansion to physical momenta using a dispersion relations, one equates the above two representations by the assumption of quark-hadron duality. Then to get the final LCSR, we need to do the Borel transformation, which results in
f^H→η_q_+ (q^2) = e^m_H^2/M^2/2m_H^2 f_H[ F_0(q^2,M^2,s_0)
+α_s C_F/4π F_1(q^2,M^2,s_0)],
where F_0 (F_1) represents the leading-order (LO) or next-to-leading order (NLO) contribution, respectively. Our final LCSR for the H→η_q TFF is
f^H→η_q_+(q^2) = m_Q^2 f_η_q/2m_H^2 f_He^m_H^2/M^2∫_u_0^1 du e^-s(u)/M^2{ϕ_2;η_q(u)/u + μ_η_q/m_Q[ϕ_3;η_q^p(u) + 1/6 (2ϕ_3;η_q^σ (u)/u - m_Q^2+q^2-u^2m_η^2/m_Q^2-q^2+u^2m_η^2
×d/duϕ_3;η_q^σ (u) +4um_η ^2m_Q^2/(m_Q^2 - q^2 + u^2m_η^2)^2ϕ_3;η_q^σ (u))] + 1/m_Q^2-q^2+u^2 m_η^2[uψ_4;η_q(u) +(1-
2 u^2 m_η^2/m_Q^2 - q^2 + u^2 m_η^2)
×∫_0^u dv ψ_4;η_q(v) - m_Q^2/4u/m_Q^2 - q^2 + u^2m_η^2(d^2/du^2 - 6um_η ^2/m_Q^2-q^2+u^2m_η^2 d/du + 12um_η^4/(m_Q^2 - q^2 + u^2m_η ^2)^2)ϕ_4;η_q(u)]
+ α_s C_F e^m_H^2/M^2/8π m_H^2 f_H F_1(q^2,M^2,s_0),
where u̅=(1-u), μ_η_q=m^2_η/(m_u+m_d), s(u) = ( m_Q^2 - u̅ q^2 + uu̅ m_η^2 )/u and u_0 = (q^2 - s_0 + m_η ^2 + √((q^2 - s_0 + m_η ^2 )^2 - 4m_η ^2(q^2 - m_Q^2)))/2m_η^2. The invariant amplitude F_1(q^2,M^2,s_0) has been given in Ref. <cit.>, which can be written as a factorized form of the convolutions. As will be shown below, the high-twist terms will have quite small contributions to compare with the leading-twist terms, thus we will not discuss the uncertainties caused by the different choices of the high-twist LCDAs. For convenience, we take the η_q twist-3 LCDAs ϕ_3;η_q^p(u), ϕ_3;η_q^σ (u), and the twist-4 LCDAs ψ_4;η_q(u), ϕ_4;η_q(u), together with their parameters from Ref. <cit.>.
Using the resultant B(D)→η^(') TFFs, one can extract the CKM matrix element |V_ cd| or |V_ ub| by comparing the predictions with the experimental data, i.e. via the following equation <cit.>
B(H→η^(')ℓν_ℓ )/τ (H) = ∫_0^q^2_ maxdΓ/dq^2 (H→η^(')ℓν_ℓ),
where τ (H) is the H-meson lifetime, and the maximum of the squared momentum transfer q^2_ max = (m_H - m_η^('))^2.
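A sketch of this integration is given below. The differential width is not written out explicitly in this excerpt, so the standard massless-lepton expression dΓ/dq^2 = G_F^2 |V|^2 λ^{3/2}(m_H^2,m_P^2,q^2) |f_+(q^2)|^2/(192π^3 m_H^3), with λ the Källén function, is assumed; all numerical inputs are placeholders.

import math
from scipy.integrate import quad

G_F = 1.1663787e-5                              # Fermi constant in GeV^-2

def kallen(a, b, c):
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def branching_fraction(Vckm, m_H, m_P, f_plus, tau_H_inv_GeV):
    # tau_H_inv_GeV: lifetime in GeV^-1 (1 ps is about 1.519e12 GeV^-1)
    def dGamma(q2):
        lam = kallen(m_H ** 2, m_P ** 2, q2)
        return (G_F ** 2 * Vckm ** 2 * lam ** 1.5 * f_plus(q2) ** 2
                / (192.0 * math.pi ** 3 * m_H ** 3))
    width, _ = quad(dGamma, 0.0, (m_H - m_P) ** 2)   # massless-lepton limit
    return width * tau_H_inv_GeV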
§ NUMERICAL ANALYSIS
§.§ Input parameters
We adopt the following parameters to do the numerical calculation. According to the Particle Data Group (PDG) <cit.>, we take the charm-quark mass m_c(m̅_c)=1.27±0.02 GeV and the b-quark mass m_b(m̅_b)=4.18^+0.03_-0.02 GeV; the η, η', D and B-meson masses are m_η =0.5478 GeV, m_η'=0.9578 GeV, m_D^+=1.870 GeV and m_B^+=5.279 GeV, respectively; the lifetimes of the B^+ and D^+ mesons are τ (B^ + )=1.638±0.004 ps and τ(D^ + )=1.033±0.005 ps, respectively; the current-quark masses for the light u and d-quarks are m_u =2.16^+0.49_-0.26 MeV and m_d =4.67^+0.48_-0.17 MeV at the scale μ =2 GeV. As for the decay constants f_B and f_D, we take f_B =0.215^+0.007_-0.007 GeV <cit.> and f_D=0.142±0.006 GeV <cit.>. The renormalization scale is set as the typical momentum flow μ_B=√(m^2_B-m̅_b^2)≈ 3 GeV for B-meson decay or μ_D ≈ 1.4 GeV for D-meson decay. We also need to know the values of the non-perturbative vacuum condensates up to dimension-six, which include the double-quark condensates ⟨ qq̅⟩ and ⟨ g_sq̅q⟩ ^2, the quark-gluon condensate ⟨ g_sq̅σ TGq⟩, the four-quark condensate ⟨ g_s^2q̅q⟩ ^2, the double-gluon condensate ⟨α_s G^2 ⟩ and the triple-gluon condensate ⟨ g_s^3fG^3⟩, etc. We take their values as <cit.>,
⟨ qq̅⟩ = (-2.417_-0.114^+0.227)× 10^-2 GeV^3 ,
⟨ g_sq̅q⟩ ^2 = (2.082_-0.697^+0.734)× 10^-3 GeV^6 ,
⟨ g_sq̅σ TGq⟩ =(-1.934_-0.103^+0.188)× 10^-2 GeV^5 ,
⟨ g_s^2q̅q⟩ ^2 = (7.420_-2.483^+2.614)× 10^-3 GeV^6 ,
⟨α_s G^2 ⟩ = 0.038±0.011 GeV^4 ,
⟨ g_s^3fG^3⟩ ≈ 0.045 GeV^6 .
The ratio κ = ⟨ ss̅⟩/⟨ qq̅⟩= 0.74±0.03 is given in Ref. <cit.>. In order to make the calculation more accurate, all the vacuum condensates and current quark masses need to be run from their initial values at the scale μ_0 to the required scale by using the renormalization group equations (RGE) <cit.>.
§.§ The η_q decay constant and the moments ⟨ξ _2;η_q^n⟩
The continuum threshold parameter (s_0) and the Borel parameter M^2 are two important parameters for the sum rules analysis. When calculating the decay constant f_η_q, one may set its continuum threshold to be close to the squared mass of the η' meson, i.e. s_0=0.95±0.1 GeV^2 <cit.>. To determine the allowable M^2 range, e.g. the Borel window, for the η_q decay constant, we adopt the following criteria,
* The continuum contribution is less than 30%;
* The contributions of the six-dimensional condensates are no more than 5%;
* The value of f_η_q is stable in the Borel window;
* The ⟨ξ _2;η_q^0⟩ |_μ_0 is normalized in the Borel window, e.g. ⟨ξ^0_2;η_q ⟩|_μ_0=1.
We put the curves for the decay constant f_η_q versus the Borel parameter M^2 in Fig. <ref>, where the shaded band indicates the uncertainties from the errors of all the mentioned input parameters. The decay constant is flat in the allowable Borel window, which confirms the third criterion. Using the above four criteria and the chosen continuum threshold parameter, we put the numerical results of f_η_q in Table <ref>. As a comparison, we also present several predictions using the QCDSR and LQCD approaches. Our predictions are in good agreement with the QCDSR 2000 <cit.> and the LQCD 2021 predictions within errors <cit.>. The reason why we are slightly different from QCDSR 2000 is that their calculation only includes the contributions up to five dimensional operators, and our present one includes the dimension-6 vacuum condensation terms. Using the determined f_η_q, we then determine the moments of its twist-2 LCDA. Similarly, several important conditions need to be satisfied before the moments of η_q LCDA can be determined <cit.>.
Furthermore, in order to search for a suitable Borel window for the moments, one can adopt criteria similar to those of the traditional sum rules, i.e. keeping the dimension-six condensate's contribution to no more than 5% and the continuum contribution to no more than 40%. To determine the first two LCDA moments ⟨ξ _2;η_q^n⟩|_μ_0 with n=(2,4), we set the continuum contributions to be less than 35% and 40%, respectively. We find that the allowable Borel windows for the two moments ⟨ξ _2;η_q^2,4⟩|_μ are M^2∈[1.782,2.232] GeV^2 and M^2∈[2.740,3.258] GeV^2, respectively. Numerical results of the first two moments ⟨ξ _2;η_q^2,4⟩|_μ can then be obtained, which at the initial scale μ_0 are
⟨ξ _2;η_q ^2⟩ |_μ_0= 0.253±0.014,
⟨ξ _2;η_q ^4⟩ |_μ_0= 0.127±0.010.
§.§ The LCHO model parameters for ϕ_2;η_q
Combining the normalization condition (<ref>), the probability P_η_q≈0.3 of the qq̅ Fock state, and the moments ⟨ξ _2;η_q^(2,4)⟩|_μ_0 shown in Eqs.(<ref>, <ref>), the determined LCHO parameters are shown in Table <ref> and the corresponding LCDA ϕ_2;η_q is given in Fig. <ref>. Its two-humped behavior is caused by a_2;η_q ^2(μ _0)=0.156±0.042 and a_2;η_q^4(μ _0)=0.055±0.005, which are obtained by using their relations (<ref>) to the moments ⟨ξ _2;η_q^n⟩ calculated from the sum rules (<ref>). In this paper, we take m_q=300 MeV for the following calculations and use Δ m_q=± 50 MeV to estimate the uncertainty. Table <ref> shows that the parameters B_2 and B_4 and the quark transverse momentum ⟨𝐤_^2⟩ _η_q increase with the increment of the constituent quark mass, while the harmonic parameter β _2;η_q decreases gradually. Experimentally, the average quark transverse momentum of the pion, ⟨𝐤_ ^2⟩_π, is approximately of order (300 MeV)^2 <cit.>. So it is reasonable to require √(⟨𝐤_^2⟩_η_q) to have a value of a few hundred MeV <cit.>. For the case of m_q=300±50 MeV, we numerically obtain ⟨𝐤_^2⟩_η_q=0.123^+0.003_-0.002 GeV^2 ≈ (351^+4_-3 MeV)^2, which is reasonable and, in some sense, indicates the inner consistency of the LCHO model parameters. Moreover, by using the RGE, one can get ϕ_2;η_q(x, μ) at any scale μ <cit.>. Fig. <ref> shows the LCDA ϕ_2;η_q at several typical scales with m_q=300 MeV. At a low scale it shows a double-humped behavior; as the scale μ increases, the shape of ϕ_2;η_q becomes narrower; and when μ→∞, it tends to the single-peak asymptotic behavior for light mesons <cit.>, ϕ^ as_η_q(x,μ)|_μ→∞=6x(1-x).
We make a comparison of the properties of the LCHO model of the twist-2 LCDA ϕ_2;η_q with other theoretical predictions in Fig. <ref>. Fig. <ref> gives the results for μ=μ_0=1 GeV, where the asymptotic form <cit.>, the CZ form <cit.> and the behaviors given by the LCSR 2007 <cit.> and LCSR 2015 <cit.> are presented. For the LCSR 2007 result, its double-peaked behavior is caused by keeping only the first term of its Gegenbauer expansion together with the approximation a_2;η_q^2(μ _0)=a_2;η'_q^2(μ _0)=0.25 <cit.>. For the LCDA used in LCSR 2015 <cit.>, its behavior is close to our present one. It is obtained by using the approximation that the twist-2 LCDA ϕ_2;η_q has the same behavior as that of the pion twist-2 LCDA ϕ_2;π, e.g. a_2;η_q^2(μ _0)=a_2;π^2(μ _0)=0.17 and a_2;η_q^4(μ _0)=a_2;π^4(μ _0)=0.06, which are consistent with our Gegenbauer moments within errors [Since the twist-2 parts dominate the TFFs, this consistency also explains why our following LCSR predictions for the TFFs are close in shape with those of Ref.<cit.>.].
§.§ The TFFs and observable for the semileptonic decay B(D)^+→η^(')ℓ^+ν_ℓ
One of the most important applications of the η_q-meson LCDAs is the semileptonic decay H^+→η^(')ℓ^+ν_ℓ, whose main contribution in the QF scheme comes from the |η_q⟩ component. Here H^+ stands for B^+ or D^+. To derive the required H^+→η^(') TFFs, we take the mixing angle ϕ=(41.2^+0.05_-0.06)^∘ <cit.>.
The continuum threshold s^H→η^(')_0 and the Borel parameter M^2 are two important parameters for the LCSR of the TFFs. Following the usual choice for heavy-to-light TFFs, we set the continuum threshold near the squared mass of the first excited state of the D or B meson, respectively. To fix the Borel window for the TFFs, we require the contribution of the continuum states to be less than 30%. The determined values agree with Refs.<cit.>, and we will take the following values in our discussion:
s_0^D→η= 7.0±0.5 GeV^2, M^2_D→η = 3.0±0.5 GeV^2,
s_0^D→η' = 7.0±0.5 GeV^2, M^2_D→η' = 3.0±0.5 GeV^2,
s_0^B→η = 37.0± 1.0 GeV^2, M^2_B→η = 18.0±2.0 GeV^2,
s_0^B→η' = 37.0±1.0 GeV^2, M^2_B→η' = 18.0±2.0 GeV^2.
Using Eqs.(<ref>, <ref>) together with the LCSR (<ref>) for the TFF f^H→η_q_+(q^2), we then obtain the results for f_+^H→η^(')(q^2), where H represents B or D. Fig. <ref> shows how the total TFFs f_+^H→η^(')(q^2) change with increasing q^2, in which the twist-2 contribution up to NLO QCD corrections and the twist-3 and twist-4 contributions are presented separately. Fig. <ref> shows that the twist-2 terms dominate the TFFs. We also find that the NLO QCD corrections to the twist-2 terms are sizable and should be taken into account for a sound prediction. For example, at the large recoil point, the twist-2 NLO terms give about 15.8% (17.6%) and 6.4% (7.2%) contributions to the total TFFs f_+^D→η^(')(0) and f_+^B→η^(')(0), respectively. Table <ref> gives our present LCSR predictions for the TFFs f_+^D→η^(')(0) and f_+^B→η^(')(0). As a comparison, we have also presented the results derived from various theoretical approaches and experimental data in Table <ref>, including the LCSR approach <cit.>, the pQCD approach <cit.>, the covariant light front (CLF) approach <cit.>, the light front quark model (LFQM) approach <cit.>, the covariant confining quark model (CCQM) approach <cit.>, and the BESIII Collaboration <cit.>. The uncertainties of the TFFs f_+^H→η^(')(0) caused by the different input parameters are listed as follows,
f_+^B→η(0) = 0.145(_-0.004^+0.004)_s_0(_-0.002^+0.002)_M^2(_-0.007^+0.007)_m_b f_B
(_-0.005^+0.005)_f_η_q(_-0.0001^+0.0001)_ϕ
= 0.145_-0.010^+0.009,
f_+^B→η'(0) = 0.128(_-0.003^+0.003)_s_0(_-0.002^+0.002)_M^2(_-0.006^+0.006)_m_b f_B
(_-0.005^+0.005)_f_η_q(_-0.0001^+0.0002)_ϕ
= 0.128_-0.009^+0.008,
f_+^D→η(0) = 0.329 (_-0.004^+0.003)_s_0 (_-0.005^+0.009)_M^2 (_-0.009^+0.016)_m_c f_D
(_-0.010^+0.010)_f_η_q(_-0.0003^+0.0002)_ϕ
= 0.329_-0.015^+0.021,
f_+^D→η'(0) = 0.294(_-0.004^+0.003)_s_0(_-0.005^+0.009)_M^2(_-0.011^+0.017)_m_c f_D
(_-0.009^+0.009)_f_η_q(_-0.0003^+0.0002)_ϕ
= 0.294_-0.015^+0.021.
Here the second equalities give the combined errors, obtained by adding all the mentioned uncertainties in quadrature.
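As a quick numerical illustration (our own sketch, not part of the paper's analysis), the combined error quoted for f_+^B→η(0) can be reproduced by adding the individual uncertainties listed above in quadrature:

import math

# Individual (lower, upper) uncertainties of f_+^{B->eta}(0) from
# s_0, M^2, m_b f_B, f_{eta_q} and phi, as listed in the text.
errors = [(0.004, 0.004), (0.002, 0.002), (0.007, 0.007),
          (0.005, 0.005), (0.0001, 0.0001)]

lower = math.sqrt(sum(lo ** 2 for lo, _ in errors))
upper = math.sqrt(sum(up ** 2 for _, up in errors))
print(f"f_+^(B->eta)(0) = 0.145 -{lower:.4f} +{upper:.4f}")
# reproduces the quoted combined error of about 0.009-0.010 up to rounding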
The physically allowable ranges of the above four heavy-to-light TFFs are m_ℓ ^2 ≤q^2≤(m_D^ + - m_η)^2≈ 1.75 GeV^2, m_ℓ ^2 ≤q^2≤(m_D^ + - m_η' )^2≈ 0.84 GeV^2, m_ℓ ^2 ≤q^2≤(m_B^ + - m_η)^2≈ 22.40 GeV^2 and m_ℓ ^2 ≤q^2≤(m_B^ + - m_η' )^2≈ 18.67 GeV^2, respectively. The LCSR approach is applicable in the low and intermediate q^2 region; it can, however, be extended to the whole q^2 region via a proper extrapolation. In the present paper, we adopt the converging simplified series expansion (SSE) proposed in Refs.<cit.> for the extrapolation, which suggests a simple parameterization for the heavy-to-light TFF, e.g.
f_+^H→η^(')(q^2) = 1/1 - q^2/m_R^*^2∑_k b_k z^k(t,t_0)
where m_R^*=m_B^*=5.325 GeV (m_D^*=2.010 GeV) <cit.> are the vector-meson resonance masses and z(t,t_0) is defined as
z(t,t_0) = √(t_+ - t) - √(t_+ - t_0)/√(t_+ -t) + √(t_+ - t_0).
Here t_± = (m_H^+±m_η^('))^2 and t_0 = t_+ (1 - √(1 - t_-/t_+)) is a free parameter. The coefficients b_k can be fixed by requiring Δ <1%, where the parameter Δ measures the quality of the extrapolation and is defined as
Δ = ∑_t |F_i(t) - F_i^ fit(t)|/∑_t |F_i(t)|× 100,
where t ∈ [0,1/40, ⋯ ,40/40] × 13.0(1.0) GeV^2 for the η-meson and t ∈ [0,1/40, ⋯ ,40/40] × 11.2(0.5) GeV^2 for the η'-meson. The two coefficients b_1,2, with all input parameters set to their central values, are listed in Table <ref>. The extrapolation quality parameter Δ is less than ∼ 0.8% in all cases. The extrapolated TFFs in the whole q^2-region are given in Fig. <ref>, where some typical theoretical and experimental results are presented as a comparison, such as CCQM <cit.>, LFQM <cit.>, LCSR 2015 <cit.>, pQCD <cit.> and BESIII 2020 <cit.>. The solid lines in Fig. <ref> denote the central values of the LCSR predictions, and the shaded areas are the theoretical uncertainties from all the mentioned error sources. The thicker shaded bands represent the LCSR predictions extrapolated to the physically allowable q^2-region. Fig. <ref> indicates that: 1) our present LCSR prediction of f_+^D→η(q^2) is in good agreement with the BESIII data <cit.>; 2) our present LCSR prediction of f_+^D→η'(q^2) is consistent with the LFQM <cit.> and LCSR 2015 <cit.> predictions within errors; 3) our present LCSR predictions of f_+^B→η^(')(q^2) are close to the LCSR 2015 prediction <cit.>, and their values at q^2= 0 are consistent with the pQCD prediction <cit.> within errors.
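For illustration, the SSE parameterization can be coded up directly. The following sketch is ours: the b_k coefficients are purely illustrative placeholders (not the fitted values of Table <ref>), and the meson masses are rounded PDG-like inputs.

import numpy as np

def z(t, t_plus, t0):
    # conformal variable z(t, t_0)
    a, b = np.sqrt(t_plus - t), np.sqrt(t_plus - t0)
    return (a - b) / (a + b)

def f_plus_sse(q2, b_coeffs, m_R, m_H, m_P):
    # f_+(q^2) = [sum_k b_k z^k(q^2, t_0)] / (1 - q^2/m_R^2)
    t_plus = (m_H + m_P) ** 2
    t_minus = (m_H - m_P) ** 2
    t0 = t_plus * (1.0 - np.sqrt(1.0 - t_minus / t_plus))
    zs = z(q2, t_plus, t0)
    series = sum(b * zs ** k for k, b in enumerate(b_coeffs))
    return series / (1.0 - q2 / m_R ** 2)

# B -> eta example with illustrative coefficients only
m_B, m_eta, m_Bstar = 5.279, 0.548, 5.325   # GeV
b_coeffs = [0.145, -0.3, 0.1]               # hypothetical b_0, b_1, b_2
q2 = np.linspace(0.0, 22.4, 45)             # physically allowed region in GeV^2
print(f_plus_sse(q2, b_coeffs, m_Bstar, m_B, m_eta)[:3])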
Fig. <ref> shows the differential decay widths for B(D)^+→η^(')ℓ^+ν_ℓ without the CKM matrix elements. As a comparison, the predictions of different theoretical approaches and the experimental data, such as CCQM <cit.>, LFQM <cit.>, LCSR <cit.> and the BESIII collaboration <cit.>, are also presented. The differential decay width dΓ/|V_ cd|dq^2 (D^+→ηℓ^ + ν _ℓ) agrees with the BESIII 2018 <cit.> and BESIII 2020 <cit.> measurements within errors.
By matching the branching fractions and decay lifetimes given by the PDG with the decay widths predicted by Eq.(<ref>), one may derive the CKM matrix elements |V_ub| and |V_cd|. We put our results in Table <ref>, where the errors are caused by all the mentioned error sources and the PDG errors for the branching fractions and decay lifetimes. Some typical measured values of |V_ub| and |V_cd| are also given in Table <ref>. The predicted |V_cd| is within the error range of the BESIII 2020 experimental result. Using the fixed CKM matrix elements, our final predictions for the branching fractions are: B(D→η e ν_e ) = (1.11 ± 0.07) ×10^ - 3, B(D→ημν_μ) = (1.04 ± 0.11) ×10^ - 3, B(D→η' e ν_e) = (2.0 ± 0.4) ×10^ - 4, B(B→ηℓν_ℓ) = (3.9 ± 0.5) ×10^ - 5, and B(B→η' ℓν_ℓ) = (2.3 ± 0.8) ×10^ - 5.
§ SUMMARY
In this paper, we have suggested an LCHO model (<ref>) for the η_q-meson leading-twist LCDA ϕ_2;η_q(x,μ), whose moments have been calculated by using QCD sum rules based on the QCD background field. Compared with the conventional Gegenbauer expansion of the LCDA, the LCHO model usually has better end-point behavior due to the BHL prescription, which is helpful for suppressing the end-point singularity in heavy-to-light meson decays. The QCD sum rule for the zeroth-order moment can be used to fix the η_q decay constant, and we obtain f_η_q=0.141±0.005 GeV. As an explicit application of ϕ_2;η_q, we then calculate the B(D)^+ →η^(') TFFs under the QF scheme for the η-η' mixing by using QCD light-cone sum rules up to twist-4 accuracy and including the next-to-leading order QCD corrections to the dominant twist-2 part. Our LCSR predictions for the TFFs are consistent with most theoretical predictions and with the recent BESIII data within errors. By applying those TFFs, we obtain the decay widths of B(D)^+→η^(')ℓ^+ν_ℓ. The magnitudes of the CKM matrix elements |V_ ub| and |V_ cd| have also been determined by inversely using the PDG values for the branching fractions and decay lifetimes. Future more precise data from the high-luminosity Belle II experiment <cit.> and the super tau-charm factory <cit.> will be helpful for testing all these results.
§ ACKNOWLEDGMENTS
This work was supported in part by the Chongqing Graduate Research and Innovation Foundation under Grant No. CYB23011 and No.ydstd1912, by the National Natural Science Foundation of China under Grant No.12175025, No.12265010, No.12265009 and No.12147102, the Project of Guizhou Provincial Department of Science and Technology under Grant No.ZK[2021]024 and No.ZK[2023]142, and the Project of Guizhou Provincial Department of Education under Grant No.KY[2021]030, and the Key Laboratory for Particle Physics of Guizhou Minzu University No.GZMUZK[2022]PT01.
99
Ke:2010htz
H. W. Ke, X. Q. Li and Z. T. Wei,
“Determining the η-η' mixing by the newly measured BR(D(D_s)→η(η')+l̅+ν_l)”,
https://doi.org/10.1140/epjc/s10052-010-1383-6
Eur. Phys. J. C 69 (2010) 133.
Ke:2011fj
H. W. Ke, X. H. Yuan and X. Q. Li,
“Fraction of the gluonium component in η' and η”,
https://doi.org/10.1142/S0217751X11054796
Int. J. Mod. Phys. A 26 (2011) 4731.
Cao:2012nj
F. G. Cao,
“Determination of the η-η^' mixing angle”,
https://doi.org/10.1103/PhysRevD.85.057501
Phys. Rev. D 85 (2012) 057501.
Gilman:1987ax
F. J. Gilman and R. Kauffman,
“The eta Eta-prime Mixing Angle”,
https://doi.org/10.1103/PhysRevD.37.3348
Phys. Rev. D 36 (1987) 2761.
Ball:1995zv
P. Ball, J. M. Frere and M. Tytgat,
“Phenomenological evidence for the gluon content of eta and eta-prime”,
https://doi.org/10.1016/0370-2693(95)01287-7
Phys. Lett. B 365 (1996) 367.
Feldmann:2002kz
T. Feldmann and P. Kroll,
“Mixing of pseudoscalar mesons”,
https://doi.org/10.1238/Physica.Topical.099a00013
Phys. Scripta T 99 (2002) 13.
Kroll:2002nt
P. Kroll and K. Passek-Kumericki,
“The Two gluon components of the eta and eta-prime mesons to leading twist accuracy”,
https://doi.org/10.1103/PhysRevD.67.054017
Phys. Rev. D 67 (2003) 054017.
Ambrosino:2009sc
F. Ambrosino, A. Antonelli, M. Antonelli, F. Archilli, P. Beltrame, G. Bencivenni, S. Bertolucci, C. Bini, C. Bloise and S. Bocchetta, et al.
“A Global fit to determine the pseudoscalar mixing angle and the gluonium content of the eta-prime meson”,
https://doi.org/10.1088/1126-6708/2009/07/105
JHEP 07 (2009) 105.
Ball:2007hb
P. Ball and G. W. Jones,
“B →η^(') Form Factors in QCD”,
https://doi.org/10.1088/1126-6708/2007/08/025
JHEP 0708 (2007) 025.
Duplancic:2015zna
G. Duplancic and B. Melic,
“Form factors of B, B_s →η^' and D, D_s→η^' transitions from QCD light-cone sum rules”,
https://doi.org/10.1007/JHEP11(2015)138
JHEP 1511 (2015) 138.
Feldmann:1999uf
T. Feldmann,
“Quark structure of pseudoscalar mesons,”
https://doi.org/10.1142/S0217751X00000082
Int. J. Mod. Phys. A 15 (2000) 159.
Feldmann:1998su
T. Feldmann,
“Mixing and decay constants of pseudoscalar mesons: Octet singlet versus quark flavor basis”,
https://doi.org/10.1016/S0920-5632(99)00152-8
Nucl. Phys. B Proc. Suppl. 74 (1999) 151.
Feldmann:1998vh
T. Feldmann, P. Kroll and B. Stech,
“Mixing and decay constants of pseudoscalar mesons”,
https://doi.org/10.1103/PhysRevD.58.114006
Phys. Rev. D 58 (1998) 114006.
CLEO:2007vpk
N. E. Adam et al. [CLEO Collaboration],
“A Study of Exclusive Charmless Semileptonic B Decay and |V_(ub)|”,
https://doi.org/10.1103/PhysRevLett.99.041802
Phys. Rev. Lett. 99 (2007) 041802.
BESIII:2013iro
M. Ablikim et al. [BESIII Collaboration],
“Precision measurements of B(D^+ →μ^+ ν_μ), the pseudoscalar decay constant f_D^+, and the quark mixing matrix element |V_ cd|”,
https://doi.org/10.1103/PhysRevD.89.051104
Phys. Rev. D 89 (2014) 051104.
BaBar:2014xzf
J. P. Lees et al. [BaBar Collaboration],
“ Measurement of the D^0 →π^- e^+ ν_e differential decay branching fraction as a function of q^2 and study of form factor parameterizations”,
https://doi.org/10.1103/PhysRevD.91.052022
Phys. Rev. D 91 (2015) 052022.
CLEO:2009svp
D. Besson et al. [CLEO Collaboration],
“Improved measurements of D meson semileptonic decays to π and K mesons”,
https://doi.org/10.1103/PhysRevD.80.032005
Phys. Rev. D 80 (2009) 032005.
HFLAV:2019otj
Y. S. Amhis et al. [HFLAV],
“Averages of b-hadron, c-hadron, and τ-lepton properties as of 2018”,
https://doi.org/10.1140/epjc/s10052-020-8156-7
Eur. Phys. J. C 81 (2021) 226.
Lubicz:2017syv
V. Lubicz et al. [ETM Collaboration],
“Scalar and vector form factors of D →π(K) ℓν decays with N_f=2+1+1 twisted fermions”,
https://doi.org/10.1103/PhysRevD.96.054514
Phys. Rev. D 96 (2017) 054514.
BaBar:2011xxm
J. P. Lees et al. [BaBar Collaboration],
“Study of B̅→ X_u ℓν̅ decays in BB̅ events tagged by a fully reconstructed B-meson decay and determination of |V_ub|”,
https://doi.org/10.1103/PhysRevD.86.032004
Phys. Rev. D 86 (2012) 032004.
Belle:2005uxj
I. Bizjak et al. [Belle Collaboration],
“Determination of |V_ub| from measurements of the inclusive charmless semileptonic partial rates of B mesons using full reconstruction tags”,
https://doi.org/10.1103/PhysRevLett.95.241801
Phys. Rev. Lett. 95 (2005) 241801.
Gonzalez-Solis:2018ooo
S. Gonzàlez-Solís and P. Masjuan,
“Study of B→πℓν_ℓ and B^+→η^(')ℓ^+ν_ℓ decays and determination of |V_ub|”,
https://doi.org/10.1103/PhysRevD.98.034027
Phys. Rev. D 98 (2018) 034027.
Zyla:2022zbs
P. A. Zyla et al. [Particle Data Group],
“Review of Particle Physics”,
https://doi.org/10.1093/ptep/ptaa104
PTEP 2022 (2022) 083C01.
CLEO:2008xqh
R. E. Mitchell et al. [CLEO Collaboration],
“Observation of D^+ →η e^+ ν(e)”,
https://doi.org/10.1103/PhysRevLett.102.081801
Phys. Rev. Lett. 102 (2009), 081801.
CLEO:2010pjh
J. Yelton et al. [CLEO Collaboration],
“Studies of D^+ →η', η, ϕ e^+ ν_e”,
https://doi.org/10.1103/PhysRevD.84.032001
Phys. Rev. D 84 (2011), 032001.
BESIII:2018eom
M. Ablikim et al. [BESIII Collaboration],
“Study of the decays D^+→η^(') e^+ν_e”,
https://doi.org/10.1103/PhysRevD.97.092009
Phys. Rev. D 97 (2018) 092009.
Ablikim:2020hsc
M. Ablikim [BESIII Collaboration],
“First Observation of D^+ →ημ^+ν_μ and Measurement of Its Decay Dynamics”,
https://doi.org/10.1103/PhysRevLett.124.231801
Phys. Rev. Lett. 124 (2020) 231801.
BaBar:2008byi
B. Aubert et al. [BaBar Collaboration],
“Measurements of B →{π, η, η^'}ℓν_ℓ Branching Fractions and Determination of |V_ub| with Semileptonically Tagged B Mesons”,
https://doi.org/10.1103/PhysRevLett.101.081801
Phys. Rev. Lett. 101 (2008) 081801.
BaBar:2010npl
P. del Amo Sanchez et al. [BaBar Collaboration],
“Measurement of the B^0 →π^ℓℓ^+ ν and B^+ →η^(')ℓ^+ ν Branching Fractions, the B^0 →π^- ℓ^+ ν and B^+ →ηℓ^+ ν Form-Factor Shapes, and Determination of |V_ub|”,
https://doi.org/10.1103/PhysRevD.83.052011
Phys. Rev. D 83 (2011) 052011.
Belle:2017pzx
C. Beleño et al. [Belle Collaboration],
“Measurement of the decays B→ηℓν_ℓ and B→η'ℓν_ℓ in fully reconstructed events at Belle”,
https://doi.org/10.1103/PhysRevD.96.091102
Phys. Rev. D 96 (2017) 091102.
Belle:2021hah
U. Gebauer et al. [Belle Collaboration],
“Measurement of the branching fractions of the B^+ →ηℓ^+ ν_ℓ and B^+ →η^'ℓ^+ ν_ℓ decays with signal-side only reconstruction in the full q^2 range”,
https://doi.org/10.1103/PhysRevD.106.032013
Phys. Rev. D 106 (2022) 032013.
Cheng:2010yd
H. Y. Cheng and K. C. Yang,
“Charmless Hadronic B Decays into a Tensor Meson”,
https://doi.org/10.1103/PhysRevD.83.034001
Phys. Rev. D 83 (2011) 034001.
Braun:1988qv
V. M. Braun and I. E. Filyanov,
“QCD Sum Rules in Exclusive Kinematics and Pion Wave Function”,
https://doi.org/10.1007/BF01548594
Z. Phys. C 44 (1989) 157.
Balitsky:1989ry
I. I. Balitsky, V. M. Braun and A. V. Kolesnichenko,
“Radiative Decay Σ^+ → p γ in Quantum Chromodynamics”,
https://doi.org/10.1016/0550-3213(89)90570-1
Nucl. Phys. B 312 (1989) 509.
Chernyak:1990ag
V. L. Chernyak and I. R. Zhitnitsky,
“B meson exclusive decays into baryons”,
https://doi.org/10.1016/0550-3213(90)90612-H
Nucl. Phys. B 345 (1990) 137.
Ball:1991bs
P. Ball, V. M. Braun and H. G. Dosch,
“Form-factors of semileptonic D decays from QCD sum rules”,
https://doi.org/10.1103/PhysRevD.44.3567
Phys. Rev. D 44 (1991) 3567.
Brodsky:1981jv
S. J. Brodsky, T. Huang and G. P. Lepage,
“Hadronic wave functions and high momentum transfer interactions in quantum chromodynamics”,
Conf. Proc. C 810816 (1981) 143. SLAC-PUB-16520.
Lepage:1982gd
G. P. Lepage, S. J. Brodsky, T. Huang and P. B. Mackenzie,
“Hadronic Wave Functions in QCD”,
CLNS-82-522.
Huang:1986wm
T. Huang, X. N. Wang, X. D. Xiang and S. J. Brodsky,
“The Quark Mass and Spin Effects in the Mesonic Structure”,
https://doi.org/10.1103/PhysRevD.35.1013
Phys. Rev. D 35 (1987) 1013.
Huang:1989gv
T. Huang and Z. Huang,
“Quantum Chromodynamics in Background Fields”,
https://doi.org/10.1103/PhysRevD.39.1213
Phys. Rev. D 39 (1989) 1213.
Shifman:1978bx
M. A. Shifman, A. I. Vainshtein and V. I. Zakharov,
“QCD and Resonance Physics. Theoretical Foundations”,
https://doi.org/10.1016/0550-3213(79)90022-1
Nucl. Phys. B 147 (1979) 385.
Hubschmid:1982pa
W. Hubschmid and S. Mallik,
“Operator Expansion at Short Distance in QCD”,
https://doi.org/10.1016/0550-3213(82)90134-1
Nucl. Phys. B 207 (1982) 29.
Govaerts:1984bk
J. Govaerts, F. de Viron, D. Gusbin and J. Weyers,
“QCD Sum Rules and Hybrid Mesons”,
https://doi.org/10.1016/0550-3213(84)90583-2
Nucl. Phys. B 248 (1984) 1.
Reinders:1984sr
L. J. Reinders, H. Rubinstein and S. Yazaki,
“Hadron Properties from QCD Sum Rules”,
https://doi.org/10.1016/0370-1573(85)90065-1
Phys. Rept. 127 (1985) 1.
Elias:1987ac
V. Elias, T. G. Steele and M. D. Scadron,
“q q̅ and Higher Dimensional Condensate Contributions to the Nonperturbative Quark Mass”,
https://doi.org/10.1103/PhysRevD.38.1584
Phys. Rev. D 38 (1988) 1584.
Zhang:2021wnv
Y. Zhang, T. Zhong, H. B. Fu, W. Cheng and X. G. Wu,
“Ds-meson leading-twist distribution amplitude within the QCD sum rules and its application to the B_s→ D_s transition form factor”,
https://doi.org/10.1103/PhysRevD.103.114024
Phys. Rev. D 103 (2021) 114024.
Zhong:2018exo
T. Zhong, Y. Zhang, X. G. Wu, H. B. Fu and T. Huang,
“The ratio ℛ(D) and the D-meson distribution amplitude”,
https://doi.org/10.1140/epjc/s10052-018-6387-7
Eur. Phys. J. C 78 (2018) 937.
Fu:2018vap
H. B. Fu, L. Zeng, W. Cheng, X. G. Wu and T. Zhong,
“Longitudinal leading-twist distribution amplitude of the J/ψ meson within the background field theory”,
https://doi.org/10.1103/PhysRevD.97.074025
Phys. Rev. D 97 (2018) 074025.
Zhang:2017rwz
Y. Zhang, T. Zhong, X. G. Wu, K. Li, H. B. Fu and T. Huang,
“Uncertainties of the B→ D transition form factor from the D-meson leading-twist distribution amplitude”,
https://doi.org/10.1140/epjc/s10052-018-5551-4
Eur. Phys. J. C 78 (2018) 76.
Fu:2016yzx
H. B. Fu, X. G. Wu, W. Cheng and T. Zhong,
“ρ -meson longitudinal leading-twist distribution amplitude within QCD background field theory”,
https://doi.org/10.1103/PhysRevD.94.074004
Phys. Rev. D 94 (2016) 074004.
Hu:2021zmy
D. D. Hu, H. B. Fu, T. Zhong, L. Zeng, W. Cheng and X. G. Wu,
“η-meson leading-twist distribution amplitude within QCD sum rule approach and its application to the semi-leptonic decay D_s^+ →ηℓ^+ν_ℓ”,
https://doi.org/10.1140/epjc/s10052-021-09958-0
Eur. Phys. J. C 82 (2022) 12.
Bali:2014pva
G. S. Bali, S. Collins, S. Dürr and I. Kanamori,
“D_s →η, η' semileptonic decay form factors with disconnected quark loop contributions”,
https://doi.org/10.1103/PhysRevD.91.014503
Phys. Rev. D 91 (2015) 014503.
Cheng:2020vwr
S. Cheng, A. Khodjamirian and A. V. Rusov,
“Pion light-cone distribution amplitude from the pion electromagnetic form factor”,
https://doi.org/10.1103/PhysRevD.102.074022
Phys. Rev. D 102 (2020) 074022.
Zhong:2021epq
T. Zhong, Z. H. Zhu, H. B. Fu, X. G. Wu and T. Huang,
“Improved light-cone harmonic oscillator model for the pionic leading-twist distribution amplitude”,
https://doi.org/10.1103/PhysRevD.104.016021
Phys. Rev. D 104 (2021) 016021.
Ball:2004ye
P. Ball and R. Zwicky,
“New results on B →π, K, η decay formfactors from light-cone sum rules”,
https://doi.org/10.1103/PhysRevD.71.014015
Phys. Rev. D 71 (2005) 014015.
DeFazio:2000my
F. De Fazio and M. R. Pennington,
“Radiative ϕ meson decays and η - η^' mixing: A QCD sum rule analysis”,
https://doi.org/10.1088/1126-6708/2000/07/051
JHEP 07 (2000) 051.
Ali:1998eb
A. Ali, G. Kramer and C. D. Lu,
“Experimental tests of factorization in charmless nonleptonic two-body B decays”,
https://doi.org/10.1103/PhysRevD.58.094009
Phys. Rev. D 58 (1998) 094009.
Dhiman:2019qaa
N. Dhiman, H. Dahiya, C. R. Ji and H. M. Choi,
“Study of twist-2 distribution amplitudes and the decay constants of pseudoscalar and vector heavy mesons in light-front quark model”,
https://doi.org/10.22323/1.374.0038
PoS LC2019 (2019) 038.
Hwang:2010hw
C. W. Hwang,
“Analyses of decay constants and light-cone distribution amplitudes for S-wave heavy meson”,
https://doi.org/10.22323/1.374.0038
Phys. Rev. D 81 (2010) 114024.
Geng:2016pyr
C. Q. Geng, C. C. Lih and C. Xia,
“Some heavy vector and tensor meson decay constants in light-front quark model”,
https://doi.org/10.1140/epjc/s10052-016-4172-z
Eur. Phys. J. C 76 (2016) 313.
Choi:2007se
H. M. Choi,
“Decay constants and radiative decays of heavy mesons in light-front quark model”,
https://doi.org/10.1103/PhysRevD.75.073016
Phys. Rev. D 75 (2007) 073016.
Dercks:2017lfq
D. Dercks, H. Dreiner, M. E. Krauss, T. Opferkuch and A. Reinert,
“R-Parity Violation at the LHC”,
https://doi.org/10.1140/epjc/s10052-017-5414-4
Eur. Phys. J. C 77 (2017) 856.
Becirevic:1998ua
D. Becirevic, P. Boucaud, J. P. Leroy, V. Lubicz, G. Martinelli, F. Mescia and F. Rapuano,
“Nonperturbatively improved heavy - light mesons: Masses and decay constants”,
https://doi.org/10.1103/PhysRevD.60.074501
Phys. Rev. D 60 (1999) 074501.
FermilabLattice:2014tsy
A. Bazavov et al. [Fermilab Lattice and MILC],
“Charmed and Light Pseudoscalar Meson Decay Constants from Four-Flavor Lattice QCD with Physical Light Quarks”,
https://doi.org/10.22323/1.374.0038
Phys. Rev. D 90 (2014) 074509.
Bali:2021qem
G. S. Bali et al. [RQCD Collaboration],
“Masses and decay constants of the η and η' mesons from lattice QCD”,
https://doi.org/10.1007/JHEP08(2021)137
JHEP 08 (2021) 137.
Cvetic:2004qg
G. Cvetic, C. S. Kim, G. L. Wang and W. Namgung,
“Decay constants of heavy meson of 0-state in relativistic Salpeter method”,
https://doi.org/10.1016/j.physletb.2004.06.092
Phys. Lett. B 596 (2004) 84.
Wang:2005qx
G. L. Wang,
“Decay constants of heavy vector mesons in relativistic Bethe-Salpeter method”,
https://doi.org/10.1016/j.physletb.2005.12.005
Phys. Lett. B 633 (2006) 492.
Bhatnagar:2009jg
S. Bhatnagar, S. Y. Li and J. Mahecha,
“power counting of various Dirac covariants in hadronic Bethe-Salpeter wavefunctions for decay constant calculations of pseudoscalar mesons”,
https://doi.org/10.1142/S0218301311018460
Int. J. Mod. Phys. E 20 (2011) 1437.
Hwang:1996ha
D. S. Hwang and G. H. Kim,
“Decay constants of B, B^* and D, D^* mesons in relativistic mock meson model”,
https://doi.org/10.1103/PhysRevD.55.6944
Phys. Rev. D 55 (1997) 6944.
Capstick:1989ra
S. Capstick and S. Godfrey,
“Pseudoscalar Decay Constants in the Relativized Quark Model and Measuring the CKM Matrix Elements”,
https://doi.org/10.1103/PhysRevD.41.2856
Phys. Rev. D 41 (1990) 2856.
Ebert:2006hj
D. Ebert, R. N. Faustov and V. O. Galkin,
“Relativistic treatment of the decay constants of light and heavy mesons”,
https://doi.org/10.1016/j.physletb.2006.02.042
Phys. Lett. B 635 (2006) 93.
Yazarloo:2016luc
B. H. Yazarloo and H. Mehraban,
“Study of B and B_s mesons with a Coulomb plus exponential type potential”,
https://doi.org/10.1209/0295-5075/116/31004
EPL 116 (2016) 31004.
Guo:1991eb
X. H. Guo and T. Huang,
“Hadronic wavefunctions in D and B decays”,
https://doi.org/10.1103/PhysRevD.43.2931
Phys. Rev. D 43 (1991) 2931.
Huang:1994dy
T. Huang, B. Q. Ma and Q. X. Shen,
“Analysis of the pion wavefunction in light cone formalism”,
https://doi.org/10.1103/PhysRevD.49.1490
Phys. Rev. D 49 (1994) 1490.
Jaus:1991cy
W. Jaus,
“Relativistic constituent quark model of electroweak properties of light mesons”,
https://doi.org/10.1103/PhysRevD.44.2851
Phys. Rev. D 44 (1991) 2851.
Choi:1996mq
H. M. Choi and C. R. Ji,
“Light cone quark model predictions for radiative meson decays”,
https://doi.org/10.1016/S0375-9474(97)00052-3
Nucl. Phys. A 618 (1997) 291.
Ji:1992yf
C. R. Ji, P. L. Chung and S. R. Cotanch,
“Light cone quark model axial vector meson wave function”,
https://doi.org/10.1103/PhysRevD.45.4214
Phys. Rev. D 45 (1992) 4214.
Wu:2008yr
X. G. Wu and T. Huang,
“Kaon Electromagnetic Form-Factor within the k(T) Factorization Formalism and It's Light-Cone Wave Function”,
https://doi.org/10.1088/1126-6708/2008/04/043
JHEP 04 (2008) 043.
Wu:2011gf
X. G. Wu and T. Huang,
“Constraints on the Light Pseudoscalar Meson Distribution Amplitudes from Their Meson-Photon Transition Form Factors”,
https://doi.org/10.1103/PhysRevD.84.074011
Phys. Rev. D 84 (2011) 074011.
Huang:2013yya
T. Huang, T. Zhong and X. G. Wu,
“Determination of the pion distribution amplitude”,
https://doi.org/10.1103/PhysRevD.88.034013
Phys. Rev. D 88, 034013 (2013).
Wu:2012kw
X. G. Wu, T. Huang and T. Zhong,
“Information on the Pion Distribution Amplitude from the Pion-Photon Transition Form Factor with the Belle and BaBar Data”,
https://doi.org/10.1088/1674-1137/37/6/063105
Chin. Phys. C 37, 063105 (2013).
Duplancic:2008ix
G. Duplancic, A. Khodjamirian, T. Mannel, B. Melic and N. Offen,
“Light-cone sum rules for B→π form factors revisited”,
https://doi.org/10.1088/1126-6708/2008/04/014
JHEP 04 (2008) 014.
Fu:2013wqa
H. B. Fu, X. G. Wu, H. Y. Han, Y. Ma and T. Zhong,
“|V_cb| from the semileptonic decay B→ D ℓν̅_ℓ and the properties of the D meson distribution amplitude”,
https://doi.org/doi:10.1016/j.nuclphysb.2014.04.021
Nucl. Phys. B 884 (2014) 172.
Colangelo:2000dp
P. Colangelo and A. Khodjamirian,
“QCD sum rules, a modern perspective,”
https://doi.org/10.1142/9789812810458_0033
arXiv:hep-ph/0010175 [hep-ph].
Narison:2014wqa
S. Narison,
“Mini-review on QCD spectral sum rules”,
https://doi.org/10.1016/j.nuclphysbps.2015.01.041
Nucl. Part. Phys. Proc. 258 (2015) 189.
Narison:2014ska
S. Narison,
“Improved f_D*_(s), f_B*_(s) and f_B_c from QCD Laplace sum rules,”
https://doi.org/10.1142/S0217751X1550116X
Int. J. Mod. Phys. A 30 (2015) 1550116.
Metcalf:1979iw
W. J. Metcalf, I. J. R. Aitchison, J. LeBritton, D. McCal, A. C. Melissinos, A. P. Contogouris, S. Papadopoulos, J. Alspector, S. Borenstein and G. R. Kalbfleisch, et al.
“The Magnitude of Parton Intrinsic Transverse Momentum”,
https://doi.org/10.1016/0370-2693(80)90449-9
Phys. Lett. B 91 (1980) 275.
Lepage:1980fj
G. P. Lepage and S. J. Brodsky,
“Exclusive Processes in Perturbative Quantum Chromodynamics,”
https://doi.org/10.1103/PhysRevD.22.2157
Phys. Rev. D 22, 2157 (1980).
Chernyak:1981zz
V. L. Chernyak and A. R. Zhitnitsky,
“Exclusive Decays of Heavy Mesons,”
https://doi.org/10.1016/0550-3213(83)90251-1
Nucl. Phys. B 201, 492 (1982).
Wang:2014vra
Z. G. Wang,
“B-S transition form-factors with the light-cone QCD sum rules”,
https://doi.org/10.1103/PhysRevD.78.059901
Eur. Phys. J. C 75 (2015) 50.
Offen:2013nma
N. Offen, F. A. Porkert and A. Schäfer,
“Light-cone sum rules for the D_s→η^(')ℓν_ℓ form factor”,
https://doi.org/10.1103/PhysRevD.88.034023
Phys. Rev. D 88 (2013) 034023.
Charng:2006zj
Y. Y. Charng, T. Kurimoto and H. n. Li,
“Gluonic contribution to B →η^(') form factors”,
https://doi.org/10.1103/PhysRevD.78.059901
Phys. Rev. D 74 (2006) 074024.
Chen:2009qk
C. H. Chen, Y. L. Shen and W. Wang,
“|V(ub)| and B →η^(') Form Factors in Covariant Light Front Approach”,
https://doi.org/10.1016/j.physletb.2010.02.056
Phys. Lett. B 686 (2010) 118.
Verma:2011yw
R. C. Verma,
“Decay constants and form factors of s-wave and p-wave mesons in the covariant light-front quark model”,
https://doi.org/10.1088/0954-3899/39/2/025005
J. Phys. G 39 (2012) 025005.
Ivanov:2019nqd
M. A. Ivanov, J. G. Körner, J. N. Pandya, P. Santorelli, N. R. Soni and C. T. Tran,
“Exclusive semileptonic decays of D and D_s mesons in the covariant confining quark model”,
https://doi.org/10.1007/s11467-019-0908-1
Front. Phys. (Beijing) 14 (2019) 64401.
Bourrely:2008za
C. Bourrely, I. Caprini and L. Lellouch,
“Model-independent description of B →πℓν decays and a determination of |V_(ub)|”,
https://doi.org/10.1103/PhysRevD.82.099902
Phys. Rev. D 79 (2009) 013008.
Bharucha:2010im
A. Bharucha, T. Feldmann and M. Wick,
“Theoretical and Phenomenological Constraints on Form Factors for Radiative and Semi-Leptonic B-Meson Decays”,
https://doi.org/10.1007/JHEP09(2010)090
JHEP 09 (2010) 090.
FermilabLattice:2015mwy
J. A. Bailey et al. [Fermilab Lattice and MILC],
“|V_ub| from B→πℓν decays and (2+1)-flavor lattice QCD”,
https://doi.org/10.1103/PhysRevD.92.014024
Phys. Rev. D 92 (2015) 014024.
Belle-II:2018jsg
E. Kou et al. [Belle-II],
“The Belle II Physics Book,”
https://doi.org/10.1093/ptep/ptz106
PTEP 2019, 123C01 (2019).
Achasov:2023gey
M. Achasov, X. C. Ai, R. Aliberti, Q. An, X. Z. Bai, Y. Bai, O. Bakina, A. Barnyakov, V. Blinov and V. Bobrovnikov, et al.
“STCF Conceptual Design Report: Volume I - Physics & Detector,”
https://arxiv.org/abs/2303.15790
arXiv:2303.15790 [hep-ex].
|
http://arxiv.org/abs/2307.06857v1 | 20230711175148 | Self-consistency for open-ended generations | [
"Siddhartha Jain",
"Xiaofei Ma",
"Anoop Deoras",
"Bing Xiang"
] | cs.AI | [
"cs.AI",
"cs.CL",
"cs.LG"
] |
Self-consistency for open-ended generations
Siddhartha Jain, Xiaofei Ma, Anoop Deoras, Bing Xiang
=====================================================
In this paper, we present a novel approach for improving the quality and consistency of generated outputs from large-scale pre-trained language models (LLMs). Self-consistency has emerged as an effective approach for prompts with fixed answers, selecting the answer with the highest number of votes. In this paper, we introduce a generalized framework for self-consistency that extends its applicability beyond problems that have fixed-answer answers. Through extensive simulations, we demonstrate that our approach consistently recovers the optimal or near-optimal generation from a set of candidates. We also propose lightweight parameter-free similarity functions that show significant and consistent improvements across code generation, autoformalization, and summarization tasks, even without access to token log probabilities. Our method incurs minimal computational overhead, requiring no auxiliary reranker models or modifications to the existing model.
§ INTRODUCTION
The rapid advancement and remarkable achievements of large-scale pre-trained language models (LLMs) have brought about a revolutionary transformation in the field of natural language processing (NLP). These models have demonstrated significant enhancements in various NLP applications, such as machine translation, summarization, and code generation. However, it is important to note that the quality of generated outputs can exhibit considerable variability. Although individual generations sampled from the models often yield high-quality results, multiple samplings can produce certain generations of substantially higher quality than the average output of the model.
Several approaches can be employed to address this issue. One strategy involves improving the underlying models themselves. This can be achieved by enriching the training dataset with higher quality generations, as suggested by recent studies <cit.>. Another approach is to keep the underlying model unchanged while employing a process known as reranking to prioritize and select highly ranked generations based on specific criteria <cit.>. However, most reranking techniques involve computationally intensive or cumbersome methods to calculate the selection criterion, such as training an auxiliary model as a reranker or evaluating the probability of the query given the generated answer – the latter of which doubles the inference cost. In the case of code generation models, it can also involve executing the generated code on unit tests, which can get quite complex and may not be feasible for many use cases, especially beyond the contest-coding setting.
Recently, for the special case of problems that have a fixed answer, a simple approach called self-consistency was suggested for selecting the best answer from multiple generations <cit.>. The authors sample multiple generations from the LLM, extract the predicted answer from each generation, and select the answer with the highest number of votes, achieving substantial improvements over existing baselines, including the widely used approach of ranking generations by their mean log probability. However, the self-consistency approach is not applicable to prompts that are open-ended and do not have fixed answers. This limitation becomes particularly relevant in scenarios such as code generation, where multiple implementations of the same function may be valid, or in open-ended text generation, where multiple phrasings can convey the same meaning. The self-consistency method relies on the presence of a clear majority or consensus among the generated answers, which may not exist in these open-ended situations. Consequently, alternative strategies are needed to address the challenges posed by such prompt types.
An alternative perspective on self-consistency can be achieved by considering a similarity function that compares different generations and reranks them based on their average similarity to other generations. In the case of self-consistency, the similarity function takes a binary form, indicating whether the answers of two generations are identical. However, this binary similarity function is limited to prompts that have fixed answers. In this work, we develop a framework that formally defines the concept of an optimal generation. We demonstrate that the aforementioned viewpoint on self-consistency allows us to identify the optimal generation within a given set. In simulated scenarios, we provide evidence that our framework is capable of recovering the best or near-best generation in many cases. Additionally, we develop lightweight similarity functions that are suitable for open-ended generation tasks, thereby expanding the applicability of self-consistency to such domains. Furthermore, we demonstrate that the reranking methods utilized in previous works <cit.> can also be understood within the same conceptual framework.
Concretely, our contributions are as follows
* We propose a generalized framework for self-consistency that extends its applicability beyond prompts with fixed answers. By formally defining the concept of an optimal generation for open-ended generations, we are able to show that our framework is capable of recovering the optimal generation if it exists within the set of generations. Through extensive simulations, we show that our approach consistently identifies the best or near-best generation from a set of candidate generations.
* We introduce multiple lightweight similarity functions that require no additional parameters. These functions are evaluated across various tasks, including code generation, autoformalization, and summarization, using six different models. Our evaluations reveal consistent and significant improvements over baseline methods. Notably, one of our similarity functions only relies on the raw generations from the model, making it particularly relevant in situations where token log probabilities are not accessible, as is the case with certain proprietary models like OpenAI's chat models' API.
* Leveraging the pairwise similarity nature of our reranking scheme, we enhance model generation for code generation tasks, particularly when the evaluation metric is pass@k for k>1. This enables us to improve the overall quality of generated code.
* To gain insights into the effectiveness of our similarity function, we conduct various ablation experiments. These experiments allow us to analyze and understand the underlying mechanisms that contribute to the success of our approach.
The rest of the paper is organized as follows. In Section <ref> we present our motivation. In Section <ref> we present our method and the similarity function. In Section <ref>, we present and discuss our experimental results. In Section <ref>, we describe the related work and we finally conclude in Section <ref>.
§ MOTIVATION
What constitutes a good generation? In the context of code generation, for instance, one metric might be the number of passed unit tests, which provides an approximation of correctness. Additional criteria could be the readability or the computational optimality of the code, as judged by human evaluators. For language tasks, we could similarly establish a set of predicates to evaluate output quality, such as fluency, avoidance of content hallucination, or correctness of responses to questions with fixed answers.
To formalize this, imagine a vector 𝐯 of length k, where each element represents a categorical variable. We also have n people who each hold a personal estimate 𝐮_i of 𝐯. The only accessible information we have is the pairwise fractional agreement, denoted as a(𝐮_i, 𝐮_𝐣) = 1/k∑_t=1^k 𝕀(𝐮^t_i = 𝐮^t_j) ∀ i, j ∈ [1, n] where i indexes the generations and t the predicates. Our aim is to identify a person i such that a(𝐮_i, 𝐯) is maximized. In this formulation, the elements of 𝐯 correspond to the different predicates we wish the generation to satisfy, while the people correspond to different generations.
We only assume access to the pairwise fractional agreement rather than to the underlying estimates or to a(𝐮_i, 𝐯), as we may not have the resources or knowledge to evaluate any or all of the predicates on the different generations at inference time. For example, in the context of code, we may not have the ability to execute unit tests – either due to latency or security constraints, a lack of an appropriate build system, or the unavailability of unit tests. For predicates like content hallucination, human evaluation might be required, which is not generally feasible at inference time. Consequently, we must resort to approximating the agreement using proxy similarity functions.
We introduce a self-consistency assumption stating that, for each individual predicate, the most frequent response is assumed to be correct. Formally, if 𝐯^l can take on m_l values 1,…,m_l and, without loss of generality, 𝐯^l = 1, then 1 = argmax_j ∑_i=1^n 𝕀(𝐮^l_i = j).
Given this problem formulation and selection criterion, we can establish the following:
For k = 1, we always recover the best 𝐮. However for k > 1, it is not guaranteed.
Moreover:
If there exists 𝐮_b = 𝐯, then b = argmax_i 1/(n-1)∑_j ≠ i a(𝐮_i, 𝐮_j).
Informally this says that if a generation g exists such that its predicate vector perfectly aligns with the target vector, selecting the generation with the highest average fractional agreement with other generations will pick g.
Now if we assume that 𝐮^j_i are iid from Bernoulli(p_j), then we can show that
∑_j=1^k p_j - √(k log k/2) ≤𝔼[∑_j=1^k 𝐮^j_b] ≤∑_j=1^k p_j + √(k log k/2)
where 𝐮_b denotes the sequence selected by our method.
All proofs for these theorems are presented in the Supplement. To further substantiate our selection criterion—picking the generation with the highest average fractional agreement with all other generations—we conducted a simulation. Here, we randomly generated 𝐯 and the 𝐮_i and evaluated the optimality of our criterion. Our findings suggest that for various values of k, our method successfully recovers the best generation the majority of the time, significantly outperforming random selection. Moreover, on average, the generation we recover demonstrates nearly 100% agreement with the best generation, even in cases where we do not select the best generation. The full details are in the Supplement.
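A minimal version of this simulation is sketched below (our own illustration, not the exact protocol behind the reported numbers): we draw a target vector 𝐯, draw noisy estimates 𝐮_i, and compare the generation with the highest average pairwise fractional agreement against the best achievable one.

import numpy as np

rng = np.random.default_rng(0)

def trial(n=25, k=10, n_categories=3, noise=0.3):
    v = rng.integers(0, n_categories, size=k)
    # each u_i copies v but each element is resampled with probability `noise`
    flip = rng.random((n, k)) < noise
    u = np.where(flip, rng.integers(0, n_categories, size=(n, k)), v)
    agree_with_v = (u == v).mean(axis=1)
    # pairwise fractional agreement a(u_i, u_j), averaged over j != i
    pair = (u[:, None, :] == u[None, :, :]).mean(axis=2)
    np.fill_diagonal(pair, 0.0)
    score = pair.sum(axis=1) / (n - 1)
    return agree_with_v[score.argmax()], agree_with_v.max()

picked, best = zip(*(trial() for _ in range(200)))
print("picked vs. best agreement with v:", np.mean(picked), np.mean(best))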
§ METHOD
As previously mentioned, we may not have the capability to compute predicates at inference time, thereby rendering the computation of the exact fractional agreement with 𝐯, i.e. a(𝐮, 𝐯), unattainable. As a result, we need to resort to a surrogate similarity function. To this end, we define a generalized self-consistency score GSC_Sim(i) for each generation i, given by 1/(M-1)∑_j=1, j≠ i^M Sim(i, j). Here, Sim denotes the similarity function, and M represents the number of generations.
For generations with fixed answers, if we define
Sim(i, j) = 𝕀(answer(i) = answer(j)),
this is equivalent to the self-consistency criterion. Two other reranking methods – MBR-Exec <cit.> and AlphaCode <cit.> – can be viewed in terms of the same formulation, with the difference being the similarity function. MBR-Exec executes model-generated code. It then assigns a similarity score of 1 if a pair of programs agree on all unit tests and 0 otherwise[They define it in terms of a loss function, but taking 1 minus the loss function is equivalent to the similarity function we describe above]. For each program, they sum the similarity against all other programs and pick the program with the highest total similarity. Similarly, AlphaCode clusters its generated programs by executing them on test cases and selects a program from the largest cluster – with two programs clustered together if they agree on all test cases. This is conceptually equivalent to what MBR-Exec does. We give further evidence that this is a useful way to frame self-consistency by evaluating another, OpenAI Ada embedding based, similarity function (Section <ref> in the Supplement). While its performance is promising, the similarity function is considerably more heavyweight, requiring a separate embedding model, so we chose not to explore it further.
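As a concrete illustration (a toy sketch of ours, not code from any of the cited systems), the fixed-answer case can be written directly in this form: with the exact-match similarity, the generation whose extracted answer is most frequent attains the highest GSC score, so the selection reduces to the usual self-consistency majority vote.

from collections import Counter

def gsc_select(answers):
    # GSC with Sim(i, j) = 1[answer_i == answer_j]
    M = len(answers)
    scores = [sum(a == b for j, b in enumerate(answers) if j != i) / (M - 1)
              for i, a in enumerate(answers)]
    return answers[max(range(M), key=scores.__getitem__)]

answers = ["42", "41", "42", "42", "7"]   # answers extracted from 5 generations
assert gsc_select(answers) == Counter(answers).most_common(1)[0][0]
print(gsc_select(answers))                # "42"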
One straightforward way to encode a generation is by using a binary vector that denotes the presence or absence of an n-gram. Surprisingly, we find this simple encoding to be sufficient for defining a robust similarity function. For open-ended generation, we define our similarity function as follows. For each generation we define a vector 𝐯 of size |V|, where V is the set of all possible n-grams for n=1 to n=K, with K a hyperparameter. For the experiments in this paper, we simply use K=1. We show in Section <ref> that increasing K can be helpful, though only up to a point. Each element i of 𝐯 simply indicates whether token i is present in the generation or not. We then take the inner product between two such vectors as the similarity. We call this the N-gram Consistency Score (NCS) and refer to the K=1 version as the Unigram Consistency Score (UCS). Formally,
UCS(i, j) = 1/|V|𝐯_i·𝐯_j
where
𝐯^j_i = 𝕀(t_j ∈ g_i)
where t_j is the jth token and g_i the ith generation. This definition only requires model generations and incurs minimal computational overhead – we only need to compute the unigram overlap instead of training an auxiliary model, running generated programs, or performing additional inferences using the same model (which will increase compute cost as well as latency). Notably, we don't normalize the inner product by the norm of the vectors. This is a deliberate design choice that encourages more diverse sequences, in response to known issues of neural generation models producing degenerate and repetitive sequences <cit.>. We delve into this topic in Section <ref> in the Supplement.
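The UCS computation itself is only a few lines; the sketch below is our own illustration and uses whitespace tokenization purely for readability, whereas in practice the model's own tokens would be used.

def ucs_rerank(generations):
    # binary bag-of-unigram vectors over the vocabulary of the sampled set
    token_sets = [set(g.split()) for g in generations]
    vocab = sorted(set().union(*token_sets))
    vecs = [[1.0 if t in s else 0.0 for t in vocab] for s in token_sets]
    M, V = len(vecs), len(vocab)

    def sim(i, j):
        # unnormalized inner product, scaled by |V|
        return sum(a * b for a, b in zip(vecs[i], vecs[j])) / V

    gsc = [sum(sim(i, j) for j in range(M) if j != i) / (M - 1) for i in range(M)]
    return max(range(M), key=gsc.__getitem__)

gens = ["def add(a, b): return a + b",
        "def add(x, y): return x + y",
        "def add(a, b): return a - b"]
print(ucs_rerank(gens))  # index of the generation most consistent with the rest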
When token probabilities are available, we can leverage them to improve our approach. Intuitively, if a generation has a low token probability for the generated token, then finding a match for that token should count for less. In accordance with this intuition, we introduce two further variants. First, we modify the definition of 𝐯 as follows:
𝐯^j_i = (1/c^i_j) ∑_k=1^c^i_j p(t_j^i,k) if t_j ∈ g_i, and 𝐯^j_i = 0 otherwise,
where c^i_j is the number of times token t_j appears in generation i and p(t_j^i,k) is the token probability of the kth appearance of token t_j in generation i. We call this the weighted n-gram consistency score (WUCS).
The mean log probability of a sequence is an oft-used ranking method. We can combine it with WUCS by further weighting each generation by its per-token probability as follows – for a generation i, Consensus-WUCS = WUCS· e^(1/|g_i|)·log p(g_i), where |g_i| is the length of generation i and p(g_i) is its sequence probability.
Finally, to rank the generations, we employ argmax_i GSC_Sim(i), where Sim can take the form of UCS, WUCS, or Consensus-WUCS.
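A sketch of the probability-weighted variants is given below (again our own illustration, assuming each generation is available as a list of tokens with per-token log probabilities, e.g. as returned by a sampling API). Passing consensus=False gives WUCS; consensus=True additionally multiplies by the exponentiated mean log probability, i.e. Consensus-WUCS.

import math
from collections import defaultdict

def wucs_vectors(gens):
    # gens: list of (tokens, token_logprobs) pairs
    vocab = sorted({t for toks, _ in gens for t in toks})
    vecs = []
    for toks, lps in gens:
        probs = defaultdict(list)
        for t, lp in zip(toks, lps):
            probs[t].append(math.exp(lp))
        # v_j = average token probability over the occurrences of token j
        vecs.append([sum(probs[t]) / len(probs[t]) if t in probs else 0.0
                     for t in vocab])
    return vecs, len(vocab)

def rerank(gens, consensus=False):
    vecs, V = wucs_vectors(gens)
    M = len(gens)

    def sim(i, j):
        return sum(a * b for a, b in zip(vecs[i], vecs[j])) / V

    scores = []
    for i, (toks, lps) in enumerate(gens):
        wucs = sum(sim(i, j) for j in range(M) if j != i) / (M - 1)
        if consensus:  # weight by exp(mean log prob) of the generation
            wucs *= math.exp(sum(lps) / len(lps))
        scores.append(wucs)
    return max(range(M), key=scores.__getitem__)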
§.§ Extending to ranked pass@k
A common evaluation metric for code generation problems is ranked pass@k wherein we assess whether any program among the top k selected programs (selected from a larger set) can pass all the given unit tests for that problem. Typically, the top k generations are selected based on a predetermined ranking. However, with our similarity-based metric, we can apply a more nuanced approach.
For a particular problem, if the highest-ranked generation for a specific prompt is correct, we have already succeeded. We would only need to utilize the remaining generations in our k-budget if the top-ranked generation does not pass some unit test case. In this event, we could consider the top-ranked generation as a hard negative and select the next generation that exhibits lower similarity to the top-ranked generation.
More specifically, if we have selected a set of programs S_k' so far (|S_k'| = k' < k), then we modify the GSC function to select the (k'+1)th item. In particular, we compute
GSC^ranked_Sim(i) = 1/(M-1)(∑_j ∉ S_k' Sim(i, j) - ∑_j ∈ S_k' Sim(i, j))
Note that for k=1, GSC and GSC^ranked are equivalent. We demonstrate in Section <ref> that GSC^ranked_Sim performs significantly better in ranking for pass@k with k > 1 than the raw GSC. This approach leads to a more efficient utilization of the ranked generations, improving the overall effectiveness of the code generation task.
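The selection loop for ranked pass@k can be sketched as follows (our illustration; `sim` is any precomputed pairwise similarity matrix, such as the UCS inner products, and M > 1 is assumed).

def ranked_topk(sim, k):
    M = len(sim)
    selected = []
    remaining = set(range(M))
    while remaining and len(selected) < k:
        def score(i):
            # GSC^ranked: reward similarity to not-yet-selected generations,
            # penalize similarity to already-selected ones (hard negatives)
            pos = sum(sim[i][j] for j in remaining if j != i)
            neg = sum(sim[i][j] for j in selected)
            return (pos - neg) / (M - 1)
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# e.g. with a hypothetical 3x3 similarity matrix:
# ranked_topk([[1.0, 0.6, 0.2], [0.6, 1.0, 0.3], [0.2, 0.3, 1.0]], k=2)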
§ RESULTS
We conduct experiments utilizing the Codex family of models, specifically Codex-davinci-001, Codex-davinci-002, and Codex-Cushman, as well as the Llama family of models. In addition, we also evaluate GPT-J on Xsum and MiniF2F. We evaluate these models on a range of code generation datasets – in particular HumanEval <cit.>, MBPP, and MBPP-sanitized <cit.>. For the autoformalization of MiniF2F to Isabelle, we use the dataset provided by <cit.>. For text summarization, we utilize the Xsum dataset <cit.>.
Our primary evaluation metric for code generation is ranked pass@1, where we rerank a sampled set of generations and assess whether the top-ranked generation successfully passes all unit tests. We also evaluate ranked pass@k for k>1. For the MiniF2F autoformalization task, we measure quality using the BLEU score, following <cit.>. For Xsum we use the Rouge-2 and Rouge-L scores for evaluation. For all code generation datasets, we sample 125 generations from the models, which serve as our dataset for the different experiments.
For MiniF2F and Xsum, we sample 50 generations from the model. Unless otherwise specified, for all experiments, we use the Codex-davinci-002 model. Following <cit.>, we perform bootstrap sampling 50 times with a sample size of 25 to generate the results.
Our baselines are Random selection, Ranking by mean log probability, Ranking using the Centroid in our confidence-weighted unigram space, and, for code generation, ranking using the Coder Reviewer Ranker method <cit.>. A full description of the datasets, experiments, and baselines is in the Supplement.
§.§ UCS scores are higher for correct answers
As a sanity check, we first evaluate whether the UCS scores are indeed higher for the correct generations.[We used the generations provided by <cit.> as part of their Supplementary Material.] The results are in Table <ref> in the Supplement. The ratios are consistently >1 for all models except for the UL2-20B model, for which they still remain very close to 1.
§.§ UCS shows strong improvements for Code Generation
As showcased in Tables <ref> and <ref>, the application of the UCS, WUCS, and Consensus-WUCS methods leads to substantial improvements in the accuracy as well as mean reciprocal rank of code generation across various models and datasets.
In the HumanEval dataset, UCS variants consistently outperform the traditional methods, namely Random and mean log probability. For instance, the Codex002 model exhibits a substantial accuracy improvement from 0.435 (Random) to 0.568 (Consensus-WUCS). Even the less performing models, such as Llama-13B and Llama-30B, exhibit noticeable accuracy gains when our proposed methods are employed.
Similar trends are observed in the MBPP-S and MBPP datasets. UCS, WUCS, and Consensus-WUCS consistently improve the accuracy across all models. Specifically, the Consensus-WUCS method consistently dominates Random and mean log probability ranking in all categories, and almost always outperforms WUCS as well. Of particular note is the performance of WUCS, which surpasses the mean log probability method in every model and dataset combination. In fact, it is the best method for all dataset and model combinations except the Llama-13B model on MBPP and MBPP-S. UCS, which does not require token probabilities and relies only on the generations, also demonstrates a consistent superiority over random reranking.
Consensus-WUCS and WUCS are also almost always better than the Centroid-based approach, with Consensus-WUCS outperforming it in 13/15 cases. A discussion of the mean reciprocal rank performance is deferred to the Supplement, but the trend is similar.
§.§ UCS shows consistent improvements for open-ended generation
The performance of UCS, WUCS, and Consensus-WUCS on the Xsum and MiniF2F datasets demonstrates the versatility and efficacy of our proposed methods across varied tasks and models. The results, shown in Table <ref>, paint a promising picture for these reranking techniques.
In the case of the MiniF2F dataset, evaluated using the BLEU metric, Consensus-WUCS outperforms all other methods for the Codex002 model except for Centroid. For the Llama-13B, Llama-30B, and GPT-J models, the top performers are closely matched, with Consensus-WUCS, WUCS, and UCS all delivering competitive scores.
Turning to the Xsum dataset, we see a similar trend. For the Rouge-2 metric, Consensus-WUCS achieves the highest score for the Codex002 and both Llama models, and ties for the best score with WUCS for the Llama-13B model. For the GPT-J model, UCS performs slightly better than WUCS and Consensus-WUCS. Nonetheless, all these methods surpass the Random and Mean-logp reranking methods and almost always surpass Centroid.
With the Rouge-L metric, UCS variants show the best performance for all models except Codex002. For the Llama-30B model, WUCS and Consensus-WUCS share the top spot, while UCS achieves the best score for the GPT-J model. Once again, these methods generally outperform the Centroid, Random, and Mean-logp reranking methods.
In total, Consensus-WUCS gets the top spot in 5/12 comparisons, WUCS in 4/12, UCS in 2/12, and Centroid in 5/12 primarily due to MiniF2F.
§.§ UCS variants are competitive with Code Reviewer Ranker
The comparison with the Code Reviewer Ranker baseline, specifically with the Normalized Reviewer (NR) and Normalized Coder-Reviewer (NCR) variants, is in Table <ref> (Supplement). As the state of the art in code reranking, these methods represent a strong baseline.
Our results demonstrate that the WUCS and Consensus-WUCS methods are highly competitive, often outperforming both NR and NCR, despite the fact that NR and NCR require a second forward pass, which doubles the inference cost and adds latency overhead. A fuller discussion of the results is in the Supplement.
§.§ Improvements are consistent across different generation temperatures
In Figure <ref> (Supplement) we show how UCS reranking behaves for MBPP as the decoding sampling temperature increases. While accuracy can vary across temperatures, the relative ranking of the different methods remains consistent. Consensus-WUCS dominates in terms of accuracy for most of the temperature range, up to a temperature of 1. Importantly, for the lower temperatures where we get the best results, both Consensus-WUCS and WUCS achieve the best accuracy. While UCS alone is on par with mean log-probability ranking only up to a temperature of 0.4, after which it falls behind, we note that UCS does not use any probability information about the generation; a fairer comparison is therefore to random ranking, which UCS consistently beats for almost the entire temperature range.
§.§ Varying the maximum n-gram length does not change results
As mentioned in Section <ref>, UCS only considers unigrams. Here we consider the N-gram Consistency Score – the more general version. To account for the fact that a sequence has fewer n-grams as n increases, we multiply p(t_j^i,k) by |g_i|/(|g_i|-(|t_j^i,k|-1)), where t_j^i,k is now the kth appearance of the jth n-gram in the ith generation. In Figure <ref> (Supplement), we show how the ranking behaves as n increases. As can be seen, while there is a slight improvement going from n=1 to n=4, the improvement flattens after that point. 4-grams are also what is conventionally used when computing BLEU score, so it is interesting that the same value ends up being optimal in the drastically different setting of code generation, where the units are code tokens rather than English words.
§.§ Increasing number of samples maintains reranking strength
In Figure <ref> (Supplement), we show how the performance changes for MBPP and Xsum as the number of samples increases. All variants of UCS are able to maintain accuracy (although Consensus-WUCS sees a drop in the beginning for Xsum but maintains its performance subsequently) even as the number of samples increases from 5 to 100. Meanwhile, the mean log probability ranking drastically declines in terms of accuracy, quickly falling below even random selection. This is likely due to the tendency of mean log probability ranking to choose degenerate sequences <cit.> which UCS variants seem to be able to avoid.
§.§ GSC^ranked comparison
In Figure <ref>, we show how the model performance changes as k for pass@k increases. We compare GSC vs GSC^ranked. While the performance of GSC declines quickly, GSC^ranked maintains good performance even at larger values of k for all code generation datasets.
§ RELATED WORK
§.§ Advanced decoding
There are also several advanced decoding methods for improving model generation quality. In <cit.>, the log probabilities of tokens are adjusted depending on their similarity to previously chosen words. However, this method requires the generations to be produced jointly, with coordination. This can cause significant infrastructure and latency overhead if the model itself is distributed across multiple GPUs or machines, which is far from unusual for LLMs. In <cit.>, an auxiliary LLM is used to contrast the token probabilities against the primary LLM, and these are used for decoding. In <cit.>, a degeneration penalty is used to penalize the generation of already generated tokens. A significant caveat of all such methods is that they require access to the decoding procedure of the LLM – one cannot simply take the generations from an LLM API. Our approach also bears some similarity to <cit.>, where a consensus hypothesis is computed by multiple alignment of sentence lattices.
§.§ Auxiliary reranker
In <cit.>, a perceptron-based reranker is used to rerank model-generated translations. SummaReranker <cit.> uses mixture-of-experts training to train a reranker that optimizes for multiple automated evaluation metrics (like ROUGE or BLEU score) at once. PairReranker <cit.> uses automated evaluation metrics to rank model generations, then selects the best and worst few and trains a model to classify the better summary within a pair of summaries. All of these reranking methods, however, require training an auxiliary model.
§.§ Code generation reranking
There have also been multiple reranking proposals for code generation in particular. A unique characteristic of code (as opposed to text) is that code can be executed. Several methods have therefore tried to exploit that property for reranking. MBR-Exec <cit.> and AlphaCode <cit.> both execute the generated programs on unit tests. They rank the different programs according to how many other programs are semantically equivalent to them (i.e. have the same results on the given unit tests). CodeT <cit.> uses LLMs to generate both code and candidate unit tests; it then finds sets of generated programs such that the product of the size of the set and the size of the unit-test set the programs agree on is maximized. More recently, Coder-Reviewer Ranker <cit.> applies the well-known Maximum Mutual Information objective <cit.> to code-generating LLMs by using the strong few-shot and zero-shot prompting capabilities of LLMs to obtain the query likelihood.
§ CONCLUSION AND FUTURE WORK
We analyze the self-consistency method for problems that have fixed answers and develop a framework to extend it to open-ended generations. We establish connections between our framework and other code generation reranking functions, and prove that if the optimal generation is present in our generation set, we can always recover it; we also prove bounds on how close we can get to the optimal generation under certain settings.
Our simulated tests reveal our ability to consistently recover the best, or close to the best, possible generation in the set. We introduce several lightweight similarity functions and show that they give strong and consistent improvements over state-of-the-art baselines. Notably, our Unigram Consistency Score (UCS) function, the most minimal of our similarity functions, requires only access to raw generations to effectively rerank. We show that the UCS variants uniformly enhance the performance of code and text generation and are competitive with strong baselines like Coder Reviewer Reranker, despite the latter requiring far more compute resources and time. For code generation, we also leverage the fact that our reranking metric is based on pairwise similarity to improve performance at pass@k for k > 1. Additionally, we conduct multiple variations of our primary experiments to ascertain the robustness and reliability of our performance.
§ BROADER IMPACT AND LIMITATIONS
As a paper that tries to improve the performance of Large Language Models (LLMs), it inherits the risks and rewards of LLMs in general. LLMs have shown themselves highly relevant and useful for a number of tasks, code generation in particular. Our method shows particularly strong improvements for that task and thus, we hope, will have a broad impact. Nevertheless, we did not evaluate whether our method increases the propensity to select biased or toxic generations; we leave this to future work.
§ ACKNOWLEDGEMENTS
We would like to thank Danica Sutherland and Ameya Velingker for useful discussions regarding the project.
§ SUPPLEMENTARY MATERIAL
§.§ Codex-001/Codex-Cushman results on Xsum/MiniF2F
Unfortunately due to the unexpected shutdown of the OpenAI API, we were unable to obtain results for Codex-001 and Codex-Cushman on the Xsum and MiniF2F datasets.
§.§ Proofs
§.§.§ Proof of Theorem 2.1
This is true by definition for k=1. For k>1, let us assume that the number of categories is L = 3. If the best generation g agrees with 𝐯 on only one of the elements then, without loss of generality, let that be the first one. Then the agreement score is (p_1 + p'_2)/2, where p'_2 < p_2. Let the agreement score for a generation g' that does not agree at all with 𝐯 be (p'_1 + p”_2)/2. However, if for example p_1 = 0.34, p'_1 = 0.32, p'_2 = 0.01, p”_2 = 0.32, then g' will be selected over g.
§.§.§ Proof of Theorem 2.2
It is true by assumption for k=1. Assume it is true for k=t; by the self-consistency assumption, this means that a_t(𝐮_b, 𝐯) is the highest possible, where a_t denotes the agreement up to k=t. For t+1, we know that ∑_i ≠ b𝕀(𝐮_b^t+1 = 𝐮_i^t+1) is the highest (again by the self-consistency assumption). Thus a_t+1 is also the highest, proving the theorem.
§.§.§ Proof of Theorem 2.3
Formally, let 𝐮^j_i ∼ Bernoulli(p_j). Let b = arg max_i ∑_j p_j·𝐮^j_i (i.e., the index of the sequence selected by our method). Then we want a bound on 𝔼[∑_j=1^k 𝐮_b^j].
Let q_i = ∑_j 𝐮^j_i. As all are i.i.d., 𝔼[q_i] = ∑_j p_j. We can upper bound the quantity of interest by upper bounding 𝔼[max_i q_i]. Note that 𝐮^j_i is subgaussian with parameter 1/2, as it is bounded in [0, 1]. Thus q_i is subgaussian with parameter √(k)/2, which gives 𝔼[max_i q_i - 𝔼[q_i]] ≤√(k log k/2) and hence 𝔼[max_i q_i] ≤∑_j p_j + √(k log k/2).
Negating 𝐮, we obtain the lower bound 𝔼[max_i q_i] ≥∑_j p_j - √(k log k/2) <cit.>.
§.§ Simulation results
We set up our simulation as follows. Let d be the number of predicates, n the number of generations, and l the number of categories. For each predicate we sample a categorical distribution uniformly at random and then generate each 𝐮_i from that distribution. We then apply our criterion of picking the 𝐮_b that has the highest average fractional agreement with all other 𝐮_i and measure (1) the % of times we are able to retrieve the generation that has the best agreement with 𝐯, and (2) the % agreement 𝐮_b has with the best possible generation in the set. We vary d and l between 2 and 50, and n between 25 and 250. All our results are based on 1000 samples. The results are in Figures <ref> and <ref>.
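The following is a minimal sketch of this simulation; how the reference 𝐯 is drawn and how the second metric is normalized are assumptions on our part and may differ from the exact setup used for the figures.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(d=10, n=50, l=3, samples=1000):
    """Estimate (1) how often the average-agreement pick u_b coincides with the
    generation agreeing best with v, and (2) the relative agreement of u_b with
    that best generation. v is assumed to be drawn from the same distributions."""
    hits, rel = 0, 0.0
    for _ in range(samples):
        probs = rng.dirichlet(np.ones(l), size=d)   # one categorical distribution per predicate
        U = np.array([[rng.choice(l, p=probs[j]) for j in range(d)] for _ in range(n)])
        v = np.array([rng.choice(l, p=probs[j]) for j in range(d)])
        agree = (U[:, None, :] == U[None, :, :]).mean(axis=2)   # pairwise fractional agreement
        np.fill_diagonal(agree, 0.0)
        b = agree.sum(axis=1).argmax()              # our selection criterion
        agree_v = (U == v).mean(axis=1)
        best = agree_v.argmax()
        hits += int(agree_v[b] == agree_v[best])
        rel += agree_v[b] / max(agree_v[best], 1e-12)
    return hits / samples, rel / samples

print(simulate(samples=200))
```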
For the first metric, we are able to retrieve the best generation a very high fraction of the time when l < 5, even when d grows large. Even when l is larger, we are still able to retrieve the best generation a non-trivial fraction of the time – and, notably, our performance does not degrade much as n goes from 25 to 250.
Turning our attention to the second metric, we are able to consistently get a generation close to the best generation. This is especially true for small l where even when d increases to large values, we are able to get close to 100% agreement with the best generation. Even at high values of l however, we get relatively good agreement with the best generation – especially compared to picking a random generation – a heuristic we consistently beat.
§.§ Experimental baselines
As mentioned earlier, we could not obtain Codex-001 and Codex-Cushman results on Xsum and MiniF2F due to the unexpected API shutdown. For the BLEU and Rouge-2 metrics, we report the values divided by 100. In terms of our baselines, we have
* Random selection - we randomly select a generation from the set of generations
* Ranking by mean log probability - we take the average log probability across the tokens in the generation and select the generation with the highest mean log probability
* Ranking using Centroid - we take the generation with the lowest mean distance to all other generations in the confidence-weighted unigram space used by WUCS (a sketch of this and the mean log probability baseline follows this list).
* Coder Reviewer Ranker - This method has two variants – Normalized Reviewer (NR) and Normalized Coder Reviewer (NCR). NR computes the mean per-token log p(x|y), where y is the generation and x is the prompt, and ranks based on this metric. NCR merges the mean log probability ranking with NR, ranking according to log p(x|y) + log p(y|x). As the state of the art in code reranking, these methods represent a strong baseline.
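A minimal sketch of the mean log probability and centroid baselines is given below; it assumes that per-token log-probabilities and the confidence-weighted unigram vectors used by WUCS are computed elsewhere.

```python
import numpy as np

def rank_mean_logprob(token_logprobs):
    """token_logprobs: one 1-D array of per-token log-probabilities per generation.
    Returns indices ranked best-first by mean log probability."""
    means = [float(np.mean(lp)) for lp in token_logprobs]
    return list(np.argsort(means)[::-1])

def rank_centroid(unigram_vectors):
    """unigram_vectors: (n_generations, vocab) array of confidence-weighted
    unigram vectors (the same space WUCS operates in). Ranks generations by
    lowest mean Euclidean distance to all other generations."""
    V = np.asarray(unigram_vectors, dtype=float)
    dists = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=2)
    mean_dist = dists.sum(axis=1) / (len(V) - 1)   # diagonal entries are zero
    return list(np.argsort(mean_dist))             # smallest mean distance first
```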
§.§ Comparison with Coder-Reviewer Ranker
The results are in Table <ref>. Consensus-WUCS consistently outperforms NR and often surpasses NCR as well.
In the HumanEval dataset, Consensus-WUCS yields the highest accuracy for the Llama-13B and Llama-30B models. Similarly, in the MBPP-S dataset, Consensus-WUCS delivers superior performance for the Llama-13B and Llama-30B models, and closely matches the NCR for Codex models. In the MBPP dataset, the Consensus-WUCS method ranks as the best for Code-Cushman, Llama-13B, and Llama-30B models.
Notably in 40% of the experiments (6 out of 15), Consensus-WUCS outperforms all other methods, including the highly competitive NCR. Furthermore, Consensus-WUCS ranks second in 8 out of the 15 experiments, reinforcing its strong performance across diverse models and datasets.
Our results present evidence of the effectiveness of WUCS and Consensus-WUCS, which hold their own against much more heavyweight state-of-the-art methods and frequently deliver superior performance.
§.§ Ada model embeddings also give a boost
To understand how generalizable the intuition behind the GCS metric (as opposed to the UCS metric) is to other similarity functions, we took the generations and used the text-embedding-ada-002 model by OpenAI to generate embedding vectors for the generations. We then used cosine similarity between the generations as the similarity function and used GCS_Cosine Similarity to rank. The results are in Table <ref>. Using OpenAI embeddings also results in improved performance over Random selection as well as mean log probability ranking, validating our intuition that choosing the generation that is, on average, the most similar to all other generations is a good ranking metric. That said, this particular similarity function underperforms UCS, especially for code generation, so we did not investigate it further.
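Given precomputed embeddings (the embedding API call itself is not shown), the ranking step can be sketched as follows.

```python
import numpy as np

def gcs_cosine_rank(embeddings):
    """embeddings: (n, d) array with one embedding per generation.
    Ranks generations by average cosine similarity to all other generations."""
    E = np.asarray(embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = E @ E.T                       # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)
    avg_sim = sims.sum(axis=1) / (len(E) - 1)
    return list(np.argsort(avg_sim)[::-1])   # most "central" generation first
```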
§.§ Normalizing inner product degrades performance
Neural generation models are well known to generate repetitive sequences <cit.>. In <cit.>, the standard log-likelihood objective for language models is modified to minimize the probability of tokens immediately preceding the current token. This effectively pushes the model to generate unique new tokens, and they show significant improvements in their model after doing so. If we normalized the inner product, we would effectively be "canceling out" the contribution to the similarity score of having more unique tokens.
We evaluated the effect of normalizing the inner product by the vector norms. To understand better whether our performance is just an effect of selecting longer and more diverse sequences, or whether the similarity metric itself is useful as well, we ran ablations where we evaluated ranking based on the longest sequence, as well as ranking based on the mean across the elements of 𝐯_i as defined in Section <ref>, which takes the sequence diversity into account. The results are in Table <ref> in the Supplement. Normalization results in a decline in performance. Furthermore, neither ranking by the longest sequence nor ranking by sequence diversity is sufficient to explain the results we see, as neither yields a consistent improvement even against the Random selection baseline.
arXiv:2307.04933v1 [math.CO] (10 July 2023); also listed under math.AC. MSC 2020: Primary 52B40; Secondary 52B20, 05B35, 13P10.
On a generalization of symmetric edge polytopes to regular matroids
Alessio D'Alì, Martina Juhnke-Kubitzke, Melissa Koch
Starting from any finite simple graph, one can build a reflexive polytope known as a symmetric edge polytope. The first goal of this paper is to show that symmetric edge polytopes are intrinsically matroidal objects: more precisely, we prove that two symmetric edge polytopes are unimodularly equivalent precisely when they share the same graphical matroid. The second goal is to show that one can construct a generalized symmetric edge polytope starting from every regular matroid. Just like in the usual case, we are able to find combinatorial ways to describe the facets and an explicit regular unimodular triangulation of any such polytope. Finally, we show that the Ehrhart theory of the polar of a given generalized symmetric edge polytope is tightly linked to the structure of the lattice of flows of the dual regular matroid.
§ INTRODUCTION
Symmetric edge polytopes are a class of centrally symmetric reflexive lattice polytopes which has seen a lot of interest in the last few years due to their fascinating combinatorial properties <cit.> and their connections to various branches of mathematics and physics <cit.>.
Given a finite simple graph G on vertex set V = [n] := {1, 2, …, n}, the symmetric edge polytope associated with G is the lattice polytope
_G := conv{±(_i - _j) |{i,j}∈ E(G)}⊆^|V|,
where _i denotes the i-th standard basis vector.
Equivalently, after assigning an arbitrary orientation to each of the edges of G, one has that
_G = conv([M_G | -M_G]),
where M_G ∈^|V|×|E| is the signed incidence matrix of G with respect to the chosen orientation (i.e., the matrix whose (v,e)-entry is 1 if v is the head of e, -1 if v is the tail of e, and 0 otherwise). The matrix M_G also serves as a representation of the graphic matroid _G associated with G. Several objects associated with _G, for instance the facets or some triangulations, can be described via the combinatorial features of the graph G, and one can rephrase many of these characterizations in terms of the matroid _G only. This is not by accident; in fact, we will prove in <Ref> that two symmetric edge polytopes _G and _H are unimodularly equivalent precisely when the graphical matroids _G and _H are isomorphic. In particular, if G and H are both 3-connected, applying Whitney's 2-isomorphism theorem yields that _G and _H are unimodularly equivalent if and only if G and H are isomorphic. We remark that the characterization in <Ref> corrects an erroneous statement of Matsui, Higashitani, Nagazawa, Ohsugi and Hibi <cit.>: see <Ref> for the details.
It is tempting to ask what happens if we take the polytope defined as the convex hull of the columns of [M | -M] for a more general matrix M, and whether this object bears any relation to the matroid represented by M. The former question was investigated by Ohsugi and Hibi in <cit.>, while the latter appears to be new. In general, changing the representation of a given matroid and applying the above "symmetrization" will produce wildly different polytopes, so this question might seem too far-fetched at first sight. However, as we show in <Ref>, the construction described above does indeed yield a unique lattice polytope (up to some unimodular equivalence not involving any translation) when we consider any regular matroid and restrict to its (full-rank) weakly unimodular representations. We will call any polytope arising from such a setting a generalized symmetric edge polytope. Throughout the paper, if we need the concrete polytope associated with a specific representation M, we will denote it by _M; if instead it is enough for our purposes to just deal with the equivalence class, we will write _.
Most of the properties that make the usual symmetric edge polytopes pleasant are preserved in this wider environment: for instance, generalized symmetric edge polytopes are reflexive (as already observed in <cit.>) and terminal, and it is possible to describe their facets in a purely combinatorial fashion (<Ref>). We remark here that Kálmán and Tóthmérész have been working on a similar statement for extended root polytopes in the recent preprint <cit.>.
The polars of generalized symmetric edge polytopes are special instances of Lipschitz polytopes and enjoy a rich Ehrhart theory. More precisely, in the spirit of work by Beck and Zaslavsky <cit.>, we show that the lattice points in the k-th dilation of the polar ^Δ_ are in bijection with the (k+1)-cuts of or, equivalently, with the (k+1)-flows of the dual matroid ^* (<Ref>).
Finally, the existence of a regular unimodular triangulation for _M had already been proved by Ohsugi and Hibi <cit.>, while an explicit one had been provided in the case of graphs by Higashitani, Jochemko and Michałek <cit.>. We show that, via a careful analysis of signed circuits, it is possible to extend the latter result to generalized symmetric edge polytopes (<Ref>).
The paper is organized as follows. <Ref> contains some preliminaries about matroids, polytopes and toric ideals, while <Ref> is devoted to define generalized symmetric edge polytopes and prove that any two full-rank weakly unimodular representations of the same regular matroid will yield unimodularly equivalent polytopes (<Ref>).
<Ref> studies properties of generalized symmetric edge polytopes, including a partial converse to <Ref>. <Ref> focuses on the polytopes polar to generalized symmetric edge polytopes and their Ehrhart theory; the obtained results are then used to derive a facet description for generalized symmetric edge polytopes, extending the one for the graphical case from <cit.>.
<Ref> is devoted to the explicit description of a regular unimodular triangulation of any generalized symmetric edge polytope.
Finally, we collect some open questions and suggestions for future work in <Ref>.
§ PRELIMINARIES
§.§ Regular matroids, cuts and flows
The aim of this subsection is to briefly introduce regular matroids and their properties. We direct the reader to <cit.> for a more complete treatment and for general matroid terminology.
If is a matroid, we will denote by () and () the sets of its bases and circuits, respectively. If M is a matrix, we will sometimes write “basis/circuit of M” to refer to a basis/circuit of the matroid represented by M. In this case, the ground set of such a matroid will consist of the column indices of M.
Let M ∈^m × n be an integer matrix. We will say that M is:
* totally unimodular if the determinant of every square submatrix of M lies in {0, ± 1};
* weakly unimodular if the determinant of every square submatrix of M of size max{m,n} lies in {0, ± 1}.
A matroid of rank r > 0 is called regular if it satisfies any of the following equivalent properties:
* can be represented via a totally unimodular matrix;
* can be represented via a full-rank weakly unimodular matrix;
* is representable over any field.
The equivalence of (i) and (iii) is a well-known fact, see for instance <cit.>. We will now prove for clarity's sake that (i) and (ii) are equivalent: for a source in the literature, the reader can check <cit.>.
To see that (i) implies (ii) it is enough to show that, if is represented by a totally unimodular matrix, then it is also represented by a totally unimodular (and hence weakly unimodular) matrix of the form [I_r | D], where r is the rank of . For a proof of this claim, see <cit.>.
Let us now prove that (ii) implies (i). By assumption, there exists a full-rank matrix M ∈^r × n that is weakly unimodular and represents . We now proceed as in <cit.>: after choosing a basis for , we can shuffle the columns of M so that the elements of correspond to the first r columns. This amounts to multiplying M on the right by an (n × n)-permutation matrix P, an operation preserving the weakly unimodular property. Now consider the invertible submatrix N of MP obtained by taking the first r columns. Since MP is weakly unimodular and N is invertible, the determinant of the integer matrix N is either 1 or -1; in other words, N ∈GL_r(). By construction, one has that N^-1MP = [I_r | D] represents and is weakly unimodular; however, since it contains the identity matrix I_r, it must actually be totally unimodular (<cit.> or <cit.>), as desired.
We illustrate the content of the previous definition with an example that will also serve as a running example throughout.
Let be the rank 3 simple matroid with ground set [5], bases () = {123, 124, 134, 135, 145, 234, 235, 245}, and circuits () = {125, 345, 1234}, where we are using the shorthand i_1i_2… i_m for {i_1, i_2, …, i_m}. It is easy to check that is represented by the full-rank totally unimodular matrix
M = [ 1 0 0 -1 1; 0 1 0 -1 1; 0 0 1 -1 0 ],
and thus is regular. In fact, in this case is also graphic.
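As a quick sanity check, the following brute-force sketch verifies that M is totally unimodular and recovers the list of bases above (the printout uses 1-based column indices to match the example).

```python
import numpy as np
from itertools import combinations

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])

def is_totally_unimodular(A):
    """Check that every square minor has determinant in {0, +1, -1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if round(np.linalg.det(A[np.ix_(rows, cols)])) not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular(M))   # True

# bases of the represented matroid: 3-subsets of columns with nonzero determinant
bases = [cols for cols in combinations(range(5), 3)
         if round(np.linalg.det(M[:, cols])) != 0]
print([tuple(c + 1 for c in cols) for cols in bases])
# [(1,2,3), (1,2,4), (1,3,4), (1,3,5), (1,4,5), (2,3,4), (2,3,5), (2,4,5)]
```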
The assumption about the rank of being nonzero is not part of the usual definition of regular matroid in the literature: we include it to avoid nuisances with representability, see for instance <cit.>.
The class of regular matroids (including those of rank zero) is closed under duality and contains all graphic matroids.
We now introduce cuts and flows of a regular matroid, following Su and Wagner's treatment in <cit.>.
Let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular matrix. We define the lattice of integer cuts of M, denoted Γ(M), and the lattice of integer flows of M, denoted Λ(M), as
Γ(M) := row(M) ∩ℤ^n,
Λ(M) := ker(M) ∩ℤ^n.
The lattices of integer cuts and flows are orthogonal to each other with respect to the usual dot product. In particular, if A = [I_r | D] ∈^r × n (with 0 < r < n) is totally unimodular and A^* [-D^T | I_n-r], one has that
Λ(A) = Γ(A^*).
If ∈^n, the support of , denoted by (), is the set of indices i ∈ [n] such that v_i ≠ 0.
Given a full-rank weakly unimodular matrix M ∈^r × n with r ≤ n, we will call a flow ∈Λ(M) (respectively, a cut ∈Γ(M))
* nowhere-zero if λ_i ≠ 0 for every i ∈ [n], i.e., if () = [n];
* a k-flow (respectively, a k-cut) if |λ_i| < k for every i ∈ [n];
* a signed circuit or simple flow if it is a 2-flow and its support is a circuit of M. We denote the set of signed circuits of M by (M).
For a regular matroid , we are able to talk about the lattice of integer cuts of up to isometry: in fact, if M and M' are two full-rank weakly unimodular matrices representing , then the elements of Γ(M') correspond to elements of Γ(M) via multiplication by a signed permutation matrix (compare, e.g., <cit.>; their argument is stated for totally unimodular matrices, but goes through for full-rank weakly unimodular matrices as well). In particular, an element of Γ(M') will be a nowhere-zero cut, a k-cut or a signed circuit if and only if the corresponding element of Γ(M) is. Moreover, due to (<ref>), all the above statements go through for flows as well.
The matrix M from <Ref> has
* seventeen 2-cuts: (0,0,0,0,0), (1,0,0,-1,1), (-1,0,0,1,-1), (0,1,0,-1,1),
(0,-1,0,1,-1), (0,0,1,-1,0), (0,0,-1,1,0), (1,-1,0,0,0), (-1,1,0,0,0),
(1,0,-1,0,1), (-1,0,1,0,-1), (0,1,-1,0,1), (0,-1,1,0,-1), (1,-1,1,-1,0),
(-1,1,-1,1,0), (1,-1,-1,1,0), (-1,1,1,-1,0).
* seven 2-flows: (0,0,0,0,0), (1,1,0,0,-1), (-1,-1,0,0,1), (0,0,1,1,1),
(0,0,-1,-1,-1), (1,1,1,1,0), (-1,-1,-1,-1,0).
* six signed circuits: all the 2-flows except for the origin.
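These counts can be reproduced by brute force. The sketch below scans all {0,±1}-vectors of length 5; since M is full-rank and weakly unimodular, its real row space and kernel meet ℤ^5 exactly in the lattices of integer cuts and flows.

```python
import numpy as np
from itertools import product

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])

vectors = [np.array(v) for v in product((-1, 0, 1), repeat=5)]

# 2-cuts: {0,±1}-vectors lying in the row space of M
rank_M = np.linalg.matrix_rank(M)
cuts = [v for v in vectors
        if np.linalg.matrix_rank(np.vstack([M, v])) == rank_M]

# 2-flows: {0,±1}-vectors in the kernel of M
flows = [v for v in vectors if not np.any(M @ v)]

# signed circuits: nonzero 2-flows with support-minimality
# (comparing supports within the 2-flows suffices here, since every circuit supports a 2-flow)
def support(v):
    return frozenset(np.nonzero(v)[0])

circuits = [v for v in flows if np.any(v) and not any(
    np.any(w) and support(w) < support(v) for w in flows)]

print(len(cuts), len(flows), len(circuits))   # 17 7 6
```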
We record here for further reference some useful facts:
Let M be a full-rank weakly unimodular matrix.
* If ∈Λ(M) and () is a circuit of M, then every coordinate of has the same absolute value.
* If ∈(M), then there are exactly two signed circuits (differing by a global sign) with support .
Part (i) can be derived directly from <cit.> and constitutes a strengthening of <cit.>. The proof of part (ii) is almost verbatim the same as the one of <cit.>, using part (i) instead of <cit.>.
Let be a regular matroid with ground set E. If is a basis of and e ∈ E ∖, then the fundamental circuit of e with respect to is the unique circuit (e,) ∈() contained in ∪{e}. Note that e ∈(e,).
If, moreover, M is a full-rank weakly unimodular representation of , then by <Ref> there is a unique signed circuit (e,) ∈Λ(M) supported at (e,) whose e-th entry equals 1. We will call such a signed circuit the fundamental signed circuit of e with respect to and M.
Let and M be as in <Ref>. Then, for = {1,2,3}, one has that (4,) = (1,1,1,1,0) and (5,) = (-1,-1,0,0,1).
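A small sketch computing the fundamental signed circuits of the example (column indices are 0-based in the code): we solve for the expansion of the extra column in terms of the basis columns and negate the coefficients.

```python
import numpy as np

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])

def fundamental_signed_circuit(M, e, basis):
    """Signed circuit C(e, basis), normalised so that its e-th entry is 1.
    `basis` is a list of column indices forming a basis; `e` is a column not in it."""
    B = M[:, basis]
    coeffs = np.linalg.solve(B, M[:, e])   # M_e = B @ coeffs
    circ = np.zeros(M.shape[1])
    circ[e] = 1.0
    circ[basis] = -coeffs                  # so that M @ circ = 0
    return np.rint(circ).astype(int)

basis = [0, 1, 2]                          # the basis {1, 2, 3}
print(fundamental_signed_circuit(M, 3, basis))   # [ 1  1  1  1  0]
print(fundamental_signed_circuit(M, 4, basis))   # [-1 -1  0  0  1]
```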
Let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular matrix and assume that the first r columns of M are linearly independent. Then, for any a_1, …, a_r ∈, there exist unique a_r+1, …, a_n ∈ such that (a_1, …, a_n) ∈Γ(M). In other words, there exists a unique cut γ∈Γ(M) having a_1, …, a_r as its first r entries.
Call the regular matroid represented by M.
If r=n, then Γ(M) = ^n and the claim is true. Assume now that r < n.
To prove existence, define γ∈^n in the following way:
* γ_i = a_i for every i ∈ [r];
* for every j ∈{r+1, …, n}, we determine γ_j by imposing that γ·(j, [r]) = 0, where (j, [r]) is the fundamental signed circuit of j with respect to the basis [r] of and the representation M.
To prove that the integer vector γ∈^n we have just defined is indeed a cut, it is enough to show that γ∈row(M); but since row(M) and (M) are orthogonal with respect to the standard dot product, this amounts to proving that γ· = 0 for every ∈(M). Since the fundamental signed circuits of M with respect to [r] form an -basis of (M) (being n-r many linearly independent vectors by construction), the claim follows.
To prove uniqueness, assume there is another cut γ' such that γ'_i = a_i for every i ∈ [r], and consider βγ' - γ∈Γ(M). By assumption, β_i = 0 for every i ∈ [r]. Since row(M) = (M)^⊥, it follows that β·λ = 0 for every λ∈(M). In particular, for every j ∈{r+1, …, n}, one has that β·(j, [r]) = 0, and thus β_j = 0. This proves that γ' = γ.
Given a matroid , denote by ^∘ the matroid obtained from by deleting all its loops. Su and Wagner proved in <cit.> that knowing the lattice of integer cuts of is enough to determine ^∘ up to isomorphism. In particular, if we know beforehand that is loopless (for instance, if is simple), we can reconstruct completely from the data of its lattice of integer cuts. This idea will serve as a blueprint for the constructions in this paper.
§.§ Polarity
Let P ⊆^d be a full-dimensional lattice polytope with ∈P∩^d (here P denotes the interior of P with respect to the Euclidean topology).
We recall that the polar of P is the polytope
P^Δ{∈^d |·≤ 1 for every ∈ P},
where we are using the usual dot product to identify ^d and its dual (^d)^*.
The polar P^Δ will not be a lattice polytope in general. If P^Δ happens to be a lattice polytope, then P is called reflexive.
For the rest of this subsection we fix a full-dimensional reflexive polytope P ⊆^d with P∩^d = {}.
If ∈ P^Δ∩^d, we denote by F_ the face of P obtained as {_i |_i · = 1}, where the _i's are the vertices of P.
Indeed, the polytope P lies entirely inside one of the halfspaces defined by the hyperplane H_{∈^d |· = 1}.
By polarity, facets of the polytope P correspond to the vertices of the polar polytope P^Δ; in particular, any facet of P will be of the form F_ for some ∈ P^Δ∩^d, and such a will be a vertex of P^Δ.
§.§ Toric ideals
We introduce some basic notation about toric ideals. For the concepts not explained here and to get further insight, see for instance <cit.>.
If M ∈^r × n is an integer matrix with columns 𝐦_1, …, 𝐦_n, we will denote by I_M the toric ideal associated with M, i.e. the kernel of the map
π: K[x_1, …, x_n] → K[t_1^± 1, …, t_r^± 1]
x_i ↦𝐭^𝐦_i := t_1^{m_{1,i}} t_2^{m_{2,i}}⋯ t_r^{m_{r,i}},
where K is a field. Every ∈^n can be uniquely written as ^+ - ^-, where ^+ and ^- are in ^n and have disjoint supports. Any column vector ∈(M) ∩^n gives rise to a binomial ^^+ - ^^-∈(π), and the ideal I_M is generated by binomials of this form. In what follows, with a slight abuse of notation, we will use the expression “signed circuit” to denote both an element λ∈{0, ± 1}^n as in <Ref> and the associated binomial ^^+ - ^^- in (π).
When M is full-rank weakly unimodular, the toric ideal I_M is remarkably well-behaved: in fact, the set (M) of signed circuits is a universal Gröbner basis for I_M (and hence, in particular, the signed circuits of M generate I_M).
Actually, an even stronger result is true, as (M) turns out to be the Graver basis of I_M <cit.>. (In fact, since two signed circuits only differing by a global sign give rise to the same binomial up to sign, one usually picks a representative for every pair; in particular, the Graver basis of I_M will have cardinality |(M)| = 1/2|(M)|.)
Let M be as in <Ref>. The enumeration of signed circuits in <Ref> shows that the polynomials x_1x_2-x_5, x_3x_4x_5-1 and x_1x_2x_3x_4 - 1 are the Graver basis (and a universal Gröbner basis) of the toric ideal I_M.
Throughout the paper, when we say that a certain polynomial inside a polynomial ring is homogeneous, we are using the standard grading: i.e., each variable has degree 1. We record here for further reference a useful observation.
Let B ∈^m × n and let B' ∈^(m+1) × (n+1) be the matrix defined via
b'_ij :=
b_ij if i ≤ m and j ≤ n,
0 if i ≤ m and j = n+1,
1 if i = m+1.
Let I_B ⊆ K[x_1, …, x_n] and I_B'⊆ K[x_1, …, x_n, z] be the respective toric ideals (here x_i corresponds to the i-th column and z to the (n+1)-st, when available). Then I_B' = I_B^hom, where the homogenization is taken with respect to the variable z.
Let us first prove that I_B'⊆ I_B^hom. The toric ideal I_B' is generated by the set of its primitive binomials, i.e. its Graver basis. Let f be a primitive binomial of I_B'. Due to primitivity, the variable z can appear at most on one side of the binomial; without of loss of generality, we can hence write f = ^^+ - ^^-z^k, where = ^+ - ^- ∈^n, k ≥ 0 and (, k) ∈(B'). By construction, ∈(B) and f is homogeneous; more precisely, f is the homogenization of a binomial in I_B with respect to the variable z. It follows that I_B'⊆ I_B^hom.
Let us now prove that I_B^hom⊆ I_B'. By <cit.>, in order to find a generating set for I_B^hom it is enough to homogenize a set of polynomials forming a Gröbner basis of I_B with respect to a graded monomial order. Primitive polynomials provide such a set: in fact, the Graver basis of I_B contains the universal Gröbner basis of I_B <cit.>. Let g = ^^+ - ^^- be a primitive binomial in I_B. We can assume without loss of generality that k |^+| - |^-| ≥ 0. By construction, the homogenized polynomial ^^+ - ^^-z^k lies in I_B'. This shows that I_B^hom⊆ I_B'.
§ UNIQUENESS UP TO UNIMODULAR EQUIVALENCE
The main aim of this section is to describe how to extend the definition of a symmetric edge polytope from the context of graphs to that of regular matroids. Let us first fix some notation:
For any integer matrix M ∈^r × n with 0 < r ≤ n, we denote by _M the lattice polytope of ^r obtained as the convex hull of the columns of [ M | -M ].
For F ∈GL_r(), we denote by ψ_F: ℝ^r →ℝ^r the affine map sending 𝐱 to F𝐱.
It is our goal to show that any two full-rank weakly unimodular representations of a regular matroid produce the same lattice polytope (in the sense specified in <Ref>) up to unimodular equivalence, and the same holds for the polytopes obtained via polarity. More precisely, we show the following:
Let be a regular matroid of rank r > 0 on n elements and let M_1, M_2 ∈ℝ^r × n be two full-rank weakly unimodular representations of . Then there exists F∈GL_r() such that
_M_2 = ψ_F(_M_1) and _M_2^Δ = ψ_(F^T)^-1(_M_1^Δ).
We will show how to handle the case when the matrices representing are not full-rank in <Ref>. Moreover, a partial converse to <Ref> will be proved later, see <Ref>.
Pick two weakly unimodular full-rank (r × n)-matrices M_1 and M_2 both representing . For each i ∈{1,2}, write _i _M_i. Multiplying M_i on the right by a (signed) permutation matrix has no effect on the polytope _i: permuting the columns just permutes the list L of points we are taking the convex hull of, and changing the sign of a column is harmless because the list L consists of the columns of both M_i and -M_i. After some permutation of the columns of M_1 and M_2, we can hence assume without loss of generality the following two statements:
* the identity map [n] → [n] yields an isomorphism between the matroids represented by M_1 and M_2;
* the submatrices N_1 and N_2 obtained by selecting the first r columns of respectively M_1 and M_2 are both invertible.
Proceeding as in the proof of “(ii) implies (i)” in <Ref>, we can now multiply each M_i on the left by N_i^-1∈GL_r(), obtaining the totally unimodular matrix [I_r | D_i]. Since the identity map still yields an isomorphism between the matroids represented by [I_r | D_1] and [I_r | D_2], we can apply <cit.> to get that D_1 and D_2 are congruent modulo 2, and hence so are [I_r | D_1] and [I_r | D_2]. Since [I_r | D_1] and [I_r | D_2] are both totally unimodular, we are now in the position to use Camion's signing lemma <cit.>: i.e., we can obtain the matrix [I_r | D_2] by changing the signs of some rows and columns of [I_r | D_1]. In other words, there exist diagonal matrices R ∈GL_r() and C ∈GL_n() with only 1's and -1's on the diagonal and such that [I_r | D_2] = R · [I_r | D_1] · C.
Now let F N_2 · R · N_1^-1∈GL_r(). It follows from the discussion above that _2 = ψ_F(_1), as desired (note that C, being a signed permutation matrix, does not enter the picture).
The polar statement can now be derived like this:
_2^Δ = {∈^r |·≤ 1 for every ∈_2}
= {∈^r |·≤ 1 for every ∈ψ_F(_1)}
= {∈^r |· F≤ 1 for every ∈_1}
= {∈^r | F^T·≤ 1 for every ∈_1}
= {(F^T)^-1∈^r |·≤ 1 for every ∈_1}
= ψ_(F^T)^-1(_1^Δ).
Let be the uniform matroid U_2,3. The two full-rank totally unimodular matrices
M_1 [ 1 0 1; 0 1 1 ] and M_2 [ 1 0 1; 0 1 -1 ]
both represent . Changing the signs of both the second row and the second column of M_1 yields M_2; in formulas,
[ 1 0 1; 0 1 -1 ] = [ 1 0; 0 -1 ][ 1 0 1; 0 1 1 ][ 1 0 0; 0 -1 0; 0 0 1 ].
It is easy to verify that the two polytopes _1 and _2 are unimodularly equivalent, as guaranteed by <Ref>.
The usual symmetric edge polytope associated with a graph G is defined as _A_G, where A_G is any signed incidence matrix associated with G. The matrix A_G provides a totally unimodular representation of the graphic matroid _G, but is not full-rank. However, this is not really an issue, as we now explain.
Let be a regular matroid of rank r > 0 and let M ∈^m × n be a totally unimodular representation of with m > r. Possibly after permuting the columns of M, we can assume without loss of generality that the first r columns of M are linearly independent. Pivoting repeatedly we can then reach a matrix
M' [ I_r D; 0_m-r, r 0_m-r, n-r ],
which will again be totally unimodular by <cit.>.
The two polytopes _M and _M' are unimodularly equivalent, and projecting onto the first r coordinates shows that _M' is in turn unimodularly equivalent to _M”, where M”[ I_r | D ] is a full-rank totally unimodular representation of .
Let G = C_3 be the cycle graph on three vertices and pick
A_G [ 1 1 0; -1 0 -1; 0 -1 1 ].
Then _A_G is the symmetric edge polytope _G. Successive row operations on A_G yield that
[ 1 -1 0; 0 1 0; 0 1 1 ][ 1 0 0; 1 1 0; 0 0 1 ][ 1 1 0; -1 0 -1; 0 -1 1 ] = [ 1 0 1; 0 1 -1; 0 0 0 ],
and so _G is unimodularly equivalent to the full-dimensional polytope _M_2⊆^2, with M_2 as in <Ref>.
Selecting a directed spanning tree inside a connected finite simple graph G on r+1 vertices and n edges yields an explicit full-rank totally unimodular representation for ℳ_G in the following way (compare <cit.>):
* fix an orientation for each edge of G;
* pick a spanning tree 𝒯 for G and number its edges from 1 to r;
* assign the i-th standard basis vector _i ∈ℝ^r to the i-th edge in 𝒯 (taken with the orientation selected at the beginning);
* for any edge e⃗ in G taken with its orientation, consider the unique directed path 𝒫_e⃗ from the starting vertex to the ending vertex that only uses edges of 𝒯;
* assign to 𝒫_e⃗ the vector _e⃗ = (λ_1, …, λ_r), where λ_i equals 1 if the i-th edge of 𝒯 appears in 𝒫_e⃗ with its “correct” orientation, -1 if it is traversed backwards, 0 if it does not appear at all.
Putting together all the vectors _e⃗ as columns of a matrix yields a full-rank totally unimodular matrix [I_r | D] representing ℳ_G. By the results in this section, this also produces a full-dimensional polytope _[I_r | D]⊆^r unimodularly equivalent to the symmetric edge polytope of G (compare this to <Ref>). If the graph G is not connected, one can select a directed spanning tree for each connected component and argue analogously.
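The recipe above can be sketched in code as follows; the fundamental path vectors are obtained by solving against the reduced signed incidence matrix of the tree, which is an equivalent way of reading off the ±1 labels (vertex 0 is used as the root, an arbitrary choice).

```python
import numpy as np

def tree_representation(n_vertices, edges, tree_indices):
    """Full-rank totally unimodular representation of the graphic matroid of a
    connected graph. `edges` is a list of oriented edges (tail, head);
    `tree_indices` selects r = n_vertices - 1 of them forming a spanning tree.
    Listing the tree edges first yields a matrix of the form [I_r | D]."""
    tree = [edges[i] for i in tree_indices]

    def inc(edge_list):
        A = np.zeros((n_vertices, len(edge_list)))
        for j, (u, v) in enumerate(edge_list):
            A[u, j] = -1.0
            A[v, j] = 1.0
        return A[1:, :]                    # drop vertex 0 as the root

    T = inc(tree)                          # (r x r) reduced incidence of the tree, invertible
    G = inc(edges)                         # (r x n) reduced incidence of the whole graph
    D = np.linalg.solve(T, G)              # column of edge e = its fundamental path vector
    return np.rint(D).astype(int)

# triangle on vertices 0, 1, 2, with tree edges (0,1) and (1,2)
edges = [(0, 1), (1, 2), (2, 0)]
print(tree_representation(3, edges, tree_indices=[0, 1]))
# [[ 1  0 -1]
#  [ 0  1 -1]]
```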
§ FIRST PROPERTIES OF GENERALIZED SYMMETRIC EDGE POLYTOPES
Due to <Ref>, if we are given a regular matroid of rank r>0 on n elements, we know how to define a full-dimensional polytope _⊆^r which is defined up to some unimodular equivalence not involving any translation. We now wish to prove some results about _: in the proofs we will often need to fix a specific full-rank totally unimodular representation of .
We begin by noting that the polytope _ does not see potential loops or parallel elements inside the matroid , in analogy to the usual symmetric edge polytopes (see <cit.>).
Let be a regular matroid of rank r>0 and let M be a full-rank weakly unimodular matrix representing . Let M̃ be the submatrix of M obtained by keeping only the nonzero columns 𝐦_i such that 𝐦_i ≠±𝐦_j for every j<i. Then _M = _M̃, since the redundant columns in M do not affect the structure of _M and, as the origin always lies in the interior of _M, the same holds for the zero columns.
Hence, the polytope _ does not see loops or parallel elements of ; as a consequence, we can replace by its simplification[In <cit.> this is called the combinatorial geometry of .] .
In the setting of <Ref>, we will say M has irredundant columns if M = M̃, i.e., if the regular matroid represented by M is simple.
We now wish to collect some properties of generalized symmetric edge polytopes. We point out that parts (i) to (iii) of <Ref> below were essentially already known to Ohsugi and Hibi <cit.>.
Let be a regular matroid of rank r>0. The following properties hold:
* _ is centrally symmetric;
* (_) = rk();
* _ is reflexive;
* _ is terminal, i.e., the only points of _ with integer coordinates are its vertices and the origin;
* The vertices of _ are twice as many as the atoms of the lattice of flats of . In particular, if is simple, every antipodal pair of vertices of _ corresponds to an element in the ground set of .
Part (i) is immediate by definition, no matter which representation M we choose for .
Due to <Ref> we know that, if M and M' are two full-rank weakly unimodular matrices representing , we can go from _M to _M' and from _M^Δ to _M'^Δ via unimodular maps that do not involve any translation. In particular, it is enough to prove statements (ii)–(v) for _M, where M is a full-rank totally unimodular r × n matrix representing .
Part (ii) is now immediate and, together with part (i), implies that the origin lies in the interior of _M; hence, the polar polytope _M^Δ is well-defined, and an ℋ-presentation for it is given by ^T 𝐱≤1. Since M is totally unimodular, so is [ M | -M ]^T; the polar _M^Δ must then be a lattice polytope (see for instance <cit.>), and hence _M is reflexive. This proves part (iii).
As regards part (iv), pick a lattice point 𝐱 = (x_1, …, x_r) of _M different from the origin. Then we can write 𝐱 = ∑_i λ_i 𝐯_i, where λ_i > 0, ∑_i λ_i = 1, and the vertices 𝐯_i form a set of pairwise distinct nonzero columns of . If r=1, the claim is obvious. Assume hence that r > 1. Since 𝐱≠ and _M is a centrally symmetric subset of the hypercube [-1,1]^r, we can assume without loss of generality that x_1 = 1. Then the first coordinate of every 𝐯_i must also equal 1. If 𝐱 = 𝐯_1, there is nothing to prove. Assume otherwise. Then there is a coordinate (without loss of generality, the second one) in which 𝐱 and 𝐯_1 differ. This can happen only if x_2 = 0 and (𝐯_1)_2 ∈{1, -1}. But then there must exist some j > 1 such that (𝐯_j)_2 = -(𝐯_1)_2. As a consequence, the totally unimodular matrix contains the submatrix
[ 1 1; (𝐯_1)_2 -(𝐯_1)_2 ]
with determinant 2 or -2. This yields a contradiction.
Finally, it is enough to prove the statement of part (v) when is simple. When this is the case, then M has irredundant columns; denote by _1, …, _2n the columns of . Assume by contradiction that a column of (without loss of generality, the first one) can be expressed as a convex combination of the other ones; i.e., _1 = ∑_j ∈ Jλ_j _j for some J ⊆{2, 3, …, 2n}, λ_j > 0, ∑_j ∈ Jλ_j = 1. Since M has no zero columns and _M is a centrally symmetric subset of the hypercube [-1,1]^r, we can assume without loss of generality that (_1)_1 = 1, and this in turn implies that (_j)_1 = 1 for every j ∈ J. Arguing in a similar way to part (iv), one can then build a submatrix of with determinant 2 or -2, which in turn yields the desired contradiction.
Let and M be as in <Ref>. Then _M is the polytope shown in <Ref>. One has that _M = rk(M) = 3; since the matroid is simple, the lattice points of _M are the origin and the columns of .
<Ref> gives us the tools to establish a partial converse to <Ref>.
Let M, N ∈^r × n (where 0 < r ≤ n) be two full-rank weakly unimodular matrices with irredundant columns, and assume that the polytopes _M and _N are unimodularly equivalent. Then there exist F ∈GL_r() and a signed permutation matrix P ∈^n × n such that N = FMP. In particular, N and M represent the same simple regular matroid .
By assumption there exist F ∈GL_r() and 𝐭∈^r such that _N = ψ_F(_M) + 𝐭. Since the origin is the only interior lattice point of the reflexive polytopes _N and _M, it must be that 𝐭 = 0, so that no translation is actually involved. Moreover, one can easily check that _N = ψ_F(_M) = _FM.
Since the matrices FM and N are both full-rank and weakly unimodular, <Ref>(v) implies that the columns of both [FM | -FM] and [N | -N] correspond to the vertices of _FM = _N. As a consequence, the matrices FM and N can only differ by a signed permutation of their columns; in other words, there exists a signed permutation matrix P ∈^n × n such that N = FMP, as desired.
As a consequence, we obtain that the matroidal setting is the “right” one to study even the usual symmetric edge polytopes.
Let G and H be finite simple graphs. Then the symmetric edge polytopes _G and _H are unimodularly equivalent if and only if the graphic matroids _G and _H are isomorphic.
The “if” part follows from <Ref> and <Ref>. The “only if” part follows from <Ref> and <Ref>, noting that any signed incidence matrix of a simple graph has irredundant columns by construction.
Let G and H be finite simple 3-connected graphs. Then the symmetric edge polytopes _G and _H are unimodularly equivalent if and only if G and H are isomorphic.
This follows directly from <Ref> and Whitney's 2-isomorphism theorem <cit.>.
It was claimed in <cit.> that, if G and H are finite simple graphs and G is 2-connected, then _G and _H are unimodularly equivalent if and only if G and H are isomorphic. Unfortunately, this claim is erroneous and affects the validity of <cit.> as well: indeed, there exist non-isomorphic 2-connected graphs giving rise to the same graphic matroid, and thus having unimodularly equivalent symmetric edge polytopes by <Ref>. The key to build such objects is the Whitney twist operation, see <cit.>. We provide here an explicit example.
Let G and H be the 6-vertex graphs depicted in <Ref>. Both G and H are 2-connected; moreover, since the vertex a has degree 4 in G and all vertices have degree at most 3 in H, the graphs G and H are not isomorphic. After matching the i-th letter of the English alphabet with the i-th coordinate of ℝ^6, consider the 5-dimensional symmetric edge polytopes _G and _H in ℝ^6. Letting
F = [ 0 -1 -1 0 0 0; 0 1 0 0 0 0; 1 1 2 2 1 1; 0 0 0 0 1 0; -1 -1 -1 -1 -1 0; 1 1 1 0 0 0 ]∈GL_6(),
one checks that the unimodular map ψ_F: ℝ^6 →ℝ^6 sending to F transforms _G into _H, and thus _G and _H are unimodularly equivalent.
§ FACETS OF _ AND THE EHRHART THEORY OF THE POLAR POLYTOPE
After defining generalized symmetric edge polytopes and investigating their first structural properties, our next goal is to find a combinatorial characterization of their facets. In order to achieve this, it is fruitful to focus on the Ehrhart theory of the polar polytope. Unless specified differently, in this section we will only consider simple regular matroids of positive rank, so that by <Ref>(v) the vertices of _ correspond to the columns of [M | -M] for any full-rank weakly unimodular matrix M representing the matroid.
Inspired by work of Beck and Zaslavsky <cit.>, we begin by providing a description of the lattice points in the k-th dilation of ^Δ_.
Let k be a positive integer, be a simple regular matroid of rank r > 0 and M be a full-rank weakly unimodular matrix representing . Then the map
(k ·_M^Δ) ∩^r →{(k+1)-cuts of M}
𝐮↦ M^T𝐮
is a bijection.
Let us first describe in more detail the polar polytope _M^Δ. A facet description of _M^Δ is given by ^T ≤1, which in turn implies that
k·_M^Δ = {𝐮∈^r : -k·𝟏≤ M^T𝐮≤ k·𝟏},
where the inequalities are meant to be taken componentwise. This implies that, if 𝐮∈ (k ·_M^Δ) ∩^r, then M^T𝐮 is an element of row(M) ∩^n such that |(M^T𝐮)_i| ≤ k for every i ∈ [n]. This means precisely that M^T𝐮 is a (k+1)-cut of M.
Vice versa, let γ be a (k+1)-cut of M. Since γ∈row(M), there exists 𝐮∈^r such that M^T𝐮 = γ. Since M is full-rank, the linear map ^r →^n defined by 𝐮↦ M^T𝐮 is injective, and thus 𝐮 is uniquely determined. Since 𝐮 satisfies the inequalities in (<ref>), we have that 𝐮 is a lattice point of k ·_M^Δ, and this finishes the proof.
Let M be as in <Ref>. Then the polar polytope ^Δ_M is shown in <Ref>. The lattice points of ^Δ_M are obtained from the 2-cuts in <Ref> by throwing away the last two coordinates.
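For the running example, the bijection can be checked directly: since the first three columns of M form an identity matrix, every lattice point of the polar has coordinates in {0, ±1}, so a scan of {-1,0,1}^3 is exhaustive.

```python
import numpy as np
from itertools import product

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])

# lattice points of the polar polytope: u in Z^3 with |M^T u| <= 1 entrywise
points = [np.array(u) for u in product((-1, 0, 1), repeat=3)
          if np.all(np.abs(M.T @ np.array(u)) <= 1)]
cuts = {tuple(M.T @ u) for u in points}    # the corresponding 2-cuts

print(len(points), len(cuts))   # 17 17 : one 2-cut per lattice point
```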
A consequence of <Ref> is that the lattice of cuts Γ(M) can be thought of as the union of the lattice points of k ·_M^Δ as k varies in (with the convention that 0 ·_M^Δ = {}). This gives us an interpretation of the lattice of cuts as a “limit object”.
It follows from the argument in <Ref> that, if is a lattice point of _M^Δ, then
F_ = ({M_i | M_i · = 1}∪{-M_i| M_i · = -1})
is a face of _M with supporting hyperplane
H_ = {∈^r |· = 1}
(where we are using the fact that the columns of correspond to the vertices of _M). Since by <Ref> γ M^T is a 2-cut of M, we can rewrite F_ in the following way:
F_ = ({M_i |γ_i = 1}∪{-M_i|γ_i = -1}).
In other words, the 2-cut γ = M^T acts as an indicator vector for F_, in the following sense: the i-th entry of γ equals +1 (respectively, -1) if and only if the vertex M_i (respectively, -M_i) belongs to F_.
Next, we are going to define a partial order on {0, ±1}-tuples that will enable us to give a first characterization of the facets of _.
Let , ∈{0, ±1}^m. We will write that ≼ if for every i ∈ [m] it holds that u_i = 0 or u_i = v_i. Equivalently, ≼ is the partial order induced componentwise by the relations “0 ≺ +1”, “0 ≺ -1” and “+1 and -1 are incomparable”.
Note that {0, ± 1}^m equipped with the partial order from <Ref> is isomorphic to the face lattice of the m-dimensional cross-polytope: see for example <cit.>. More specifically, the isomorphism maps γ∈{0, ± 1}^m to the face obtained as ({_i |γ_i = 1}∪{-_i |γ_i = -1}). This foreshadows the upcoming characterizations of the facets of _.
A direct consequence of the definition of ≼ is that F_⊆ F_ if and only if M^T≼ M^T. This immediately yields a first characterization of the facets of _ when is simple.
Let be a simple regular matroid of positive rank and let M be a full-rank weakly unimodular representation of . Then the facets of _M are the faces F_ of _M for which M^T is a ≼-maximal 2-cut of M.
The facet description in <Ref> is not completely satisfactory. Our next goal is to develop an alternate characterization that will be the “right” generalization of the description obtained by Higashitani, Jochemko and Michałek <cit.> for classical symmetric edge polytopes: see <Ref> below for a more detailed discussion.
Let be a regular matroid of positive rank and let M be a full-rank weakly unimodular representation of . We will say that the cut γ∈Γ(M) is spanning if the support of γ contains a basis of .
Let be a simple regular matroid of rank r > 0 and let M be a full-rank weakly unimodular representation of . Then the facets of _M are the faces F_ of _M for which M^T is a spanning 2-cut of M.
Let us recall once more that, since is simple, the vertices of _M are in bijection with the columns of by <Ref>(v).
Let us show that, if γ = M^T is a spanning 2-cut of M, then F_ is a facet of _M. Since γ is spanning, by the discussion after <Ref> we know that the face F_ contains r linearly independent vertices. Since ∉ F_, such vertices are also affinely independent; but then, since (_M) = r by <Ref>(ii), it follows that F_ must be a facet.
Let us now prove that all facets of _M arise in this fashion. Let G be a facet of _M. Since (_M) = r, the facet G must contain r linearly independent vertices _1, …, _r; these will correspond to certain columns of M or -M. If the i-th column of M appears among the _j's, set γ_i = 1; if the i-th column of -M does, set γ_i = -1. Possibly after some relabeling, we can assume without loss of generality that γ_i ≠ 0 for every i ∈ [r]. By <Ref>, there exists a unique cut γ compatible with the above assignments; moreover, such a cut is spanning by construction. It only remains to show that γ is a 2-cut. By polarity, the facet G corresponds to a vertex ' of the polar polytope _M^Δ; it then follows from <Ref> that
G = F_' = ({M_i |γ'_i = 1}∪{-M_i|γ'_i = -1}),
where γ' = M^T' is a 2-cut of M. Since γ and γ' coincide on a basis of , it follows from the uniqueness of the cut in <Ref> that γ = γ' and hence γ is a a 2-cut, as desired.
Let M be as in <Ref>. Twelve of the seventeen 2-cuts enumerated in <Ref> are spanning: these are (1,0,0,-1,1), (-1,0,0,1,-1), (0,1,0,-1,1), (0,-1,0,1,-1), (1,0,-1,0,1), (-1,0,1,0,-1), (0,1,-1,0,1), (0,-1,1,0,-1), (1,-1,1,-1,0), (-1,1,-1,1,0), (1,-1,-1,1,0), (-1,1,1,-1,0).
Hence, _M has twelve facets, and each of the spanning 2-cuts serves as an indicator vector for one of them: for instance, the 2-cut (1,0,0,-1,1) corresponds to the facet obtained as the convex hull of _1, _1+_2+_3 and _1+_2 (respectively, the first, minus the fourth, and the fifth column of M).
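The count of twelve facets can be verified by filtering the 2-cuts of the running example for those whose support contains a basis; a brute-force sketch:

```python
import numpy as np
from itertools import product, combinations

M = np.array([[1, 0, 0, -1, 1],
              [0, 1, 0, -1, 1],
              [0, 0, 1, -1, 0]])

bases = [set(c) for c in combinations(range(5), 3)
         if round(np.linalg.det(M[:, list(c)])) != 0]

# 2-cuts, written as M^T u for u ranging over the lattice points of the polar
cuts = [M.T @ np.array(u) for u in product((-1, 0, 1), repeat=3)
        if np.all(np.abs(M.T @ np.array(u)) <= 1)]

# spanning 2-cuts: those whose support contains a basis
spanning = [g for g in cuts
            if any(b <= set(np.nonzero(g)[0]) for b in bases)]
print(len(spanning))   # 12 facets of P_M
```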
Some words are needed in order to explain in which sense <Ref> generalizes the characterization of facets obtained by Higashitani, Jochemko and Michałek for classical symmetric edge polytopes <cit.>. If G is a connected graph, facets of the symmetric edge polytope _G were shown to be in bijection with integer vertex labelings such that
(i) if i and j are adjacent in G, then their labels differ at most by one;
(ii) the subgraph of G consisting of the edges {i, j} whose vertex labels differ exactly by one contains a spanning tree of G.
(For the statement to be precise, one further needs to identify any two vertex labelings that differ by a fixed constant value on each vertex.) The first author, Delucchi and Michałek observed in <cit.> that, after fixing an orientation of G, such a characterization is equivalent to asking for integer edge labelings such that
(a) each label is either 1, 0 or -1;
(b) the sum of the labels on each oriented cycle of G is zero;
(c) the set of edges with nonzero labels contains a spanning tree of G.
This last characterization corresponds to <Ref> in the special case when is the graphic matroid associated with G and M is the signed incidence matrix associated with the chosen orientation of G (although, to be fully precise, such a matrix is not full-rank). Indeed, labeling the (oriented) edges of G can be thought of as labeling the columns of the matrix M. More in detail, condition (b) can be expressed more succinctly by saying that the desired edge labelings are cuts of M, while conditions (a) and (c) further specify that they must be spanning 2-cuts.
Comparing the facet characterization from <Ref> with the one found in <Ref> immediately yields the following corollary:
Let be a simple regular matroid of positive rank and M a full-rank weakly unimodular representation of . Then a 2-cut of M is spanning if and only if it is ≼-maximal.
The next result generalizes <cit.>. We recall that a matroid is said to be bipartite if all of its circuits have even cardinality.
Let be a simple regular bipartite matroid of rank r > 0 and let M be a full-rank weakly unimodular representation of .
Then the facets of _M are in bijection with the nowhere-zero 2-cuts of M.
By <Ref>, it is enough to prove that the spanning 2-cuts of M are exactly the nowhere-zero 2-cuts of M. Clearly, every nowhere-zero 2-cut must be spanning.
For the reverse containment, let n be the number of elements in the ground set of . If r=n, then is the uniform matroid 𝒰_n,n, the polytope _M is unimodularly equivalent to the n-dimensional cross-polytope, and its 2^n facets correspond to the nowhere-zero elements of row(M) ∩{0, ± 1}^n = {0, ± 1}^n (see also <Ref>).
Assume now that r < n. Let γ be a spanning 2-cut of M and assume without loss of generality that γ_i ≠ 0 for every i ∈ [r]. Now pick any j ∈{r+1, …, n} and consider the fundamental signed circuit (j, [r]), whose support has even cardinality because of the bipartite assumption. Since γ·(j, [r]) = 0, one has that 0 is the sum of γ_j and an odd number of elements in {+1, -1}. For parity reasons, it follows that γ_j ≠ 0, which proves the claim.
§ A REGULAR UNIMODULAR TRIANGULATION FOR _
It follows from a result of Ohsugi and Hibi <cit.> that the polytope _ always admits a regular unimodular triangulation. The aim of this section is to find an explicit description generalizing what Higashitani, Jochemko and Michałek found in the context of symmetric edge polytopes <cit.>. Since the desired characterization involves signed circuits (see <Ref>), our results will be expressed in terms of a fixed full-rank weakly unimodular representation of the given (simple) regular matroid .
If M is a full-rank weakly unimodular matrix, then is as well. It will be useful to describe the signed circuits of the latter in terms of the former. To achieve this goal, we need to introduce some more notation.
Let J ⊆ [n]. We will denote by η_J the injective map ℤ^n →ℤ^2n sending (λ_1, …, λ_n) to (μ_1, …, μ_2n), where for every i ∈ [n]
μ_i := 0 if i ∈ J and μ_i := λ_i if i ∉ J, while μ_n+i := -λ_i if i ∈ J and μ_n+i := 0 if i ∉ J.
Basically, given an integer vector, the map η_J changes the sign of the entries indexed by an element of J, and then moves them to the second half of an integer vector twice as long. We will sometimes refer to this operation as a promotion. Note that η_J restricts to a map {0, ±1}^n →{0, ±1}^2n.
Note that μ∈ℤ^2n is in the image of η_J precisely when μ_i = 0 for every i ∈ J and μ_n+i = 0 for every i ∉ J. In particular, if the support of 𝐰∈ℤ^2n is contained in the support of some μ∈im(η_J), then 𝐰∈im(η_J).
Let J ⊆ [n] and let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular integer matrix. Then ∈(M) if and only if η_J() ∈([M | -M]).
Let λ∈ℤ^n and write μ := η_J(λ). By construction, one has that
Mλ = ∑_i ∈ [n]λ_i M_i = ∑_i ∈ [n](μ_i - μ_n+i) M_i = ∑_i ∈ [n]μ_i M_i + ∑_i ∈ [n]μ_n+i(-M_i) = [M | -M] η_J(λ).
In particular, λ∈ker(M) if and only if η_J(λ) ∈ker([M | -M]), and λ is a 2-flow if and only if η_J(λ) is. To prove the claim, we still need to show that supp(λ) is a circuit of M if and only if supp(η_J(λ)) is a circuit of [M | -M].
The definition of η_J implies immediately that, if supp(λ) is not minimally dependent, then supp(η_J(λ)) is not minimally dependent either. Conversely, assume supp(η_J(λ)) is not minimally dependent. Then there exists a nonzero 𝐰∈ℤ^2n such that [M | -M]𝐰 = 𝟎 and supp(𝐰) ⊆supp(η_J(λ)). By <Ref>, 𝐰 belongs to the image of η_J, and hence supp(λ) is not minimally dependent, since supp(η_J^-1(𝐰)) ⊆supp(λ). This proves the claim.
Provided that M does not contain any zero column, the signed circuits of come in two flavors: on the one hand, we have the ones of the form ±(_i + _n+i) (reflecting the relation M_i + (-M)_i =), while on the other hand we have those obtained by promoting a signed circuit of M. This is the content of the technical lemma below.
Assume M ∈^r × n (where 0 < r ≤ n) is a full-rank weakly unimodular matrix not containing any zero column. Then
([M | -M]) = {±(_i + _n+i) : i ∈ [n]}∪⋃_J ⊆ [n]η_J((M)).
Let us first prove that the right hand side of (<ref>) consists of signed circuits of . This is clear for ±(_i + _n+i), since
[M | -M](𝐞_i + 𝐞_n+i) = M_i + (-M)_i = 𝟎
and M does not contain any zero column by hypothesis. Moreover, by <Ref>, η_J() is a signed circuit of for every choice of J ⊆ [n] and ∈(M).
Let us now prove that every signed circuit of [M | -M] arises as in the right-hand side of (<ref>). Let μ = (μ_1, …, μ_2n) ∈{0, ± 1}^2n be a signed circuit of [M | -M]. If there exists i ∈ [n] such that μ_i μ_n+i≠ 0 then, by support minimality, μ must be equal to 𝐞_i + 𝐞_n+i up to sign. Assume then that μ_i μ_n+i = 0 for every i ∈ [n], and let J := {i ∈ [n] : μ_n+i≠ 0}. By <Ref>, one has that μ = η_J(λ) for some λ∈{0, ± 1}^n; moreover, by <Ref>, such λ is a signed circuit of M. This finishes the proof.
We remark that the unimodularity assumption is not really crucial for Lemmas <ref> and <ref>: one could prove similar statements by substituting “signed circuits” with “circuits” (in the toric ideal meaning). However, since in this paper we are reserving the word “circuit” for its matroidal meaning, we did not want to confuse the reader unnecessarily.
Before moving on, we need to introduce some notation about toric ideals naturally arising in this context.
Let be a simple regular matroid of rank r > 0 on n elements and let M be a full-rank weakly unimodular representation of . Then, by <Ref>(iv)–(v), the lattice points of _M are the columns of and the origin . We will denote by I__M the toric ideal associated with the polytope _M, i.e., the one obtained as the kernel of the map
K[x_1, …, x_n, x_-1, …, x_-n, z] → K[t_1^± 1, …, t_r^± 1, s]
x_i ↦𝐭^M_is
x_-i ↦𝐭^-M_is
z ↦ s
and by I_[M | -M] the toric ideal obtained as the kernel of the map
K[x_1, …, x_n, x_-1, …, x_-n] → K[t_1^± 1, …, t_r^± 1]
x_i ↦𝐭^M_i
x_-i ↦𝐭^-M_i,
where K is a field.
We immediately obtain the following corollary of <Ref>:
Let be a simple regular matroid of rank r > 0 and let M ∈^r × n be a full-rank weakly unimodular matrix representing . Then the ideal I__M is the homogenization of I_[M | -M] with respect to the variable z. In particular, the (irreducible) projective variety V(I__M) is the projective closure of the (irreducible) affine variety V(I_[M | -M]).
<Ref> and <Ref> imply the following description for the universal Gröbner basis of the toric ideal I_[M | -M] when M is a full-rank weakly unimodular matrix not containing any zero column:
Let M ∈^r × n (where 0 < r ≤ n) be a full-rank weakly unimodular matrix without any zero column. Then the set of signed circuits, the universal Gröbner basis and the Graver basis of I_[M | -M] all coincide and consist of the following binomials:
* x_ix_-i - 1 for every i ∈ [n];
* ^η_J()^+ - ^η_J()^- for every J ⊆ [n], ∈(M) such that |η_J()^+| ≥ |η_J()^-|.
(With a slight abuse of notation, we identify those binomials that differ only up to a global sign.)
Let M be as in <Ref> and <Ref>. The Graver basis of I_[M | -M] contains 37 binomials: 5 of the form x_ix_-i-1, 16 arising from the promotions of x_1x_2x_3x_4-1, 8 from the promotions of x_1x_2-x_5 and another 8 from the promotions of x_3x_4x_5-1. For instance, the promotions of the signed circuit (1,1,0,0,-1) ∈(M) (corresponding to the binomial x_1x_2 - x_5) give rise to the following eight signed circuits of [M | -M]: x_1x_2-x_5, x_2-x_-1x_5, x_1 - x_-2x_5, x_1x_2x_-5-1, 1 - x_-1x_-2x_5, x_2x_-5-x_-1, x_1x_-5 - x_-2, x_-5 - x_-1x_-2. Technically, the statement of <Ref> only asks for the promotions such that |η_J()^+| ≥ |η_J()^-|; however, when this is not the case, we just take the negation of the circuit instead, and the global count is not affected.
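The count of promotions can also be reproduced mechanically. The Python sketch below (an added illustration, not from the paper) enumerates η_J over all J ⊆ [n] for the signed circuit (1,1,0,0,-1), assuming the convention, consistent with the eight binomials listed above, that η_J moves each coordinate indexed by J to the corresponding negative copy with its sign flipped; it finds exactly eight distinct promotions.

```python
from itertools import chain, combinations

def eta(J, c):
    """Promotion eta_J(c) in {0, +1, -1}^(2n): coordinates indexed by J are
    moved to the 'negative copy' with flipped sign (assumed convention)."""
    n = len(c)
    w = [0] * (2 * n)
    for i in range(n):
        if i in J:
            w[n + i] = -c[i]
        else:
            w[i] = c[i]
    return tuple(w)

def as_binomial(w):
    """Render a promotion as the binomial x^{w^+} - x^{w^-}."""
    n = len(w) // 2
    name = lambda i: f"x_{i + 1}" if i < n else f"x_-{i - n + 1}"
    plus = "*".join(name(i) for i in range(2 * n) if w[i] == 1) or "1"
    minus = "*".join(name(i) for i in range(2 * n) if w[i] == -1) or "1"
    return f"{plus} - {minus}"

c = (1, 1, 0, 0, -1)        # the signed circuit giving the binomial x_1*x_2 - x_5
n = len(c)
all_J = chain.from_iterable(combinations(range(n), k) for k in range(n + 1))
promotions = {eta(set(J), c) for J in all_J}

print(len(promotions))      # 8, matching the count in the example above
for w in sorted(promotions):
    print(as_binomial(w))
```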
The next proposition, reminiscent of the results in <cit.>, proves the existence of a regular unimodular triangulation for I__ and serves as a first step towards an explicit description. For the correspondence between regular unimodular triangulations and squarefree initial ideals, we refer the reader to <cit.> and <cit.>.
Let be a simple regular matroid of rank r > 0 and let M ∈^r × n be a full-rank weakly unimodular representation of . Denote by S the polynomial ring K[x_1, …, x_n, x_-1, …, x_-n]. Let < be a graded monomial order of S and let <_h be any monomial order of S[z] with the property that _<_hf^h = _<f for every f ∈ S. Then the toric ideal I__M has a squarefree initial ideal with respect to <_h.
A concrete choice for <_h as in <Ref> (and later <Ref>) is any degrevlex order of S[z] such that z <_h v for every variable v in S.
By <Ref>, the toric ideal I__M is the homogenization of I_[M | -M] with respect to the variable z. In order to find a Gröbner basis for I__M, by <cit.> it is then enough to homogenize a set of polynomials forming a Gröbner basis for I_[M | -M] with respect to a graded monomial order. The universal Gröbner basis of I_[M | -M] is described in <Ref>; since by definition signed circuits have coefficients in {0, ± 1}, the claim follows.
We are finally able to generalize the Gröbner basis description obtained by Higashitani, Jochemko and Michałek for the usual symmetric edge polytopes.
Let be a simple regular matroid and let M ∈^r × n be a full-rank weakly unimodular representation of . Let S[z], < and <_h be as in <Ref>. Then the polynomials
(i) x_ix_-i - z^2 for every i ∈ [n];
(ii) ^η_J()^+ - ^η_J()^-z for every J ⊆ [n], ∈(M) such that |η_J()^+| = |η_J()^-|+1;
(iii) ^η_J()^+ - ^η_J()^- for every J ⊆ [n], ∈(M) such that |η_J()^+| = |η_J()^-|
form a Gröbner basis for I__M with respect to <_h.
The binomials of types (ii) and (iii) in <Ref> come from considering those signed circuits of [M | -M] where 1's and -1's are “as balanced as possible”; this is exactly what happens for classical symmetric edge polytopes, recalling that both orientations for every edge of the original undirected graph are available in that setting.
By the proof of <Ref>, we know that homogenizing the polynomials of <Ref> with respect to the variable z yields a Gröbner basis for I__M with respect to <_h.
For every i ∈ [n], the homogenization of x_ix_-i - 1 gives us one of the polynomials of type (i). Now let J ⊆ [n], ∈(M) and set k := |η_J()^+| - |η_J()^-|. After possibly swapping the circuit with its negation, we can assume without loss of generality that k ≥ 0.
If k = 0 or k = 1, the homogenization of ^η_J()^+ - ^η_J()^- yields one of the binomials of type (iii) or (ii) in the list. It is then enough to show that the homogenization of ^η_J()^+ - ^η_J()^- is redundant when k ≥ 2. Consider such a polynomial. There exists j ∈ [n] such that either η_J()_j = 1 or η_J()_n+j = 1. If η_J()_j = 1, one has that
x_j ·^η_J ∪{j}()^+ = ^η_J()^+ and x_-j·^η_J()^- = ^η_J ∪{j}()^-
and we can write
^η_J()^+ - ^η_J()^-z^k = ^η_J()^+ - ^η_J()^-z^k + x_jx_-j^η_J()^-z^k-2 - x_jx_-j^η_J()^-z^k-2
= x_j·^η_J∪{j}()^+ - ^η_J()^-z^k + x_jx_-j^η_J()^-z^k-2 - x_j ·^η_J ∪{j}()^-z^k-2
= x_j· (^η_J∪{j}()^+ - ^η_J ∪{j}()^-z^k-2) + ^η_J()^-z^k-2(x_jx_-j-z^2).
If instead η_J()_n+j = 1, one has that
x_-j·^η_J ∖{j}()^+ = ^η_J()^+ and x_j·^η_J()^- = ^η_J ∖{j}()^-
and an analogous computation leads to
^η_J()^+ - ^η_J()^-z^k = x_-j· (^η_J∖{j}()^+ - ^η_J ∖{j}()^-z^k-2) + ^η_J()^-z^k-2(x_jx_-j-z^2).
Iterating this procedure as many times as possible yields the claim.
Let M be as in <Ref> and <Ref>, and pick as a term order the degree reverse lexicographic order with x_1 > x_-1 > x_2 > x_-2 > x_3 > x_-3 > x_4 > x_-4 > x_5 > x_-5 > z. Then, by <Ref>, there is a Gröbner basis for I__M consisting of the following binomials (where we underline the leading term):
* x_1x_-1-z^2, x_2x_-2-z^2, x_3x_-3-z^2, x_4x_-4-z^2, x_5x_-5-z^2
* x_1x_2-x_5z, x_-1x_-2-x_-5z, x_-1x_5-x_2z, x_1x_-5-x_-2z, x_-2x_5-x_1z, x_2x_-5-x_-1z
* x_3x_4-x_-5z, x_-3x_-4-x_5z, x_3x_5-x_-4z, x_-3x_-5-x_4z, x_4x_5-x_-3z, x_-4x_-5-x_3z
* x_1x_2-x_-3x_-4, x_-1x_-2-x_3x_4, x_1x_3-x_-2x_-4, x_-1x_-3-x_2x_4, -x_-2x_-3+x_1x_4, -x_2x_3+x_-1x_-4.
Note that this Gröbner basis is not reduced, as the monomials x_1x_2 and x_-1x_-2 are both featured twice as the leading term of a binomial. The associated triangulation has sixteen facets and is shown in <Ref>.
Finally, the γ-polynomial of a symmetric edge polytope has been the object of much recent work after Ohsugi and Tsuchiya conjectured the nonnegativity of its coefficients in <cit.>. We wish to conclude the present article by extending to the matroidal setting a characterization of γ_1 which appeared independently in <cit.> and <cit.>.
Let be a simple regular matroid of positive rank. Then γ_1(_) = 2 ·rk(^*). In particular, γ_1(_) is nonnegative.
In what follows, let E be the ground set of the matroid . By <Ref>, the polytope _ admits a (regular) unimodular triangulation Δ_<, and hence the h^*-polynomial of _ and the h-polynomial of Δ_< coincide. Then
γ_1(_) = h^∗_1(_)- rk() by <Ref>(ii)
= h_1(Δ_<)-rk() by <Ref>
= (f_0(Δ_<)-rk()) - rk() by definition of h_1
=2 · (|E|-rk()) since Δ_< has 2 · |E| vertices
=2 ·rk(^*).
§ FUTURE DIRECTIONS
We conclude the present paper with some questions.
Are generalized symmetric edge polytopes γ-positive? A positive answer would settle the conjecture by Ohsugi and Tsuchiya on symmetric edge polytopes <cit.>.
More modestly, one could try to prove or disprove that γ_2 is always nonnegative, analogously to the classical symmetric edge polytope case treated in <cit.>.
How do properties of the generalized symmetric edge polytope (e.g., its h^*-vector) change under operations on the associated matroid? Is there any way to use Seymour's characterization of regular matroids via 1-, 2- and 3-sums <cit.>?
Can one determine a formula for the h^*-vector of generalized symmetric edge polytopes analogous to the one found by Kálmán and Tóthmérész in <cit.>?
Are there “nice” classes of regular matroids for which the h^∗-polynomial of the associated generalized symmetric edge polytope is real-rooted?
To which extent can the formulas from <cit.> be generalized to the matroidal setting?
§.§ Acknowledgements
We wish to thank Emanuele Delucchi, Akihiro Higashitani, Hidefumi Ohsugi and Lorenzo Venturello for useful comments and discussions at various stages of this project. We are grateful to Matthias Walter for helping us with using the Combinatorial Matrix Recognition Library (currently available at <http://discopt.github.io/cmr/>); in particular, in our computations we made use of the total unimodularity test described in <cit.>. We also acknowledge the use of the Macaulay2 <cit.> package <cit.> by Justin Chen. Finally, we thank Marco Caselli and Lorenzo Venturello for their help with coding in SageMath <cit.>.
|
http://arxiv.org/abs/2307.04441v1 | 20230710094718 | Randomized Communication and Implicit Representations for Matrices and Graphs of Small Sign-Rank | [
"Nathaniel Harms",
"Viktor Zamaraev"
] | cs.CC | [
"cs.CC",
"cs.DM",
"cs.DS"
] |
Randomized Communication and Implicit Representations for Matrices and Graphs of Small Sign-Rank
Nathaniel Harms
Viktor Zamaraev
==================================================================================================
We prove a characterization of the structural conditions on matrices of sign-rank 3 and unit disk
graphs (UDGs) which permit constant-cost public-coin randomized communication protocols.
Therefore, under these conditions, these graphs also admit implicit representations.
The sign-rank of a matrix M ∈^N × N is the smallest rank of a matrix R
such that M_i,j = (R_i,j) for all i,j ∈ [N]; equivalently, it is the smallest
dimension d in which M can be represented as a point-halfspace incidence matrix with halfspaces
through the origin, and it is essentially equivalent to the unbounded-error communication
complexity. Matrices of sign-rank 3 can achieve the maximum possible bounded-error
randomized communication complexity Θ(log N), and meanwhile the existence of implicit
representations for graphs of bounded sign-rank (including UDGs, which have sign-rank 4) has been
open since at least 2003. We prove that matrices of sign-rank 3, and UDGs, have constant
randomized communication complexity if and only if they do not encode arbitrarily large instances of
the Greater-Than communication problem, or, equivalently, if they do not contain large
half-graphs as semi-induced subgraphs. This also establishes the existence of implicit
representations for these graphs under the same conditions.
§ INTRODUCTION
Consider a sign matrix M ∈^N × N. In communication complexity, learning theory,
and graph theory, it is often useful to represent M as a point-halfspace incidence matrix of the
following form. To each row x ∈ [N], assign a point p_x ∈^d ∖{0}, and to each
column y ∈ [N] assign a unit vector h_y ∈^d, such that M(x,y) = sign(⟨ p_x, h_y ⟩). In
other words, M(x,y) = 1 if and only if the point p_x belongs to the halfspace H_y := { p
∈^d | ⟨ p,h_y ⟩≥ 0} whose boundary hyperplane goes through the origin. It is
always possible to find such a representation, but, naturally, we wish to accomplish it in the
simplest way. Here are two common ways to measure the complexity of this representation:
Sign-rank.
We might want to minimize the dimension d of the representation. The minimum possible d
where M admits such a representation is called the sign-rank of M and denoted
_±(M). It is equivalent to
the smallest rank d of a matrix R such that M(x,y) = sign(R(x,y)) for all x,y ∈ [N].
Thinking of the rows of M as a fixed domain , and the columns as a hypothesis class (subsets of ), a standard technique in learning theory is to transform the domain into points in
^d, and the hypothesis class into halfspaces; _±(M) is the smallest dimension such that
this transformation is possible. Since halfspaces through the origin in ^d have VC dimension
d, sign-rank is lower bounded by the VC dimension of the hypothesis class. In
communication complexity, sign-rank is essentially equivalent to the unbounded-error
communication complexity of M <cit.>, where the two players have access to private
randomness and wish to succeed with probability strictly better than 1/2. A set of
matrices has bounded sign-rank if there exists a constant d such that all matrices M ∈ have sign-rank at most d. This is equivalent to having constant unbounded-error
communication cost. In graph theory, finding implicit representations (defined below) for
graphs whose adjacency matrices have bounded sign-rank is an open problem since at least 2003
<cit.>.
Margin.
We might want to maximize the margin of the representation. For a fixed representation
{p_x}_x ∈ [N] and {h_y}_y ∈ [N], we define the margin as min_x,y |⟨ p_x,h_y ⟩| / (‖p_x‖·‖h_y‖). Write (M) for the maximum m such that
there is a representation with margin m; the dimension of this representation is
irrelevant. The complexity of various learning algorithms like SVM or perceptron can be bounded in
terms of the margin. It is also known that (M) is functionally equivalent to the
two-way, public-coin randomized communication complexity (<ref>). A
set of matrices has bounded margin if there is some constant m such that all M ∈ have (M) ≥ m, and having bounded margin is equivalent to having constant
public-coin randomized communication cost. Therefore, graphs whose adjacency matrices have bounded
margin admit implicit representations, due to the observation of <cit.>.
One of the main goals in communication complexity is to understand the power of randomness, and both
of the above measures of complexity capture a type of randomized communication. A rapidly-growing
body of work on constant-cost communication
<cit.> studies
the properties of matrices with bounded margin or bounded sign-rank, but the relationship
between these two measures is not well understood. In one direction, it is believed that there
exist sets of matrices with bounded margin but unbounded sign-rank, but all known lower bounds
fail to prove this <cit.> (although it was proven for partial matrices
<cit.>). In this paper, we are interested in the other direction:
For matrices of bounded sign-rank, under what conditions does also have bounded
margin?[Note that a matrix having bounded sign-rank and bounded margin does not mean that sign-rank
and margin are bounded simultaneously by the same point-halfspace representation.]
It is known that some conditions are required. Write (M) for the two-way, public-coin
randomized communication cost of a matrix M ∈^N × N (which we will refer to simply
as communication cost) and () for the communication cost of matrices M ∈
as a function of their size N (see <ref> for formal
definitions). The Greater-Than communication problem, defined by the matrices ∈^N × N where _i,j = 1 if and only if i > j, has sign-rank 2 but communication
cost[Standard notation in the literature uses n as the number of bits in the input; we
use N for the domain size, so Θ(loglog N) corresponds to the more commonly-stated bound
Θ(log n).]
() = Θ(loglog N) and therefore unbounded margin. When sign-rank increases to 3,
matrices can achieve the maximum possible communication cost () = Θ(log N)
<cit.>, far exceeding the complexity of Greater-Than. However, one of our
main results is that, for sign-rank 3, Greater-Than is the only barrier to constant-cost
communication:
A set of matrices with sign-rank 3 has () = O(1) (and therefore constant margin) if and
only if it does not contain arbitrarily large instances of Greater-Than.
We prove a similar theorem for the adjacency matrices of unit-disk graphs (UDGs), which have
sign-rank 4, and these results establish the existence of implicit representations when the
condition on the Greater-Than instances is satisfied. We also exhibit a
fundamental gap between sign-rank 4 and 5 which shows that the “type” of randomness used in
our communication protocols cannot succeed in sign-rank 5 and above.
<ref> is a consequence of more general results whose motivation and
applications we elaborate upon below.
§.§ Constant-Cost Communication and Implicit Graph Representations
The study of constant-cost randomized communication was initiated independently in
<cit.>. One motivation of <cit.> was that constant-cost communication
is a special case of a well-studied open problem in structural graph theory and distributed
computing, which asks to characterize the hereditary graph classes that admit implicit
representations (see <cit.>).
*Implicit representations.
A class of graphs is a set of (labeled) graphs that is closed under isomorphism. It is
hereditary if it is closed under taking induced subgraphs. A hereditary class admits an
implicit representation if there exists a decoder
D : ^* ×^* → such that, for every N-vertex graph G ∈, each vertex v
of G can be assigned an encoding (v) of O(log N) bits, where D((u),
(v)) outputs the adjacency of vertices u,v; the decoder D depends on the class but
not the specific graph G. Implicit representations were introduced in <cit.>, who
observed that they are equivalent to a graph U of size (N), called a universal
graph, that contains every N-vertex graph G ∈ as an induced subgraph. Since a graph of
size (N) has at most 2^O(N log N) N-vertex induced subgraphs, a necessary condition
for the existence of implicit representations is that contains at most 2^O(N log N)
N-vertex graphs, in which case is said to have factorial speed.
The communication problem defined by any matrix M ∈^N × N is equivalent
to the problem of deciding adjacency in the (bipartite) graph whose adjacency matrix is M, where
each player is given a vertex. Building on <cit.>, <cit.> observed that constant-cost
communication problems are equivalent to hereditary graph classes that admit an
adjacency sketch, which is a randomized version of an implicit representation, where the
encodings (v) are assigned by a randomized algorithm and have constant size
(independent of the number of vertices), in such a way that
∀ u,v : Pr[ D((u), (v)) correctly outputs adjacency of u,v ] ≥
2/3 .
Adjacency sketches for trees also appeared earlier in <cit.>. As noted in <cit.>,
adjacency sketches can be derandomized (see <ref>) to obtain
implicit representations, making constant-cost randomized communication protocols a stronger type of
implicit representation.
*Unit disk graphs.
This again motivates our focus on sign-rank. Graphs whose adjacency matrices have bounded sign-rank
are among the most important types of graphs for which implicit representations are not known to
exist in general: to obtain implicit representations for geometric intersection graphs (more precisely,
semi-algebraic graphs), it suffices to study graphs of bounded sign-rank (see <cit.>). Any class of bounded sign-rank satisfies the necessary condition of factorial
speed <cit.>, which was conjectured to be sufficient in <cit.>. Until this
conjecture was refuted in <cit.> by a non-constructive argument, classes of bounded sign-rank
were considered promising candidates for a counterexample <cit.>. The best
known implicit representations for classes of bounded sign-rank in general use O(N^1-ϵ)
bits per vertex where ϵ > 0 is a constant <cit.>.
A canonical example is the unit disk graphs (UDGs). UDGs admit an “implicit representation” in
the sense that each vertex may be encoded with the coordinates of its disk in ^2. However, this
encoding requires exponentially-many bits <cit.>, and it is a central open problem whether this
difficulty can be sidestepped to obtain encodings of size O(log N); our understanding is that
this is not widely believed to be possible. In this paper, we resolve the randomized version
of the question by giving a complete characterization of the UDGs which admit constant-size
adjacency sketches. To state this result, we require the notion of stability (see <cit.>).
*Stability. The chain-index (G) of a graph G is the largest k
such that there exist disjoint sets of vertices {a_1, …, a_k} and {b_1, …, b_k}
where, for any i < j, a_i, b_j are adjacent but b_i, a_j are not. In the
terminology of <cit.>, a graph class is graph-theoretically stable if there is a
constant k such that (G) ≤ k for all G ∈; we will say simply
stable[We use stable in this paper but we note that the disambiguation
graph-theoretically stable in <cit.> is necessary to avoid confusion with stability in
the literature on model theory.].
The chain-index is essentially[Not exactly: we have no restriction on the adjacency between
a_i, b_i, which
helps the analysis but is not qualitatively important.] the largest instance of the
Greater-Than communication problem that appears in G, and therefore a class that is
not stable must have non-constant communication cost (see <cit.> for more
on the stability condition in communication).
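Since the chain-index is the central parameter in what follows, here is a brute-force Python sketch (an added illustration, not from the paper) that computes it directly from the definition on very small graphs; as sanity checks it reports 2 for the path P_4 (a fact used later for the base case of the protocol) and 3 for a small half-graph, under one common convention for half-graphs.

```python
from itertools import permutations

def chain_index(adj):
    """Brute-force chain-index: the largest k for which there are 2k distinct
    vertices a_1..a_k, b_1..b_k with a_i ~ b_j and b_i !~ a_j whenever i < j.
    Exponential time; intended only for very small graphs."""
    n = len(adj)
    best = 0
    for k in range(1, n // 2 + 1):
        found = False
        for seq in permutations(range(n), 2 * k):
            a, b = seq[:k], seq[k:]
            if all(adj[a[i]][b[j]] and not adj[b[i]][a[j]]
                   for i in range(k) for j in range(k) if i < j):
                found = True
                break
        if not found:
            break
        best = k
    return best

# The path P_4 (v0 - v1 - v2 - v3) has chain-index 2.
P4 = [[0, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [0, 0, 1, 0]]
print(chain_index(P4))   # 2

# A half-graph on 3 + 3 vertices (a_i adjacent to b_j iff i <= j) has chain-index 3.
k = 3
H = [[0] * (2 * k) for _ in range(2 * k)]
for i in range(k):
    for j in range(k):
        if i <= j:
            H[i][k + j] = H[k + j][i] = 1
print(chain_index(H))    # 3
```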
For a graph class , write
() for the function N ↦max_G (_G) where G ranges over the N-vertex graphs
in and _G is the adjacency matrix of G (if is a class of bipartite graphs, we
take the bipartite adjacency matrix). Stability is necessary for () = O(1); for UDGs and
graphs of sign-rank 3, we show it is also sufficient:
Let be either a subclass of UDGs, or a class of sign-rank at most 3. Then () = O(1) if
and only if is stable. As a consequence, stable subclasses of UDGs and graphs of sign-rank 3
admit implicit representations.
§.§ Results and Techniques
<ref> follows from a more general result that has other implications for
implicit graph representations and which unifies and generalizes a number of previous results. We
also complement it with an impossibility result that rules out using the type of randomized
techniques in this paper to prove similar results in sign-rank 5 and above. Let us now explain these
results in more detail and give a brief summary of the techniques.
*Constant-cost reductions.
We require the notion of constant-cost reductions and the Equality oracle. The
Equality communication problem is the standard example of the power of (public-coin)
randomized communication. Two players are given inputs x, y ∈ [N], respectively, and they must
decide if x = y. By random hashing, this can be done with success probability 3/4 using only 2
bits of communication. The success probability can be improved to any arbitrary constant
by increasing the number of bits by a constant factor.
One way to design a constant-cost communication protocol is to design a deterministic
communication protocol with constant cost, which has access to an oracle that computes
Equality. This means that the two players can, at any time, supply the oracle with
arbitrary values a,b and receive, at unit cost, the answer to the query “a = b?”
The power of the Equality oracle has been studied in several works
<cit.>.
One may think of these protocols as the ones that can be implemented using standard practical hash
functions like SHA256. Constant-cost protocols of this form are examples of constant-cost
reductions, a type of reduction that is natural for both constant-cost communication complexity and
implicit graph representations; we formally define constant-cost reductions in general in
<ref>. Along with the algorithmic definition of reductions to Equality,
there is an equivalent structural definition (see <cit.>): if a graph class
admits a constant-cost protocol for computing adjacency in graphs G ∈, using
Equality oracles, then there exists a constant t such that the adjacency matrix _G
of every graph G ∈ (or bipartite adjacency matrix, if is a class of bipartite graphs)
can be written as
∀ x,y : _G(x,y) = f(Q_1(x,y), Q_2(x,y), …, Q_t(x,y)) ,
where f : ^t → and each Q_i is the bipartite adjacency matrix of a bipartite
equivalence graph (disjoint union of bicliques). We write ( M ) for the minimum cost of
a 2-way deterministic protocol with Equality oracles. For computing adjacency in
monotone graph classes (closed under edge & vertex deletions), all constant-cost randomized
protocols can be put in this form <cit.>, but in general they cannot
<cit.>. <cit.> showed that () = O(1) implies that has
bounded sign-rank; our results explore the converse.
*Forbidden cycles and subdivided stars.
Our <ref> is a consequence of a more general result, <ref>
below, which also makes some progress towards characterizing the finitely-defined bipartite
graph classes for which constant-cost communication and implicit representations are possible. For
any set of bipartite graphs, a class of bipartite graphs is -free if no
graph G ∈ contains any H ∈ as an induced subgraph. Every hereditary class of
bipartite graphs is -free for some unique but possibly infinite set . For fixed ,
write _ for the -free bipartite graphs.
For a bipartite graph G=(U,W,E) with a fixed bipartition, we write G for the bipartite complement of G, i.e. G=(U,W,(U × W) ∖ E).
The condition that is stable is equivalent to the condition that it is H_k-free for
some constant k, where H_k denotes the half-graph (see <cit.>), so (_) =
O(1) requires that contain some half-graph H_k. When is finite, it is also
necessary that contain both a tree and the bipartite complement of a tree,
otherwise the number of graphs in _ is too large <cit.>. In the case || = 2, it is
therefore necessary for (_) = O(1) that = {H_k, T} where T and its bipartite
complement T are both trees; it was proved in <cit.> that this is also
sufficient. We believe these conditions remain sufficient for larger (but still
finite) sets: (_) = O(1) whenever = { H_k, T_1, T_2} for some trees T_1 and
T_2. When T_1 and T_2 are subdivided stars, our result confirms this.
For s,t ∈, we write S_s,t for the
subdivided star, which is obtained by taking the star graph with s leaves and subdividing
each edge t-1 times.
As usual, we denote by C_t the cycle on t vertices.
Our main technical result is:
theoremthmintromain
Let be a stable class of bipartite graphs that satisfies either of these conditions:
* There exist constants s, t such that is (S_s,t, S_s, t)-free (that is, both S_s,t and its bipartite complement are forbidden);
or
* There exists a constant t such that is { C_t', C_t' | t' ≥ t and t' is even}-free (that is, every even cycle C_t' with t' ≥ t and its bipartite complement are forbidden).
Then () = O(1).
We use <ref> to prove <ref> by decomposing UDGs or graphs of
sign-rank 3 into bipartite graphs that are both (S_3, 3, S_3, 3)-free and { C_t,
C_t | t ≥ 10 and t is even}-free (which, to clarify, is stronger
than necessary to apply the theorem). We remark that the implicit representation implied by
<ref> can be efficiently computed, meaning that the labels can be constructed in
time (N) and decoded in time log N. This efficiency is inherited by the implicit
representations of UDGs and graphs of sign-rank 3, provided that the encoder is given the geometric
representation of the input graph.
<ref> is much more general, and also allows us to
recover several prior results. Analogs of <ref> for the classes
of permutation graphs, interval graphs, and P_7-free and S_1,2,3-free bipartite graphs were
proved in <cit.>. All of these results, which in <cit.> each required different proof
strategies, follow as corollaries of <ref>. Likewise,
<cit.> showed the existence of implicit representations for stable, chordal bipartite
graphs, which is also implied by <ref>.
*Higher sign-ranks and weakly-sparse graphs.
To advance beyond sign-rank 3, it is helpful to compare the stability condition with the
stronger weakly-sparse condition. A class of graphs is weakly-sparse if there is
a constant t such that no graph G ∈ contains K_t,t as a subgraph. Any weakly-sparse
class is also stable. It is known and not difficult to prove that any weakly-sparse subclass of
UDGs has bounded degeneracy, and therefore the analog of <ref> for weakly-sparse
UDGs is trivial (because () = O(1) for any of bounded degeneracy). For weakly-sparse
graph classes, we present a proof in <ref> that reductions to Equality
are equivalent to bounded degeneracy:
theoremthmeqlowerbound
Let be a hereditary class of bipartite graphs that is weakly-sparse. Then () = O(1)
if and only if has bounded degeneracy.
In <cit.>, it is conjectured that the point-line incidence graphs 𝒫ℒ satisfy
(𝒫ℒ) = ω(1). <ref> shows the weaker result () =
ω(1), because point-line incidences are K_2,2-free and have unbounded degeneracy. They
also have sign-rank at most 6, which means that the Equality oracle does not suffice to
extend <ref> to sign-rank 6 and above, even if the stability condition
is replaced with the much stronger weakly-sparse condition. Combining known results in the
literature, we also give in <ref> an example (K_2,2-free point-box incidence
graphs) with sign-rank 5 that is K_2,2-free but has unbounded degeneracy, showing in fact that
the Equality oracle does not suffice to extend <ref> to sign-rank 5.
It may be the case that reductions to Equality are the only type of constant-cost
communication possible for matrices of bounded sign-rank, see <ref>.
We summarize the known results for low sign-ranks in <ref>.
*Proof overview.
We briefly summarize the proofs of <ref>. Although UDGs and
graphs of sign-rank 3 do not satisfy the conditions of <ref>, we prove that two
parties with access to an Equality oracle can agree on a graph decomposition into pieces
that avoid edge-asteroid triple structures (used in <cit.>), which
guarantees that these pieces satisfy the conditions of <ref>.
Our main tool to prove <ref> is the Gyárfás decomposition, which we take
from <cit.>. The decomposition partitions a bipartite graph into bags of vertices
with a tree-like structure on the bags that controls the edges between the bags. In particular,
every root-to-leaf path on the bags induces a path in the original graph. For this reason, the
method has previously been used (as in <cit.>) to analyze P_t-free graphs, graphs which forbid long induced paths, where the depth of the decomposition is constant.
However, in our case, the depth of the decomposition is unbounded. Instead, we show that, under the
conditions of <ref>, each bag has edges to only a bounded number of its ancestors.
Using this guarantee, we show that a communication protocol on input vertices x,y may use the
Equality oracle to either determine the adjacency, or agree on a subset of bags that
contains x and y. The protocol may then recurse on these bags, sometimes switching to the
bipartite complement of the graph when it does so (this is why we require both S_s, t
and its bipartite complement to be forbidden). Due to arguments of <cit.>, this recursion will
reduce the chain-index of the graph and is therefore guaranteed to terminate after a constant number
of iterations.
§.§ Discussion and Open Problems
*Communication complexity.
An intriguing possibility arises from this work, in conjunction with other
recent work on bounded sign-rank. Adapting (or abusing) some notation of <cit.>, write
𝖴𝖯𝖯[1] for the set of communication problems with bounded sign-rank (constant
unbounded-error communication cost <cit.>), write 𝖡𝖯𝖯[1] for the set of
communication problems with constant public-coin randomized communication cost, and write [1]
for the set of communication problems with a constant-cost reduction to Equality. With
these definitions of communication complexity classes, we can ask:
Is it the case that [1] = 𝖴𝖯𝖯[1] ∩𝖡𝖯𝖯[1]?
A positive answer to this question would “explain” all of the known results and conjectures
relating these classes. It is proved in <cit.> that [1] ⊆𝖴𝖯𝖯[1] ∩𝖡𝖯𝖯[1]. In the other direction, there are communication problems in 𝖡𝖯𝖯[1] that
do not belong to [1], which was proved independently in <cit.> and <cit.>,
but the example in both cases, the 1-Hamming Distance problem (adjacency in the
hypercube), is believed not to belong to 𝖴𝖯𝖯[1] <cit.>, which is implied by a
positive answer to <ref>. In <ref>, we give two explicit examples
(K_2,2-free point-box incidences, and point-line incidences) in 𝖴𝖯𝖯[1] that do not
belong to [1], which could possibly provide a negative answer to <ref>
if they belong to 𝖡𝖯𝖯[1], but point-line incidences are conjectured not to belong to
𝖡𝖯𝖯[1] in <cit.>. On the other hand, a negative answer to
<ref> seems to require a substantially different type of randomized protocol
than the ones which have so far been discovered[By this we mean that it seems unlikely to
us that a negative answer to the question would be achieved by a reduction to any currently-known
constant-cost problem, most of which can be found in <cit.>.], and would therefore be very
interesting.
*Implicit representations.
An obvious question is whether the stability condition in our positive result for implicit
representations can be dropped. This cannot be accomplished by reductions to Equality, for
which stability is necessary. We have shown that the Greater-Than problem is the only
barrier to constant-cost communication, so one idea for generalizing our result is to allow the more
powerful Greater-Than oracles in the communication protocol. Constant-cost reductions to
Greater-Than are equally good for the purpose of finding implicit representations (we may
think of some standard implicit representations, like for interval graphs <cit.> and point-box
incidences <cit.>, as protocols of this form). But this cannot succeed: a
constant-cost reduction to Greater-Than for graphs of sign-rank 3 would imply
() = Θ(loglog N) which contradicts the known bound of Θ(log N)
<cit.>. This answers an open
question asked in independent and concurrent work <cit.> whether (in our terminology)
reductions to Greater-Than suffice to obtain implicit representations for geometric
intersection graphs with small sign-rank realized by integer coordinates[The bounds in
<cit.> hold for constructions with integer coordinates.].
This at least demonstrates
that communication complexity lower bounds can be used against certain natural types of implicit
representation, although it remains open how to prove any explicit, non-trivial lower bounds
for implicit representations.
§ PRELIMINARIES
Let us define some notation and formalize the notions we have discussed in the introduction. We
intend this paper to be accessible to readers in graph theory or communication complexity who may
not have a background in both, so we make an attempt to make the terminology explicit. We will also
define a general notion of constant-cost reductions which has not yet appeared explicitly
in the literature.
§.§ Notation
For a matrix M ∈^X × Y, row x ∈ X, and column y ∈ Y, we will write either
M_x,y or M(x,y) for the entry at x and y.
For a graph G, we write G for the complement of G. For a bipartite graph G =
(X,Y,E) with a fixed bipartition, write G for the bipartite complement, which has edge
xy if and only if xy is not an edge of G. The adjacency matrix of a graph G = (V,E) is the
matrix _G ∈^V × V with _G(x,y) = 1 if and only if xy ∈ E. For a
bipartite graph G = (X,Y,E) with a fixed bipartition, the bipartite adjacency matrix is the
matrix _G ∈^X × Y with _G(x,y) = 1 iff xy ∈ E, where we note that
the rows are indexed by X instead of the full set of vertices X ∪ Y (and similar for the
columns).
For a graph G and disjoint sets X,Y ⊆ V(G), we will write G[X,Y] for the
semi-induced bipartite subgraph, which is the bipartite graph G[X,Y] = (X,Y,E) defined by
putting an edge between x ∈ X and y ∈ Y if and only if xy are adjacent in G. (In
particular, any edges within X or Y in G are not present in G[X,Y].)
§.§ Sign-Rank
For a matrix M ∈^N × N, the sign-rank of M is denoted _±(M) and
it is the minimum d ∈ such that there exists a matrix R ∈^N × N of rank d
with M = sign(R), where sign(R) ∈^N × N is the matrix with entries
∀ i,j ∈ [N] : sign(R)_i,j = sign(R_i,j) .
Equivalently, _±(M) is the minimum d such that each row i ∈ [N] may be associated
with a unit vector p_i ∈^d (which we think of as a point) and each column j ∈ [N] may be
associated with a unit vector h_j ∈^d (which we think of as the normal vector for a
halfspace), such that M_i,j = sign(⟨ p_i, h_j ⟩). In this way, the sign-rank of M is
equivalent to the minimum dimension d such that M is the incidence matrix between a set of
points X and a set of halfspaces Y, where the hyperplane boundaries of the halfspaces contain
the origin.
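As a concrete illustration of this geometric view (an added example, not from the paper), the Greater-Than matrix from the introduction has sign-rank 2: the rank-2 matrix R with R_i,j = i - j - 1/2 satisfies sign(R) = GT, and the same data can be read as a point-halfspace incidence in dimension 2. The Python sketch below verifies this on a small instance.

```python
import numpy as np

N = 8
# Greater-Than in the +/-1 convention: GT[i, j] = +1 iff i > j.
GT = np.where(np.subtract.outer(np.arange(N), np.arange(N)) > 0, 1, -1)

# Points p_i proportional to (i, 1) and halfspace normals h_j proportional to
# (1, -(j + 1/2)); then <p_i, h_j> has the same sign as i - j - 1/2.
points = np.array([[i, 1.0] for i in range(N)])
halfspaces = np.array([[1.0, -(j + 0.5)] for j in range(N)])
points /= np.linalg.norm(points, axis=1, keepdims=True)          # unit vectors
halfspaces /= np.linalg.norm(halfspaces, axis=1, keepdims=True)  # unit normals

R = points @ halfspaces.T
assert np.linalg.matrix_rank(R) == 2
assert np.array_equal(np.sign(R).astype(int), GT)
print("Greater-Than realized as a point-halfspace incidence in dimension 2")
```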
We require a notion of sign-rank for graphs, which we will define separately for bipartite graphs
with a fixed bipartition, and for general graphs. For a bipartite graph G = (X,Y,E) with a fixed
bipartition, its sign-rank _±(G) is defined as the sign-rank _±(_G) of its
bipartite adjacency matrix _G ∈^X × Y. For a general graph G = (V,E), we
define its partial adjacency matrix _G^* ∈{± 1, ⋆}^V × V to be
_G^*(x,y) := ⋆ if x = y
1 if xy ∈ E
-1 otherwise.
We then define the sign-rank _±(G) as the minimum rank of a matrix R such that
∀ i ≠ j : sign(R_i,j) = _G^*(i,j) .
Specifically, we do not make any requirement on the diagonal entries.
§.§ Communication Complexity and Margin
For a matrix M ∈^N × N, we will write (M) for the public-coin randomized
communication complexity of M, with success probability 2/3. In this model, Alice receives a
row x ∈ [N] and Bob receives a column y ∈ [N] and they must output M(x,y). They are given
shared access to a string of random bits, and they take turns sending messages that depend on
their respective inputs and the random string. They must output the correct answer with probability
at least 2/3 over the random string, and the complexity of a protocol is the total number of bits
communicated between the players on the worst-case inputs x,y. (M) is the minimum complexity
of any such protocol computing M.
See <cit.>.
The standard notion of a (total, Boolean-valued) communication problem is a sequence =
(P_N)_N ∈ of matrices, where P_N ∈^N × N, and the complexity of the
problem, denoted (), is the function N ↦(P_N). However, we are interested in the
complexity of classes of matrices (specifically adjacency matrices of graphs belonging to
some graph class), not merely sequences of matrices, where there is a variety of N × N
matrices instead of just one. So we define communication problems more generally, as in
<cit.>.
A communication problem is a set = ⋃_N ∈_N of Boolean matrices, where
_N is a finite set of matrices in ^N × N. We then define the communication complexity
() as the function
N ↦max_P ∈_N(P) .
For a class of graphs, we write _ for the communication problem that is the set of
adjacency matrices of graphs in . If is a class of bipartite graphs, we take the
bipartite adjacency matrices. We abuse notation and write () = (_), so that
() is the function
N ↦max{ R(_G) | G ∈ has N vertices } .
Communication complexity is always upper bounded by the number of bits n in the input, or in our
notation, by ⌈log N ⌉. We are interested in determining which communication problems
have constant cost, which means that there exists a constant c such that (M) ≤ c for
all M ∈. One way to rule out a constant-cost protocol for a problem is if the
Greater-Than communication problem appears as a subproblem of . Formally, this is
captured by the stability condition (see <cit.>):
Let be any graph class which is not stable. Then () = ω(1).
As mentioned in the introduction, having constant communication cost is equivalent to having
constant margin, due to the following inequality, which follows from results of <cit.>:
Let M ∈^N × N. Then
Ω(log1/(M)) ≤(M) ≤ O(1/(M)^2) .
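For a fixed representation, the margin is straightforward to compute. The Python sketch below (an added illustration, not from the paper) evaluates it for the 2-dimensional representation of Greater-Than from the sign-rank sketch above and shows that the margin of this particular representation decays with N. A single representation only certifies a lower bound on the margin of the matrix; that the true margin of Greater-Than also tends to 0 follows from its Θ(loglog N) randomized communication cost together with the inequality above.

```python
import numpy as np

def representation_margin(points, halfspaces):
    """Margin of one fixed point-halfspace representation:
    min over all pairs of |<p_x, h_y>| / (||p_x|| * ||h_y||).
    The margin of the matrix is the maximum of this over all representations,
    so a single representation only certifies a lower bound."""
    P = points / np.linalg.norm(points, axis=1, keepdims=True)
    H = halfspaces / np.linalg.norm(halfspaces, axis=1, keepdims=True)
    return np.abs(P @ H.T).min()

# The 2-dimensional representation of Greater-Than from the previous sketch.
for N in (4, 16, 64, 256):
    points = np.array([[i, 1.0] for i in range(N)])
    halfspaces = np.array([[1.0, -(j + 0.5)] for j in range(N)])
    print(N, representation_margin(points, halfspaces))
```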
§.§ Constant-Cost Communication Reductions and Equality
One way to obtain constant-cost protocols is by reduction to the Equality problem, for
which we require the definitions of the Equality problem and a notion of reduction.
The Equality communication problem is the set { I_N × N : N ∈} where I_N × N denotes the N × N identity matrix.
In other words, for input size N, Alice and Bob receive elements x,y ∈ [N] and wish to decide
whether x = y. It is well-known that () = 2.
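For completeness, the hashing protocol is spelled out in the Python sketch below (an added illustration, not from the paper): Alice sends two random parities of her input and Bob compares them with his own; equal inputs always agree, while unequal inputs agree with probability at most 1/4, giving success probability at least 3/4 with 2 bits of communication.

```python
import random

def equality_protocol(x, y, n_bits=32, reps=2, rng=random):
    """Public-coin Equality: Alice sends `reps` random parities of her input
    (reps bits of communication); Bob accepts iff his input gives the same parities.
    Equal inputs are always accepted; unequal inputs are accepted with
    probability at most 2 ** (-reps)."""
    xb = [(x >> i) & 1 for i in range(n_bits)]
    yb = [(y >> i) & 1 for i in range(n_bits)]
    for _ in range(reps):
        r = [rng.randint(0, 1) for _ in range(n_bits)]     # shared random string
        if sum(a * b for a, b in zip(xb, r)) % 2 != sum(a * b for a, b in zip(yb, r)) % 2:
            return False    # a parity differs: the inputs are certainly unequal
    return True             # all parities agree: "probably equal"

trials = 10_000
false_positives = sum(equality_protocol(5, 9) for _ in range(trials))
print(f"error rate on unequal inputs: {false_positives / trials:.3f} (at most 0.25)")
```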
Constant-cost communication reductions, specifically to the Equality problem, have been
used implicitly in several prior works. Here we choose to explicitly define constant-cost
reductions in general[This general definition of constant-cost reductions has arisen out
discussions with several other researchers.]. For this, we require the notion of a query set.
A query set is a set of matrices that is closed under the following operations:
* For every Q ∈ and any Q' obtained by row and column permutations of Q, Q' ∈.
* For every Q ∈, if Q' is any submatrix of Q then Q' ∈.
* For every Q ∈, if Q' is obtained by duplicating a row or a column of Q, then Q'
∈.
For a set of matrices, we define () to be the closure of under these operations.
In the communication complexity literature, () was recently named the set of blocky
matrices <cit.>. In graph theory, () are the adjacency matrices of
disjoint unions of bicliques, also called bipartite equivalence graphs. It is easily verified
that for any constant c, if () ≤ c then (()) ≤ c. However, we caution that
(()) ≤() does not hold for non-constant complexities, because ()
includes all submatrices of and (·) takes the maximum complexity over all
size-N matrices (see <cit.> for examples).
We now give two equivalent definitions for reductions between problems; one algorithmic and one
structural.
Let be a communication problem and let P ∈^N × N. A deterministic
protocol computing P with oracles is a rooted binary tree T where each leaf ℓ is
assigned a value b(ℓ) ∈ and each inner node v is assigned an N × N matrix Q_v ∈(
at the root node v of T. At each node v, if Q_v(x,y) = -1 then the protocol proceeds by
advancing the current node v to its left child, and if Q_v(x,y) = 1 then the protocol proceeds
by advancing the current node v to its right child, until v becomes a leaf, at which point the
protocol outputs b(v). It is required that b(v) = P_x,y for all inputs x,y.
The cost of the protocol is the depth of the tree. We write ^(P) for the minimum
cost of a protocol which computes P with oracles. For a communication problem , we write ^() for
the function
N ↦max_P ∈_N^(P).
In other words, a communication protocol with oracles is a deterministic protocol where
in each round, Alice and Bob transform their inputs x,y into inputs to a problem in and
receive the answer from an oracle computing at unit cost. Observe that, as long as is
non-trivial (does not contain only all-1 and all-(-1) matrices), the definition of
() allows any single round of deterministic communication to be simulated by an oracle, so
without loss of generality we may assume that every inner node of the protocol is an oracle call.
If there is a constant c such that ^() ≤ c, then we say that
constant-cost reduces (or just reduces) to . The following proposition is easily
obtained by standard error-boosting techniques:
Suppose () = O(1) and reduces to . Then () = O(1). In particular, if
reduces to then () = O(1).
The second, structural definition of reduction is as follows. We say reduces to if there
exists a constant t such that, for every A ∈, there exists:
* a function f : ^t →; and
* matrices Q_1, …, Q_t ∈(),
such that A = f(Q_1, …, Q_t), meaning that A(i,j) = f(Q_1(i,j), Q_2(i,j), …, Q_t(i,j))
for all i,j ∈ [N]. In the special case when is the set of identity matrices, this
definition appeared independently in <cit.> and subsequently in <cit.>, and
the minimum t such that the above conditions hold is a “functional” analog of rank, recently
called the functional blocky-rank in <cit.>. It is not difficult to show that this
structural definition of constant-cost reductions is equivalent to the algorithmic one. One may
easily derive a constant-cost protocol with oracles Q_i from the structural definition, and in the
other direction one may simply let the set of matrices Q_i be the inner nodes of the communication
protocol and define f as the function that simulates the protocol on these queries. In the
structural definition it is not hard to see an analog of <ref> for
implicit representations. A similar[There are some technicalities involved in translating
between the two.] notion of reductions for implicit representations appeared independently and
concurrently in <cit.>, which included reductions to Equality and
Greater-Than as parts of a complexity hierarchy of implicit representations.
Suppose is the set of adjacency matrices for a hereditary graph class that admits an implicit
representation, and suppose is the set of adjacency matrices for a hereditary graph class
. If reduces to then admits an implicit representation.
§.§ From Communication Protocols to Implicit Representations
An observation of <cit.> is that any hereditary graph class for which () =
O(1) must also have an implicit representation (and any constant-cost communication problem may be
transformed into a hereditary graph class). Therefore, as argued in <cit.>, constant-cost
communication is essentially the probabilistic version of implicit representations.
We will present our proofs as upper bounds on communication complexity, which imply implicit
representations. The general correspondence between constant-cost communication and implicit
representations is non-constructive (by the probabilistic method), but for the sake of clarity and
completeness, we briefly describe how to directly translate a communication protocol that uses
Equality oracles (as ours will do) into an implicit representation.
Recall that, for a graph G = (V,E), if (G) ≤ c then there exists a binary communication
tree of depth c with each inner node v assigned to a matrix Q_v ∈(), which means that
Q_v is the adjacency matrix of a bipartite equivalence graph. In other words, there are functions
a_v, b_v : V → [N] such that
Q_v(x,y) =
1 if a_v(x) = b_v(y)
0 otherwise.
To obtain an implicit representation, we need to define a decoder D and encodings (·)
for each graph G ∈. We define (x) for each x ∈ V by writing down the values
a_v(x), b_v(x) for each inner node v of the tree, together with the output values at the leaves
of the tree. Each value a_v(x) and b_v(x) requires at most ⌈log N ⌉ bits, and
there are at most 2^c nodes in the tree, which is constant, so the size of the encoding is O(log
N). The decoder D, on inputs (x) and (y) for x,y ∈ V, may use the values of
a_v(x) and b_v(y) for each node v, together with the outputs on the leaves, to simulate the
communication protocol.
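The following Python sketch (an added illustration; the protocol-tree layout and helper names are ours) makes this translation explicit for a toy one-query protocol: each label records the a_v and b_v values at every inner node together with the leaf outputs, and the decoder replays the protocol from the two labels alone.

```python
# Toy protocol tree: a leaf carries an output in {+1, -1}; an inner node carries
# functions a(v), b(v) into [N], its oracle answering +1 iff a(x) == b(y).
def encode(vertex, tree):
    """Label of a vertex: its a/b values at every inner node plus the leaf outputs.
    With a constant-size tree this takes O(log N) bits per vertex."""
    label = {}
    def walk(node, path):
        if "out" in node:
            label[path] = ("leaf", node["out"])
        else:
            label[path] = ("inner", node["a"](vertex), node["b"](vertex))
            walk(node["left"], path + "L")   # oracle answered -1
            walk(node["right"], path + "R")  # oracle answered +1
    walk(tree, "")
    return label

def decode(label_x, label_y):
    """Replay the protocol using only the two labels (no access to the graph)."""
    path = ""
    while label_x[path][0] == "inner":
        a_x = label_x[path][1]   # Alice's query value a_v(x)
        b_y = label_y[path][2]   # Bob's query value b_v(y)
        path += "R" if a_x == b_y else "L"
    return label_x[path][1]

# Example: a single Equality query on a precomputed identifier.
ident = {"x1": 0, "x2": 1, "y1": 0, "y2": 1}
tree = {"a": lambda v: ident[v], "b": lambda v: ident[v],
        "left": {"out": -1}, "right": {"out": +1}}
labels = {v: encode(v, tree) for v in ident}
print(decode(labels["x1"], labels["y1"]))   # +1
print(decode(labels["x1"], labels["y2"]))   # -1
```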
§ COMMUNICATION BOUNDS FOR EXCLUDED CYCLES AND SUBDIVIDED STARS
Our results for unit disk graphs and matrices of sign-rank 3 will follow from a more general result
on bipartite graphs excluding either long cycles or subdivided stars, which we prove in this
section. Recall the definition of the subdivided star S_s, t, <ref>.
*
Our main tool will be the Gyárfás decomposition, which we borrow from <cit.>, defined
below.
§.§ Gyárfás Decomposition: Definition, Existence, and Properties
The following definition of the Gyárfás decomposition is taken from <cit.>. We will only
apply the decomposition to bipartite graphs in this paper, so we state the special case of
the Gyárfás decomposition for bipartite graphs. See <ref> for an illustration.
A Gyárfás decomposition of a connected bipartite graph G is a rooted tree Y satisfying the
following properties:
* Each node of Y is a subset of V(G), called a bag, and the nodes of Y form a
partition of V(G). For each vertex v ∈ V(G), write _Y(v) for the unique bag in Y that
contains v. We will drop the subscript Y when the decomposition is clear from context.
* The root bag of Y is a singleton containing the root vertex.
* If u,v ∈ V(G) are adjacent then (u) is an ancestor of (v) or vice-versa.
* For every bag B of Y, the subgraph of G induced by B together with all of its
descendents is connected.
* For every non-root bag B of Y, there exists a vertex h(B), called the hook of
B, which belongs to the parent bag of B and has the property that h(B) is adjacent to
all vertices of B and non-adjacent to all vertices in the strict descendents of B.
For each bag B, we write (B) for the length of the path from the root bag to B in Y
(where the depth of the root bag is 0). For each ℓ∈, we say that level ℓ of
Y is the set of all bags B with (B) = ℓ. A decomposition for a disconnected
bipartite graph G is the union of decompositions for its connected components.
There is a simple algorithmic proof that such decompositions always exist <cit.>.
For every connected bipartite graph G and vertex r ∈ V(G), there exists a Gyárfás decomposition of G with root vertex r. Given G and r, this decomposition can be computed in
polynomial time.
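For illustration, the Python sketch below (ours, following the standard Gyárfás-path construction; the cited proof may differ in details) computes such a decomposition of a connected graph: the root bag is {r}, and each further bag consists of the neighbours of a chosen hook inside one connected component of the vertices not yet assigned to a bag.

```python
from collections import deque

def components(adj, vertices):
    """Connected components of the subgraph induced by `vertices`."""
    vertices, seen, comps = set(vertices), set(), []
    for s in vertices:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w in vertices and w not in seen:
                    seen.add(w); comp.add(w); queue.append(w)
        comps.append(comp)
    return comps

def gyarfas_decomposition(adj, root):
    """Bags of a Gyárfás-style decomposition of a connected graph, as a list of
    (bag, parent index, hook). Each child bag is N(hook) intersected with one
    connected component of the vertices below its parent, so the hook is adjacent
    to the whole bag and to none of the vertices left underneath it."""
    bags = [({root}, None, None)]
    stack = [(0, set(adj) - {root})]      # (bag index, unassigned region below it)
    while stack:
        parent, region = stack.pop()
        for comp in components(adj, region):
            # comp is connected to the parent bag, so a hook always exists.
            hook = next(v for v in bags[parent][0] if adj[v] & comp)
            bag = adj[hook] & comp
            bags.append((bag, parent, hook))
            stack.append((len(bags) - 1, comp - bag))
    return bags

# Example: the 6-cycle 0-1-2-3-4-5-0 (a bipartite graph).
adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
for bag, parent, hook in gyarfas_decomposition(adj, 0):
    print(sorted(bag), "parent:", parent, "hook:", hook)
```

On the 6-cycle this produces the bags {0}, {1,5}, {2}, {3}, {4} (the last three possibly in a different order), and the hook properties of the definition can be checked directly on this output.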
A path P = (v_0, v_1, v_2, …, v_k) in G is a hook path (with respect to Y) if v_i is the hook
of (v_i-1) for every i ∈ [k].
Observe that any hook path with respect to a decomposition is an induced path.
Gyárfás decompositions are typically used in the case where some induced path P_t is forbidden,
in which case the depth of the decomposition is bounded. In our case, we will not necessarily have a forbidden
P_t or bounded depth of the decomposition, but we will see that the decomposition has a different
structure that will permit efficient communication protocols. For this we define the notion of
back degree.
Let Y be a Gyárfás decomposition of G. We say that a bag B of Y has an edge to another bag
B' in Y if there exist a vertex in B and a vertex in B' that are adjacent. The
back-degree of a bag B in Y is the number of ancestor bags of B to which B has an
edge. The maximum back-degree of Y is the maximum back-degree of any of its bags.
Note that the Gyárfás decomposition of a P_t-free graph has depth at most t, and therefore the
maximum back-degree of the decomposition is also bounded by t.
In the next two sections we show that if a graph has bounded chain-index and either
* does not contain long induced cycles (<ref>), or
* does not contain a fixed subdivision of a star (<ref>),
then its decompositions have bounded maximum back-degree. In
<ref>, we give a general communication protocol for decompositions with bounded maximum back-degree.
Before proceeding <ref> and <ref>, we introduce some notation and
properties of the interactions between bags in decompositions that are used in both
sections. Let Y be a decomposition of a bipartite graph G = (X,Y,E), and let B be a
bag of Y with (B) > 0. Write h for the hook of B. Let A_1, A_2, …, A_r be some
ancestors of B, excluding the immediate parent of B, to which B has an edge. Then the
following properties are easy to verify:
Let s ∈ and suppose that (A_1) < (A_2) < … < (A_r) and
(A_i+1) - (A_i) ≥ s for all i ∈ [r-1]. For i ∈ [r], we define
h_i,1 to be the hook of A_i, and for z ∈ [s-1], inductively define h_i,z as the hook
of (h_i,z-1). For each i ∈ [r], let a_i ∈ A_i be a neighbour of some b_i ∈ B.
Then the following properties hold:
* The hook h of B is adjacent to each b_i. For each i ∈ [r] and z ≥ 1, a_i is
not adjacent to h, because they are on the same side of the bipartition of G, and h_i,z is
not adjacent to h, because h_i,z is a hook in an ancestor bag of (h) that is not the
parent of (h).
* For each i,j ∈ [r], a_i is not adjacent to a_j, because they are on the same side of the bipartition of G.
* For each 1 ≤ i < j ≤ r and each z ≥ 1, h_i,z is not adjacent to a_j, because h_i,z is a hook that is not in the parent bag of A_j.
* For each i ∈ [r] and each z ≥ 2, we have h_i,z not adjacent to a_i because h_i,z is a
hook that is not in the parent bag of (a_i).
* For each i,j ∈ [r], and z ≥ 1, h_i,z is not adjacent to b_j because h_i,z is a hook
that is not in the parent bag of B.
§.§ Excluding Long Cycles
For any t, k ∈, there exists a constant ℓ such that the following holds.
Let G = (X,Y,E) be any (C_t, C_t+1, C_t+2, …)-free bipartite graph with (G) < k.
Let Y be a decomposition of G. Then Y has maximum back-degree at most ℓ.
Without loss of generality we assume that t ≥ 4.
Let R be the Ramsey number that guarantees that a complete graph on R vertices with edges
colored by 2^(t-4) colors has a monochromatic clique of size r := max{ 2, k }.
Let ℓ = (t-3) · R and let B be a bag of Y. If B has depth at most ℓ in Y the result
holds trivially, so we will assume that B has depth greater than ℓ. Let A”_1, A”_2, …,
A”_m be the ancestors of B, excluding the immediate parent of B, to which B has an edge.
For each i ∈ [m], let a”_i ∈ A”_i be a neighbour of some b”_i ∈ B.
Assume for the sake of contradiction that B has edges to more than ℓ ancestors, so m ≥ℓ.
Then there is a subsequence of ancestor bags A'_1, …, A'_R such that
(A'_i+1) - (A'_i) ≥ t-3, for each i ∈ [R-1], so that there are at least t-4 levels of the decomposition separating each bag in this subsequence. We will write a'_i := a”_i^* and
b'_i := b”_i^*, where i^* is the index of the bag satisfying A”_i^* = A'_i.
For each A'_i, let h'_i,1 be the hook of A'_i, and for 1 < z ≤ t-3, inductively define
h'_i,z as the hook of (h'_i,z-1).
For each pair { A'_i, A'_j } with i < j we assign a color 𝖼𝗈𝗅{A'_i, A'_j}∈{0,1}^(t-4) as follows:
the z^th bit is 1 if and only if a'_i is adjacent to h'_j,z, for z ∈ [t-4].
By Ramsey's theorem, we may now choose a subsequence A_1, …, A_r of ancestor bags, where for
each i ∈ [r] there is a corresponding i^* ∈ [R] such that A_i = A'_i^*, and each
pair {A_i, A_j} with 1 ≤ i < j ≤ r has the same color. We will now obtain a contradiction for each
possibility of this color. We will write a_i := a'_i^*, b_i := b'_i^*, and
h_i,z := h'_i^*, z, for each i ∈ [r] and z ∈ [t-3], and use the notation and the properties from <ref>.
Case 1: There is z ∈ [t-4] such that the z^th bit of the color is 1.
Consider the subgraph H induced by the vertices
{a_1, a_2, …, a_r}∪{ h_1,z, h_2,z, …, h_r,z}. For 1 ≤ i < j ≤ r, we have a_i
adjacent to h_j,z due to the color, and h_i,z is not adjacent to a_j due to Property <ref>. Thus, by definition, (G) ≥(H) = r ≥ k, a contradiction.
Case 2: All bits of the color are 0. Consider the hook path P from a_2 to h_1,1.
Let v be the first (i.e. closest to a_2) vertex on P that is adjacent to a_1. Such a vertex exists because a_1 is adjacent to the last vertex h_1,1 of the path.
Let P' be the subpath of P from a_2 to v.
By Property <ref> and the color assumption, P' contains the first t-2 vertices of P:
a_2, h_2,1, h_2,2, …, h_2,t-4, h_2,t-3.
Now, if b_2 is adjacent to a_1, then b_2,P',a_1,b_2 is an induced cycle of length at least t.
Similarly, if b_1 is adjacent to a_2, then b_1, P', a_1, b_1 is such a cycle.
Finally, if neither b_2 is adjacent to a_1, nor b_1 is adjacent to a_2, then b_1 ≠ b_2 and,
by Properties <ref> and <ref>, h,b_2,P',a_1,b_1,h is a forbidden induced cycle.
§.§ Excluding Subdivisions of Stars
For any s, t, k ∈, there exists a constant ℓ such that the following holds. Let G =
(X,Y,E) be any bipartite graph with (G) < k that does not contain S_s, t as an
induced subgraph. Let Y be a decomposition of G. Then Y has maximum back-degree at
most ℓ.
Let R be the Ramsey number that guarantees that a complete graph on R vertices with edges
colored by 2^(3+(t-1)) colors has a monochromatic clique of size r := max{ s, k }.
Let ℓ = t · R + 1 and let B be a bag of Y. If B has depth at most ℓ in Y the result
holds trivially, so we will assume that B has depth greater than ℓ. Let A”_1, A”_2, …,
A”_m be the ancestors of B, excluding the immediate parent of B, to which B has an edge,
meaning that for each i ∈ [m], there exists a vertex b”_i ∈ B with an edge to a vertex a”_i
∈ A”_i.
Assume for the sake of contradiction that B has edges to more than ℓ ancestors, so m ≥ℓ.
Then there is a subsequence of ancestor bags A'_1, …, A'_R such that (A'_1) ≥ t and (A'_i+1) - (A'_i) ≥ t, for each i ∈ [R-1]; in particular there are at least t-1 levels of the Gyárfás decomposition separating each bag in this subsequence. We will write a'_i := a”_i^* and
b'_i := b”_i^*, where i^* is the index of the bag satisfying A”_i^* = A'_i.
For each A'_i, let h'_i,1 be the hook of A'_i, and for 1 < z ≤ t-1, inductively define
h'_i,z as the hook of (h'_i,z-1).
For each pair { A'_i, A'_j } with i < j we assign a color 𝖼𝗈𝗅{A'_i, A'_j}∈{0,1}^(3+(t-1)) as follows:
* The first bit indicates whether b'_i = b'_j (i.e. set the bit to 1 if b'_i = b'_j and 0
otherwise).
* The second bit indicates whether b'_i is adjacent to a'_j.
* The third bit indicates whether b'_j is adjacent to a'_i.
* The remaining t-1 bits indicate whether a'_i is adjacent to h'_j,z, for z ∈ [t-1].
By Ramsey's theorem, we may now choose a subsequence A_1, …, A_r of ancestor bags, where for
each i ∈ [r] there is a corresponding i^* ∈ [R] such that A_i = A'_i^*, and each
pair {A_i, A_j} with i < j has the same color. We will now obtain a contradiction for each
possibility of this color. We will write a_i := a'_i^*, b_i := b'_i^*, and
h_i,z := h'_i^*, z, for each i ∈ [r] and z ∈ [t-1], and use the notation and the properties from <ref>.
Case 1: There is z ∈ [t-1] such that the (3+z)^th bit of the color is 1. The argument is exactly as in Case 1 of <ref>.
Consider the subgraph H induced by the vertices
{a_1, a_2, …, a_r}∪{ h_1,z, h_2,z, …, h_r,z}. For 1 ≤ i < j ≤ r, we have a_i
adjacent to h_j,z due to the color, and h_i,z is not adjacent to a_j due to Property <ref>. Thus, by definition, (G) ≥(H) = r ≥ k, a contradiction.
Case 2: The first bit or second bit of the color is 1, and the (3+z)^th bit is 0 for
all z ∈ [t-1].
Consider the subgraph induced by the vertices
{b_1}∪⋃_i=1^s{a_i, h_i,1, h_i,2, …, h_i,t-1} .
Since each A_i is separated by at least t-1 levels of Y, each of the above named vertices are
distinct. If the first bit of the color is 1, then we have b_1 adjacent to each a_i by
definition, since b_1 = b_2 = … = b_s. If the first bit of the color is 0 but the second bit
of the color is 1, then we have b_1 adjacent to each a_i because of the color.
For each 1 ≤ i < j ≤ s and z ∈ [t-1], we have a_i not adjacent to h_j,z because
the associated bit of the color is set to 0, and we have a_j not adjacent to h_i,z by Property
<ref>. For z ≥ 2, we have a_i not adjacent to h_i,z by Property
<ref>. We have a_i not adjacent to a_j by Property
<ref>. And we have b_1 not adjacent to h_i,z by Property <ref>. But we have
b_1 adjacent to each a_i, as well as edges a_ih_i,1 and h_i,1h_i,2, h_i,2h_i,3,
…, h_i,t-2h_i,t-1 by definition. So the subgraph induced by the considered vertices is S_s, t, which is a contradiction.
Case 3: The first two bits of the color are 0, the third bit is 1, and the (3+z)^th bit
is 0 for each z ∈ [t-1]. Consider the subgraph H induced by the vertices {b_1, …, b_r}∪{ a_1, …, a_r }. For each i < j, a_i is adjacent to b_j due to the third bit of
the color, but b_i is not adjacent to a_j due to the second bit of the color. Then we have (G) ≥(H) = r ≥ k, a contradiction.
Case 4: All bits of the color are 0. Consider the subgraph induced by the vertices
{ h }∪⋃_i=1^s { b_i, a_i, h_i,1, …, h_i,t-2} .
Since each bag A_i in the sequence is separated by at least t-1 levels of Y and the first bit of the color is 0, each of the named
vertices above is distinct.
By Property <ref>, h is adjacent to none of the vertices
a_i, h_i,1, …, h_i,t-2 for every i ∈ [s].
For each i < j, we have b_i not adjacent to a_j and a_i not
adjacent to b_j due to the color. For each z ∈ [t-1], we have h_i,z not adjacent to b_i
or b_j due to Property <ref>; and we have h_i,z not adjacent to a_j due to Property
<ref> and a_i not adjacent to h_j,z due to the color. For z ≥ 2 we have
h_i,z not adjacent to a_i due to Property <ref>.
On the other hand, we have edges hb_i for each i ∈ [s] by definition, along with edges
b_ia_i, a_ih_i,1, and h_i,1h_i,2, …, h_i,t-3h_i,t-2. Therefore the induced subgraph is S_s, t, which is a contradiction.
§.§ A Communication Protocol for the Gyárfás Decomposition
Let G be a connected bipartite graph and let Y be a decomposition of G.
For any bag B of Y with depth d, let G_B denote the subgraph of G induced by
B together with all of the descendent bags of B in Y whose depth d' satisfies d' ≢ d (mod 2).
We require the next two lemmas of <cit.>.
Let B be a bag of Y with depth d ≥ 2. Then the chain-index of G_B is strictly less than the chain-index of G.
Let B be a bag of Y with depth 1. Let C be a connected component of G_B and
Y_C be a decomposition of C rooted at a vertex r_C ∈ V(C) ∩ B. Let B' be a bag
of Y_C with depth d ≥ 1. Then the chain-index of C_B' is strictly less than the chain-index of G.
We will also require the following easy fact.
Let 𝒢 be the class of bipartite graphs G with chain-index 1. Then adjacency in any G ∈ 𝒢 can be decided with 1 bit of communication and 1 oracle call.
Note that the chain-index of P_4, the 4-vertex path, is 2.
Thus each G ∈ 𝒢 is P_4-free and therefore is a disjoint union of bicliques, i.e. an equivalence graph.
Therefore Alice and Bob may compute adjacency in G by using 1 bit of communication to ensure that
their input vertices x and y are on opposite sides of the bipartition, and using 1 call to the
oracle to check if x,y are in the same biclique.
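To make this concrete, the following is a minimal Python sketch of the resulting protocol (ours, not part of the source). It assumes a preprocessing step has labelled every vertex with its side of the bipartition and an identifier of the biclique containing it, and it models the single call to the Equality oracle as a plain comparison of the two labels.

# Sketch of the 1-bit + 1-Equality-call protocol for a disjoint union of bicliques.
# Assumed preprocessing (not part of the protocol itself):
#   side[v]     -- 0 or 1, the side of the bipartition containing v
#   biclique[v] -- an identifier of the biclique containing v, or None if v is isolated

def equality_oracle(a, b):
    # Stands in for one call to the Equality oracle.
    return a == b

def adjacent(x, y, side, biclique):
    # 1 bit of communication: x and y must lie on opposite sides.
    if side[x] == side[y]:
        return False
    # 1 oracle call: x and y must lie in the same biclique.
    if biclique[x] is None or biclique[y] is None:
        return False
    return equality_oracle(biclique[x], biclique[y])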
Our first main result, <ref>, follows from the next lemma, applied together
with <ref>.
Let 𝒢 be a hereditary class of bipartite graphs that is closed under bipartite complementation, and which satisfies the following conditions:
* There exists a constant k such that every G ∈ 𝒢 has chain-index at most k.
* There exists a constant ℓ such that for any G = (X,Y,E) ∈ 𝒢, any decomposition
of G has back-degree bounded by ℓ.
Then there exists a constant c such that adjacency in any G ∈ 𝒢 can be decided with at most c bits of communication and oracle calls.
We prove the theorem by induction
on k. The base case k = 1 is established in <ref>. Let x,y ∈ V(G) be
Alice's and Bob's inputs, respectively. We may assume without loss of generality that G is
connected and that x and y are in opposite parts of the bipartition of G, since Alice and Bob
may use one oracle call to check whether their inputs x and y are in the same connected
component, and use 1 bit of communication to determine whether x and y are in opposite parts.
Let Y be a decomposition of G. The communication protocol proceeds as follows. We will
assume that the root vertex of Y is on the left side of the bipartition of G, and that Alice's
input x is on the left side and Bob's input y is on the right side of the bipartition.
* Using 1 bit of communication, Alice tells Bob whether x is the root vertex of Y. If so,
Bob outputs 1 if y has depth 1 in Y and the protocol terminates. The protocol is correct in this
case, since by <ref>, all vertices at depth 1 are adjacent to the root vertex.
* Using 1 bit of communication, Bob tells Alice whether y has depth 1 in Y. If so, they
perform the following:
* Using 1 call to the oracle, Alice and Bob decide if 𝖻𝖺𝗀(x) is a descendent of
𝖻𝖺𝗀(y), where 𝖻𝖺𝗀(v) denotes the bag of the decomposition containing v. This is possible because Alice and Bob each know the set of level 1 bags of Y.
If 𝖻𝖺𝗀(x) is not a descendent of 𝖻𝖺𝗀(y), they output 0 and the protocol terminates.
The protocol is correct in this case, since by <ref>, if x and y are adjacent
then 𝖻𝖺𝗀(x) must be a descendent of 𝖻𝖺𝗀(y) or vice versa.
* Alice and Bob now agree on B = 𝖻𝖺𝗀(y), so they each compute the connected components
C_1, …, C_m of G_B and agree on decompositions Y_1, …,
Y_m of these components, respectively, where the root vertex of each decomposition is on the
right side of the bipartition. Using 1 call to the oracle, they decide if x,y belong to the
same connected component of G_B. If not, they output 1 and terminate the protocol. The
protocol is correct in this case by definition.
* Let i be the index of the component C_i of G_B containing both x and y.
Using 1 bit of communication, Bob tells Alice whether y is the root vertex of Y_i. If
so, Alice outputs 0 if x has depth 1 in Y_i and the protocol terminates. The protocol is
correct in this case, since by <ref> all vertices of depth 1 in Y_i are adjacent
to the root vertex y in G_B (and therefore non-adjacent in G).
* By the assumption of bounded back-degree, 𝖻𝖺𝗀_Y_i(x) has edges
to at most ℓ of its ancestors in Y_i. Call these ancestors A_1, …, A_ℓ' where
ℓ' ≤ ℓ. Using ℓ calls to the oracle, Alice and Bob determine whether A_j =
B' for some j ≤ ℓ', where B' ≔ 𝖻𝖺𝗀_Y_i(y).
* If A_j = B' then Alice and Bob inductively compute adjacency in the graph
(C_i)_B', which is the bipartite complement of a graph in and therefore is
contained in , and which by <ref> satisfies ((C_i)_B') <
(G). They then output the opposite value and terminate the
protocol. The protocol is correct in this case since, by induction, they will compute
adjacency of x,y in (C_i)_B', which is an induced subgraph of G, so x,y have the
opposite adjacency as in G.
* If A_j ≠ B' for all j, the protocol proceeds as below.
* Similar to step <ref>, 𝖻𝖺𝗀_Y_i(y) has edges to at most ℓ
of its ancestors in Y_i. Call these ancestors A_1, …, A_ℓ' where ℓ' ≤ ℓ.
Using ℓ calls to the oracle, Alice and Bob determine whether A_j =
B' for some j ≤ ℓ', where B' ≔ 𝖻𝖺𝗀_Y_i(x).
* If A_j = B' then Alice and Bob inductively compute adjacency in the graph
(C_i)_B', which again is contained in and by <ref> satisfies
((C_i)_B') < (G). They then output the opposite value and
terminate the protocol. The protocol is correct in this case since, by induction, they will
compute adjacency of x,y in (C_i)_B', which is an induced subgraph of G, so
x,y have the opposite adjacency as in G.
* If A_j ≠ B' for all j, then Alice and Bob output 1 and the protocol terminates.
The protocol is correct in this case because x,y are adjacent in G if and only if they are
non-adjacent in C_i. By <ref>, if they are adjacent in C_i then either
𝖻𝖺𝗀_Y_i(x) is an ancestor of 𝖻𝖺𝗀_Y_i(y) or vice versa. From step
<ref>, we know that if 𝖻𝖺𝗀_Y_i(y) is an ancestor of
𝖻𝖺𝗀_Y_i(x), then 𝖻𝖺𝗀_Y_i(x) has no edges to 𝖻𝖺𝗀_Y_i(y), so x,y are
non-adjacent in C_i and therefore adjacent in G. From the current step, we know
similarly that if 𝖻𝖺𝗀_Y_i(x) is an ancestor of 𝖻𝖺𝗀_Y_i(y) then x,y are again
non-adjacent in C_i and therefore adjacent in G.
* Now guaranteed that x and y are each in bags at depth 2 or higher, Alice and Bob proceed
similarly as in steps <ref> and <ref>, with the
following differences. Here, Y is used instead of Y_i. In step
<ref>, the protocol outputs 0 instead of 1, because they are operating
on the graph G itself instead of an induced subgraph of the bipartite complement G. When applying the inductive
hypothesis, we use <ref> instead of <ref>, and the players do
not flip the output of the protocol applied to G_B'. Correctness again follows by induction.
This concludes the proof.
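To make the induction quantitative, here is a rough accounting on our part (not spelled out above): along any single execution of the protocol at most one recursive call is made, and before recursing the players spend at most a constant number of bits plus roughly 4ℓ oracle calls (two calls for the connectivity and ancestry checks, and at most 2ℓ calls in each of the two rounds of back-edge checks). Writing c(k) for the cost achievable on classes satisfying the hypotheses with chain-index at most k, this gives approximately
c(k) ≤ c(k-1) + 4ℓ + O(1), with c(1) ≤ 2,
and hence c(k) = O(kℓ), a constant depending only on k and ℓ.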
§ APPLICATION TO SIGN-RANK 3 AND UNIT DISK GRAPHS
We now prove our results <ref> for graphs of sign-rank 3 and
unit disk graphs. This will require the notion of edge-asteroid triples (see e.g.
<cit.>).
A set of three edges in a graph is called an edge-asteroid triple if for each pair of the edges, there is a
path containing both of the edges that avoids the neighbourhoods of the end-vertices of the third edge (see <ref> for an illustration).
We say that a graph class 𝒢 is edge-asteroid-triple-free if no G ∈ 𝒢 contains an
edge-asteroid triple.
Since S_3, 3 and C_t for t ≥ 10 contain edge-asteroid triples, we make the following
simple observation:
Let G be any bipartite graph that is edge-asteroid-triple-free. Then G is both S_3, 3-free,
and C_t-free for all t ≥ 10.
This observation will allow us to apply our <ref>, but it requires that our graphs
G and their complements both be edge-asteroid-triple-free. Unit disk graphs and graphs of
sign-rank 3 are not necessarily edge-asteroid-triple-free. But we show that we can decompose these
graphs into pieces which satisfy the necessary conditions.
§.§ Sign-Rank 3
To apply <ref> to graphs of sign-rank 3, we will decompose these graphs into
pieces which are edge-asteroid-triple-free. We achieve this by interpreting graphs of sign-rank 3 as
point-halfspace incidences and projecting down into dimension 2.
Let P be a set of points in ^d and H be a set of halfspaces in ^d.
The incidence graph of P and H is the bipartite graph
G(P, H) = ( P, H, { ph | p ∈ P, h ∈ H, and p ∈ h }).
A bipartite graph G is a point-halfspace incidence graph in ^d if it can be
represented as an incidence graph in ^d; more specifically, if there exist a set P of points
and a set H of halfspaces both in ^d such that G is isomorphic to G(P, H). If d=2 we
call the graph a point-halfplane incidence graph.
If there exists such a representation of G, where in addition all pairwise dot products of the
norm vectors of the hyperplanes defining the halfspaces in H are non-negative, then G is called
a positive point-halfspace incidence graph in ^d.
Any bipartite graph G = (U,W,E) of sign-rank d admits a partition U = U_1 ∪ U_2 such that
G[U_1,W] and G[U_2,W] are point-halfspace incidence graphs in ^d-1.
Let a : U →^d, b : W →^d be such that
u ∈ U, w ∈ W are adjacent if and only if ⟨ a(u), b(w) ⟩≥ 0.
We assume, without loss of generality, that a(u)_d ≠ 0 for every u ∈ U, and partition U into
U_1 = { u ∈ U | a(u)_d > 0 } and U_2 = { u ∈ U | a(u)_d < 0 }
We define a' : U →^d-1 and b' : W →^d-1 as
a'(u) = 1/|a(u)_d|(a(u)_1, a(u)_2, …, a(u)_d-1) ,
b'(w) =(b(w)_1, b(w)_2, …, b(w)_d-1).
Further, we define the following sets of points and halfspaces in ^d-1:
P_i = { p_u = a'(u) | u ∈ U_i }, i = 1,2,
H_1 = { h_w | w ∈ W, h_w = { x ∈^d-1 | ⟨ x, b'(w) ⟩≥ -b(w)_d }},
H_2 = { h_w' | w ∈ W, h_w' = { x ∈^d-1 | ⟨ x, b'(w) ⟩≥ b(w)_d }}.
Finally, we define two point-halfspace incidence graphs G_1 = (U_1, W, E_1) and G_2 = (U_2, W, E_2), where
E_1 = { uw | u ∈ U_1, w ∈ W, p_u ∈ h_w } and E_2 = { uw | u ∈ U_2, w ∈ W, p_u ∈ h_w' }.
We claim that G = G_1 ∪ G_2 = (U_1 ∪ U_2, W, E_1 ∪ E_2). Indeed, for any u ∈ U and w ∈ W,
uw ∈ E
⟺ ⟨ a(u), b(w) ⟩ ≥ 0
⟺ ⟨ a'(u), b'(w) ⟩ + sign(a(u)_d) · b(w)_d ≥ 0
⟺ ⟨ a'(u), b'(w) ⟩ ≥ - sign(a(u)_d) · b(w)_d.
Hence, if u ∈ U_1, then uw ∈ E ⟺ ⟨ a'(u), b'(w) ⟩ ≥ - b(w)_d ⟺ p_u ∈ h_w ⟺ uw ∈ E_1; and
if u ∈ U_2, then uw ∈ E ⟺ ⟨ a'(u), b'(w) ⟩ ≥ b(w)_d ⟺ p_u ∈ h_w' ⟺ uw ∈ E_2.
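The construction in this proof is easy to sanity-check numerically. The following NumPy sketch (ours, not from the source) draws a random sign-rank-d representation, builds the projected points and halfspaces exactly as above, and verifies that the two incidence graphs together reproduce the original adjacency; variable names follow the proof.

import numpy as np

rng = np.random.default_rng(0)
d, nU, nW = 4, 30, 30
a = rng.normal(size=(nU, d))               # a(u) for u in U
b = rng.normal(size=(nW, d))               # b(w) for w in W
a = a[np.abs(a[:, -1]) > 1e-6]             # ensure a(u)_d != 0

adj = (a @ b.T) >= 0                       # original adjacency: <a(u), b(w)> >= 0

a_prime = a[:, :-1] / np.abs(a[:, [-1]])   # a'(u)
b_prime = b[:, :-1]                        # b'(w)

recovered = np.empty_like(adj)
for i, au in enumerate(a):
    if au[-1] > 0:   # u in U_1: p_u in h_w   iff  <a'(u), b'(w)> >= -b(w)_d
        recovered[i] = a_prime[i] @ b_prime.T >= -b[:, -1]
    else:            # u in U_2: p_u in h'_w  iff  <a'(u), b'(w)> >=  b(w)_d
        recovered[i] = a_prime[i] @ b_prime.T >= b[:, -1]

assert np.array_equal(adj, recovered)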
Any point-halfspace incidence graph G = G(P,H) in ^d admits a partition H =
⋃_i=1^2^d H_i such that each G(P,H_i) is a positive point-halfspace incidence graph.
For h ∈ H, let w_h ∈^d and t_h ∈ be such that h = { x ∈^d | ⟨ w_h, x ⟩≤ t_h }.
We partition H into 2^d subsets H_α, α∈{ -1, +1 }^d, with respect to the sign patterns of the norm vectors.
More specifically, h ∈ H_α if and only if, for every i ∈ [d], (w_h)_i ≥ 0 ⟺ α_i = +1.
Clearly, for any α∈{-1,+1}^d and any h, h' ∈ H_α, we have that ⟨ w_h, w_h'⟩≥ 0,
i.e. G(P, H_α) is a positive point-halfplane incidence graph.
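As a small illustration (ours, not from the source), the sign-pattern partition can be implemented in a few lines: each halfspace is represented by its norm vector, halfspaces are grouped by the coordinatewise signs of that vector, and within a group all pairwise dot products of norm vectors are automatically non-negative.

import numpy as np
from collections import defaultdict

def partition_by_sign_pattern(normals):
    # normals: array of shape (m, d). Returns a dict: sign pattern -> list of indices.
    groups = defaultdict(list)
    for idx, w in enumerate(normals):
        pattern = tuple(+1 if c >= 0 else -1 for c in w)
        groups[pattern].append(idx)
    return groups

normals = np.random.default_rng(1).normal(size=(100, 3))
for idxs in partition_by_sign_pattern(normals).values():
    W = normals[idxs]
    assert (W @ W.T >= 0).all()   # same sign pattern => non-negative dot products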
From <ref> and <ref> we obtain the following immediate
Any bipartite graph G = (U,W,E) of sign-rank d admits a partition U = ⋃_i=1^2^d U_i
such that each G[U_i,W] is a positive point-halfspace incidence graph in ^d-1.
We now prove that positive point-halfplane incidence graphs are edge-asteroid-triple free.
Every positive point-halfplane incidence graph G is edge-asteroid-triple-free.
Let G be a positive point-halfplane incidence graph and let P and H be sets of points and halfplanes respectively
whose incidence graph is isomorphic to G. For a point p ∈ P we denote by x_p and y_p its
coordinates respectively; for a halfplane h ∈ H we denote by a_h, b_h, t_h the coefficients of the halfplane inequality, i.e.
h = { (x,y) ∈^2 | a_h x + b_h y ≤ t_h }.
Without loss of generality, we can assume that no point in P lies on the boundary of any h ∈
H.
Since G is positive, using translation and rotation, we can further assume that for every h ∈
H both a_h and b_h are non-negative. This latter assumption implies the following useful claim which is straightforward to verify.
Claim 1. For every h ∈ H it holds that if (x,y) ∈ h, then (x',y') ∈ h for every x' ≤ x and y' ≤ y.
Suppose now, towards a contradiction, that G contains an edge-asteroid triple, and let p_1h_1,
p_2h_2,p_3h_3 be its edges, where p_i ∈ P, h_i ∈ H. Note that the points p_1,p_2,p_3 are
pairwise incomparable with respect to the coordinatewise order. Indeed, if for example x_p_1≤
x_p_2 and y_p_1≤ y_p_2, then by Claim 1 we would have p_1 ∈ h_2, i.e. p_1 and
h_2 would be adjacent in G, which would contradict the assumption that the three edges form an
edge-asteroid triple.
Thus, without loss of generality, we assume that x_p_1≤ x_p_2≤ x_p_3 and y_p_1≥ y_p_2≥ y_p_3.
Let Q=(q_1,f_1,q_2,f_2,q_3, …, f_k-1,q_k), q_i ∈ P, f_i ∈ H, q_1 = p_1 and q_k = p_3, be a path containing p_1 and p_3 that avoids the neighbourhoods of both p_2 and h_2.
Since x_q_1≤ x_p_2≤ x_q_k, there exists s ∈ [k-1] such that x_q_s≤ x_p_2≤ x_q_s+1.
As above, using Claim 1, we can conclude q_s is incomparable with p_2, as otherwise
f_s would be adjacent to p_2 or h_2 would be adjacent to q_s, contradicting the choice of Q. Similarly, q_s+1 is incomparable with p_2. Hence, we have that y_q_s≥ y_p_2≥ y_q_s+1.
Let now h̄_2 be the closure of the complement of h_2, i.e.
h̄_2 = { (x,y) ∈ ℝ^2 | a_h_2 x + b_h_2 y ≥ t_h_2}, and let
A_-+ = { (x,y) ∈ h̄_2 | x ≤ x_p_2, y ≥ y_p_2},
A_+- = { (x,y) ∈ h̄_2 | x ≥ x_p_2, y ≤ y_p_2}.
Note that q_s ∈ A_-+ and q_s+1 ∈ A_+-. Hence the segment connecting q_s and q_s+1 intersects the line x = x_p_2 that separates the two sets. Let (x_p_2, y^*) ∈ ℝ^2 be the point of intersection.
Since both q_s and q_s+1 are in h̄_2, so is (x_p_2, y^*) (by convexity of h̄_2), which together with Claim 1 implies that y_p_2 ≤ y^*.
Similarly, (x_p_2, y^*) is in f_s because both q_s and q_s+1 are in f_s.
Consequently, by Claim 1, p_2 is also contained in f_s. This contradiction completes the proof.
Let G(P,H) be a positive point-halfplane incidence graph.
Then the bipartite complement of G(P,H) is also a positive point-halfplane incidence graph.
Without loss of generality, we can assume that no point in P lies on the boundary of any h ∈
H.
For every point p = (x_p,y_p) ∈ P we define p' := (-x_p,-y_p), and for every
h = { (x,y) ∈^2 | a_h x + b_h y < t_h }∈ H we define h' = { (x,y) ∈^2 | a_h x + b_h y < -t_h }.
Let P' = { p' | p ∈ P } and H' = { h' | h ∈ H }.
We claim that G(P',H') is the bipartite complement of G(P,H).
Indeed, for any p ∈ P and h ∈ H we have that
p ∈ h
⟺ a_h x_p + b_h y_p < t_h
⟺ a_h (-x_p) + b_h (-y_p) > -t_h
⟺ p' ∉ h'.
Finally, notice that the norm vector of the hyperplane defining a halfspace h ∈ H is the same as the norm vector of the hyperplane defining h' ∈ H'. Hence, G(P',H') is a positive point-halfplane incidence graph.
Let G = (X,Y,E) be a bipartite graph of sign-rank 3. Then there exists a partition Y =
⋃_i=1^2^3 Y_i such that each G[X,Y_i] is both (S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10.
We claim that the partition Y = ⋃_i=1^2^3 Y_i given by <ref> is a desired one.
Indeed, by <ref> and <ref>, we conclude that
for each i ∈ [2^3], the graph G_i ≔ G[X,Y_i] and its bipartite complement are both positive point-halfplane incidence graphs. Hence, the lemma follows from <ref> and <ref>.
Let 𝒢 be a graph class with sign-rank at most 3. Then the adjacency problem for 𝒢 has constant communication cost if and only if 𝒢 is stable.
It suffices to prove the constant upper bound when 𝒢 is stable, due to <ref>. On a graph G = (X,Y,E) and inputs x ∈ X, y ∈ Y, the players
compute the decomposition Y = ⋃_i=1^8 Y_i given by
<ref> and use 3 bits of communication to agree on the value i
such that y ∈ Y_i. Then they compute adjacency in G[X,Y_i] by applying the protocol in
<ref>.
§.§ Unit Disk Graphs
In this section we prove our result for unit disk graphs. A graph G is unit disk if there
exists a mapping ϕ : V(G) → ℝ^2 such that xy ∈ E(G) if and only if
‖ϕ(x)-ϕ(y)‖_2 < 2. The mapping ϕ is called a realisation of G. Note that
the constant 2 may be replaced with any other constant. We start by observing that unit disk graphs
have sign-rank at most 4.
Any unit disk graph G has sign-rank at most 4.
Let v ↦ (x_v, y_v) ∈ ℝ^2 for v ∈ V(G) be a realisation of G, such that for any two
distinct vertices a,b ∈ V, ab ∈ E(G) if and only if
(x_a - x_b)^2 + (y_a - y_b)^2 < √(2),
or equivalently
x_a^2 - 2x_a x_b + x_b^2 + y_a^2 - 2y_a y_b + y_b^2 - √(2) < 0.
Then, by defining
σ : v ↦ (-1, 2x_v, 2y_v, -x_v^2 - y_v^2) ∈ ℝ^4,
ψ : v ↦ (x_v^2 + y_v^2 - √(2), x_v, y_v, 1) ∈ ℝ^4
for v ∈ V(G), we see that for any distinct a,b ∈ V(G)
⟨σ(a), ψ(b)⟩ > 0 ⟺ (x_a - x_b)^2 + (y_a - y_b)^2 < √(2),
so the sign of ⟨σ(a), ψ(b)⟩ is positive if and only if ab ∈ E, as desired.
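The algebra above can be checked mechanically: expanding the inner product shows that ⟨σ(a), ψ(b)⟩ = √(2) - ((x_a - x_b)^2 + (y_a - y_b)^2). A small numerical sketch (ours) confirming this identity:

import numpy as np

rng = np.random.default_rng(2)
pts = rng.uniform(-3, 3, size=(20, 2))

def sigma(v):
    x, y = v
    return np.array([-1.0, 2 * x, 2 * y, -(x**2 + y**2)])

def psi(v):
    x, y = v
    return np.array([x**2 + y**2 - np.sqrt(2), x, y, 1.0])

for a in pts:
    for b in pts:
        lhs = sigma(a) @ psi(b)
        rhs = np.sqrt(2) - np.sum((a - b) ** 2)
        assert abs(lhs - rhs) < 1e-9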
The main tool for our application to unit disk graphs is the following lemma of <cit.>. A
graph G is co-bipartite if its complement G is bipartite.
Let G be a co-bipartite unit disk graph. Then the bipartite graph
G and its bipartite complement do not contain any edge-asteroid triples. In particular,
due to <ref>, G is both
(S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10.
Our upper bound on the communication complexity of stable unit disk graphs will follow from a fairly
straightforward decomposition of a unit disk graph into unit-length grid cells, such that between
any two grid cells the graph is co-bipartite.
Let 𝒢 be a subclass of unit disk graphs. Then the adjacency problem for 𝒢 has constant communication cost if and only if 𝒢 is stable.
It suffices to show the constant upper bound when 𝒢 is stable, due to <ref>. Since 𝒢 is stable, there exists a constant k such that every G ∈ 𝒢 has chain-index less than k. Fix any G ∈ 𝒢 together with its realisation ϕ : V(G) → ℝ^2.
For convenience, we will identify the vertices x ∈ V(G) with the corresponding points ϕ(x) ∈ ℝ^2. On inputs x,y ∈ V(G), Alice and Bob will perform the following protocol.
* Alice and Bob each partition ℝ^2 into a grid with cells C_i,j for i,j ∈ ℤ, where
C_i,j ≔ { (z_1,z_2) ∈ ℝ^2 : i ≤ z_1 < i+1, j ≤ z_2 < j+1 }. Observe that if x,y are
adjacent, then if x ∈ C_i,j we must have y ∈ C_i+a, j+b for some a,b ∈{-2,-1,0,1,2};
and if x,y ∈ C_i,j then ‖x-y‖_2 < √(2) < 2, so x,y are adjacent. Let i_x, j_x ∈ ℤ
be such that x ∈ C_i_x, j_x and let i_y, j_y ∈ ℤ be such that y ∈ C_i_y, j_y.
* Using 1 call to the oracle, Alice and Bob check if (i_x, j_x) = (i_y,j_y). If so, they
output 1 and the protocol terminates. In this case, the protocol is correct due to the observation
above.
* For each (a,b), (a',b') ∈{-2,-1,0,1,2}^2 such that (a,b) ≠ (0,0) and (a',b') ≠
(0,0), Alice and Bob use 2 calls to the oracle to check if
both (i_x+a,j_x+b) = (i_y,j_y). If so, then Alice and Bob compute adjacency in the semi-induced
bipartite graph G[X,Y] where X C_i_x,j_x∩ V(G) and Y C_i_y,j_y∩
V(G). This is possible because:
* Alice and Bob each know X and Y: Alice knows (i_x+a,j_x+b) = (i_y,j_y) and Bob knows
(i_y+a',j_y+b')=(i_x,j_x); and
* The graph G[X,Y] has chain-index at most that of G, which is less than k, and it is
(S_3, 3, S_3, 3)-free, and (C_t, C_t)-free for all t ≥ 10, by <ref>, so we may apply <ref>.
* If (i_x + a,j_x + b) ≠ (i_y,j_y) for all (a,b) ∈{-2,-1,0,1,2}^2, then Alice and Bob
output 0. In this case the protocol is correct by the observation in step <ref>.
This concludes the proof.
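The geometric part of this protocol is straightforward to implement. Below is a small sketch (ours, not from the source) of the cell bookkeeping: assigning points to unit grid cells, the same-cell shortcut, and the 5 x 5 neighbourhood test; the communication and oracle steps, and the bipartite protocol applied between two cells, are abstracted away.

import math

def cell(p):
    # Unit grid cell containing the point p = (z1, z2).
    return (math.floor(p[0]), math.floor(p[1]))

def same_cell(cx, cy):
    # Points in the same unit cell are at distance < sqrt(2) < 2, hence adjacent.
    return cx == cy

def possibly_adjacent_cells(cx, cy):
    # Points whose cells differ by more than 2 in either coordinate are at
    # distance >= 2 and therefore cannot be adjacent in this realisation.
    return abs(cx[0] - cy[0]) <= 2 and abs(cx[1] - cy[1]) <= 2

x, y = (0.3, 0.7), (2.9, 1.1)
cx, cy = cell(x), cell(y)
if same_cell(cx, cy):
    print("adjacent")
elif possibly_adjacent_cells(cx, cy):
    print("run the bipartite protocol between the two cells")
else:
    print("not adjacent")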
§ THE SIGN-RANK HIERARCHY
We have now determined exactly the conditions required for graphs of sign-rank 3, and some graphs of
sign-rank 4, to have constant randomized communication cost (equivalently, constant margin). Let us
now consider sign-ranks 5 and above. We will see that our techniques for the lower sign-ranks,
specifically the reduction to Equality, will surely fail. This witnesses a certain
threshold between sign-ranks 3 and 5.
It is common to study the bipartite graphs which are K_t,t-free, for some constant t. If a
class of graphs is K_t,t-free it is called weakly-sparse. This is a much stronger
condition than stability: if G is K_t,t-free then its chain-index is at most 2t.
A hereditary graph class 𝒢 has bounded degeneracy (equivalently, bounded
arboricity) if there exists a constant d such that every G ∈ 𝒢 has a vertex of degree at
most d (equivalently, if there exists a constant a such that every G ∈ 𝒢 on N vertices
has at most a · N edges). The following is well-known and easy to prove (see <cit.>).
If a hereditary graph class 𝒢 has bounded degeneracy then the adjacency problem for 𝒢 has constant communication cost.
Recent results of <cit.> show that, for any constant t, the K_t,t-free point-halfspace
incidence graphs in dimension 3 have bounded degeneracy. Since any graph of sign-rank 4 can be
written as a union of two point-halfspace incidence graphs in dimension 3 (see
<ref>), we obtain the following:
Let 𝒢 be a hereditary graph class that is weakly-sparse and has sign-rank at most 4. Then the adjacency problem for 𝒢 has constant communication cost.
Our <ref> and <ref> are strengthenings of this theorem for the
special cases of sign-rank 3 and unit disk graphs, where we replace the weakly-sparse
condition with the much less restrictive stability condition. The proof of <cit.> uses a
technique based on “shallow cuttings”. In <ref>, we give a simpler proof,
using elementary geometry, of the weaker statement that K_2,t-free point-halfspace incidence
graphs in dimension 3 have bounded degeneracy.
However, it is not possible to extend these results even to sign-rank 5. To see this, we first
require a theorem which shows that, for weakly-sparse classes, bounded degeneracy is equivalent to
the existence of a reduction to Equality.
The proof generalizes a theorem of <cit.> and is due to Bonamy, Esperet, & Girão, which
we include here with permission and gratitude.
This theorem follows from the next two lemmas. The first lemma is implicit in <cit.>.
Let 𝒢 be a class of bipartite graphs satisfying the following Ramsey property: for any k, ℓ ∈ ℕ there exists a graph G ∈ 𝒢 such that, for any coloring of the edges of G with at
most k colors, there exists a monochromatic induced path on ℓ vertices. Then 𝒢 does not admit a constant-cost reduction to Equality.
This lemma was used in <cit.> in conjunction with a result of <cit.> that established
the required Ramsey property for induced subgraphs of hypercubes, which are K_2,3-free, to show
that hypercubes do not have constant-cost reductions to Equality. The next lemma
generalizes this result.
For any k, t, ℓ ∈ ℕ, there is an integer d such that, if a K_t,t-free bipartite graph
G has average degree at least d, and its edges are colored with at most k colors, then G
contains a monochromatic induced path of length at least ℓ.
We first reduce to the case t=2. Suppose t > 2 and let d ≥ t. A result of <cit.>
shows that there is a constant d' such that any K_t,t-free bipartite graph of average degree
at least d' contains a K_2,2-free induced subgraph of average degree at least d. Therefore
it suffices to consider the case t=2.
Choose b > ℓ and set d = 2kb. Consider a K_2,2-free graph G, whose edges are colored
with at most k colors. Then, if the average degree of G is at least d, there exists a color c
∈ [k] such that the graph G_c induced by the edges with color c has average degree at least
2b. Then G_c has an induced subgraph G'_c with minimum degree at least b.
We now construct a monochromatic induced path in G by induction as follows. The base case, a
monochromatic path on 2 vertices, is trivial. Suppose we have obtained an induced path P_s-1 = {
v_1, …, v_s-1}, for s-1 < ℓ where each (v_i, v_i+1) is an edge of G'_c. Let
N'_c(v_s-1) be the neighbors of v_s-1 in G'_c and suppose for contradiction that all
vertices u ∈ N'_c(v_s-1) ∖ P_s-1 are adjacent in G to some v_i with i < s-1.
Since v_s-1 has at least b > ℓ > s-1 neighbors, there are two vertices u,w ∈ N'_c(v_s-1)
∖ P_s-1 that are adjacent in G to both v_s-1 and v_i for some i < s-1. But then
{v_i, v_s-1, u, w} form an induced K_2,2, which is a contradiction. Therefore there exists
a vertex v_s ∈ N'_c(v_s-1) ∖ P_s-1 which produces a monochromatic induced path P_s
= { v_1, …, v_s }. This concludes the proof.
§.§ Sign-Rank 5: Point-Box Incidences
To show that our techniques cannot extend to sign-rank 5, even if we ask for the much stronger
K_2,2-free condition instead of stability, it now suffices to show that there exists a
weakly-sparse class of bipartite graphs with sign-rank 5 and unbounded degeneracy. For this we use
the point-box incidence graphs.
Let P be a set of points in ^2 and H a set of axis-aligned rectangles in ^2. The
incidence graph of P and H is the bipartite graph
G(P, H) (P, H, { ph | p ∈ P, h ∈ H, p ∈ h }) .
The fact that these graphs have sign-rank 5 follows from a transformation of point-box incidences in
dimension 2 to point-halfspace incidences in dimesion 4, which appears in <cit.>. The sign-rank
of point-halfspace incidences in ^4 is at most 5.
The class of point-box incidence graphs has sign-rank at most 5.
What remains is the claim that weakly-sparse point-box incidence graphs on N vertices can have
ω(N) edges. This is true even under the strongest condition of being K_2,2-free. The
lower bound of the next lemma was proved recently in <cit.>, and the upper bound in
<cit.>. We remark that the lemma remains true even if the boxes are restricted to be
dyadic, the product of intervals of the form [s2^t, (s+1)2^t) with integers s,t.
The maximum number of edges in a K_2,2-free point-box incidence graph is Θ( n ·log n/loglog n). As a consequence, K_2,2-free point-box incidence graphs have
unbounded degeneracy.
Combining <ref>, we get:
There is a hereditary class of K_2,2-free bipartite graphs with sign-rank 5 that admits no constant-cost reduction to Equality.
§.§ Sign-Rank 6: Point-Line Incidences
The above result shows that reductions to Equality cannot be used to prove constant-cost communication bounds
in general, even for weakly-sparse classes, let alone stable ones. This leaves open the possibility
that there is another method for obtaining constant-cost randomized communication protocols for
weakly-sparse or even stable graph classes with sign-rank 5, 6, or any constant. However, we discuss
here a recent conjecture of <cit.> regarding point-line incidences suggesting that weakly-sparse graphs of sign-rank 6 have non-constant communication complexity.
Let P be a set of points in ^2 and L be a set of lines in ^2. The incidence
graph of P and L is the bipartite graph
G(P, L) (P, L, { ph | p ∈ P, ℓ∈ L, p ∈ℓ}) .
Point-line incidence graphs are K_2,2-free by definition, and it is well-known that the
incidence graph between N points and N lines can have Θ(N^4/3) edges; therefore,
<ref> guarantees that they do not reduce to Equality.
Furthermore, it is known that point-line incidence graphs are point-halfspace incidence graphs in ^5 (see e.g. <cit.>), and hence they have sign-rank at most 6:
Point-line incidence graphs have sign-rank at most 6.
The communication complexity of point-line incidence graphs was recently studied in <cit.>,
but it remains unknown whether they have constant-cost. It was conjectured that they do not:
The class of point-line incidence graphs has non-constant communication complexity.
Acknowledgments
We are grateful to Marthe Bonamy, Louis Esperet, and Antonio Girão for their proof of
<ref>, and to Louis Esperet for communicating this proof to us and allowing us
to include it here. We thank Lianna Hambardzumyan, Pooya Hatami, and Sebastian Wild for several
conversations on the topic of this paper. The general definition of constant-cost reductions given
in this paper has arisen partly out of collaboration with Yuting Fang, Lianna Hambardzumyan, and
Pooya Hatami. We thank Mónika Csikós for telling us about <ref>.
§ ON THE NUMBER OF EDGES IN WEAKLY-SPARSE POINT-HALFSPACE INCIDENCE GRAPHS
In this section we show that K_2,s-free point-halfspace incidence graphs in
dimensions 1,2, and 3 have linearly many edges.
The same result was recently obtained by Chan and Har-Peled in <cit.> for more
general classes of K_s,s-free graphs.
We present our results for two reasons. First, the proof technique is completely different and might be of independent interest. Second, our bounds are more specific for the considered cases.
To prove our upper bounds, we will show that every graph in a class has a vertex of bounded degree.
Since the classes are hereditary, this will imply linear bounds on the number of edges.
§.§ On the line
In this section we show that K_s,s-free point-halfline incidence graphs on the real line have a linear number of edges.
In fact, we show a linear bound on the number of edges for the more general class of K_s,s-free point-interval incidence graphs.
For this latter class, <cit.> shows that an n-vertex K_s,s-free point-interval incidence graph with n_p points and n_i intervals contains at most s(n_p+3n_i) edges.
Our bound of (s-1)n = (s-1)(n_p+n_i) is a slight improvement over the bound from <cit.>.
Let G be a K_s,s-free n-vertex point-interval incidence graph. Then G has at most (s-1)n edges.
Let P be a set of points and I be a set of intervals on the real line such that
G ≃ G(P, I).
To prove the statement we will show that G has a vertex of degree at most s-1. Suppose that all vertices of G have degree at least s and let p be the leftmost point in P. The degree assumption implies that p belongs
to at least s intervals, which we denote i_1, i_2, …, i_s. For the same reason, each of these intervals should contain the s-1 points in P closest to p, which we denote p_1, p_2, …, p_s-1.
But then the vertices corresponding to i_1, i_2, …, i_s and p, p_1, p_2, …, p_s-1 induce
the forbidden K_s,s.
§.§ On the plane
In dimensions 2 and 3, the bounds for K_s,s-free graphs from <cit.> are O(sn), and the constants in the big-O are not specified.
Our bounds in dimension 2 and 3 are respectively 3(s-1)n and 5(s-1)n.
To obtain them we will use the following lemma that reduces the
analysis to the case where the points are in convex position.
Let G ≃ G(P, H) be the incidence graph of a set P of points and a set H of
halfspaces in ^d. If G is K_2,s-free and P is not in convex position, then G
has a vertex of degree at most (d+1)(s-1).
Suppose that P is not in convex position, and let p ∈ P be a non-extremal point of the
convex hull (P).
By Carathéodory's theorem, p belongs to the convex hull of at most d+1
extremal points of (P). Let p_1,p_2, …, p_k, k ≤ d+1 be a minimal set of such extremal points.
Since p belongs to the interior of ({p_1, …, p_k}), any halfspace containing p contains one of the points
p_1, …, p_k. Thus, if p belongs to at least k(s-1)+1 halfspaces, one of the points p_1, …, p_k belongs to at least s of them resulting in the forbidden K_2,s.
Hence, the degree of p is at most k(s-1) ≤ (d+1)(s-1).
The polytope graph of a polytope is the incidence graph
of the extremal points and 1-dimensional faces of the polytope. We will need the following
well-known fact.
Let P and H be respectively a polytope and a halfspace in ^d.
The subgraph of the polytope graph of P induced by the extremal points of P that belong to H is connected.
Let G be a K_2,s-free n-vertex point-halfplane incidence graph.
Then G has at most 3(s-1)n edges.
Let P be a set of points on the plane and H be a set of halfplanes such that
G ≃ G(P, H). We assume without loss of generality that |P| ≥ 3.
To prove the lemma we will show that G has a vertex of degree at most 3(s-1).
If P is not in convex position, such a vertex exists by <ref>, so we can assume that all points in P are extremal points of (P).
Suppose that all vertices of G have degree at least 3(s-1)+1 and let p be an arbitrary
point in P.
The polytope graph of P is a cycle, and hence p has exactly 2 neighbours in this graph. <ref> implies that each of the halfplanes that contain p and some other vertices in P should also contain at least one of these 2 neighbours.
Thus, since p belongs to 3(s-1)+1 ≥ 2(s-1)+1 halfplanes in H, at least
s of them contain one other fixed point in P, which witnesses a forbidden K_2,s.
§.§ In ^3
Let G be a K_2,s-free n-vertex point-halfspace incidence graph in ^3.
Then G has at most 5(s-1)n edges.
Let P and H be respectively a set of points and a set of halfspaces in ℝ^3 such that
G ≃ G(P, H). As before, to prove the lemma we will show that G has a vertex of degree at most 5(s-1). Towards a contradiction, suppose that all vertices in G have at least 5(s-1)+1 neighbours. This assumption and <ref> imply that P is in convex position, and hence all points in P are extremal points of the convex hull of P.
Let F be the polytope graph of P. By Steinitz's theorem (see e.g. <cit.>), F is planar, and therefore has a vertex of degree at most 5. Let p ∈ P be such a vertex.
It follows from <ref> that any halfspace in H that contains p also contains
at least one of the neighbours of p in F. Thus, by the pigeonhole principle, at least s halfspaces among those in H that contain p contain also one fixed neighbour of p, which
witnesses the forbidden K_2,s. This contradiction completes the proof.
|
http://arxiv.org/abs/2307.05162v1 | 20230711103858 | SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization | [
"Kunal Suri",
"Prakhar Mishra",
"Saumajit Saha",
"Atul Singh"
] | cs.CL | [
"cs.CL",
"cs.AI",
"cs.LG"
] |
2023
Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0
International (CC BY 4.0).
CLEF 2023: Conference and Labs of the Evaluation Forum, September 18–21, 2023, Thessaloniki, Greece
Kunal Suri, Prakhar Mishra, Saumajit Saha, Atul Singh
Optum, India
[email protected]
[1] Corresponding author.
[1] These authors contributed equally.
Finetuning Large Language Models helps improve the results for domain-specific use cases. End-to-end finetuning of large language models is time and resource intensive and has high storage requirements to store the finetuned version of the large language model. Parameter Efficient Fine Tuning (PEFT) methods address the time and resource challenges by keeping the large language model as a fixed base and add additional layers, which the PEFT methods finetune. This paper demonstrates the evaluation results for one such PEFT method Low Rank Adaptation (LoRA), for Clinical Dialogue Summarization. The evaluation results show that LoRA works at par with end-to-end finetuning for a large language model. The paper presents the evaluations done for solving both the Subtask A and B from ImageCLEFmedical [https://www.imageclef.org/2023/medical]
Dialogue Summarization Parameter Efficient Fine Tuning Clinical Dialogue Summarization
SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization
Atul Singh
August 12, 2023
==================================================================================
§ INTRODUCTION
It is important to record conversations between medical personnel and patients for compliance, training, and evaluation purposes. To that end, summaries of such conversations serve as valuable tools for medical personnel and patients to refer back to and comprehend their prior interactions. Therefore, a concise summary must be produced to facilitate the next medical consultation and provide a source for future reference. Currently, such summaries are created manually; this summarization process is costly and labour-intensive. AI-based summarization techniques can help here by reducing the time and cost associated with manual summarization and facilitating the generation of more accurate representations of doctor-patient conversations by human scribes in less time.
Sequence-to-Sequence (Seq2Seq) architectures <cit.> have been at the forefront of creating summaries, and Transformers <cit.> further improved the performance of this architecture. Over time, the performance of these models has improved significantly [https://paperswithcode.com/sota/text-summarization-on-pubmed-1], but this comes at the cost of increased model size, which makes it very difficult to fit such models on consumer grade hardware such as K80 or T4. Recently, several techniques such as LoRA <cit.>, Prefix Tuning <cit.>, P-Tuning <cit.> and Prompt Tuning <cit.> have been introduced, which are collectively referred to as Parameter Efficient Fine Tuning (PEFT) techniques. These techniques are used for efficiently adapting pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters. PEFT methods train only a small number of (extra) model parameters, significantly decreasing computational and storage costs, since fine-tuning large-scale PLMs end-to-end is prohibitively costly. For this paper, we use the PEFT implementation from Huggingface [https://huggingface.co/docs/peft/index].
This paper presents the experimental results of our explorations with LoRA on Clinical Dialogs to accomplish both Subtask A and B of <cit.> Shared Tasks from <cit.>. The solution of SubTask B presented in this paper was ranked first among all the submissions for SubTask B. The paper uses LoRA based models for both assigning conversations to a pre-defined set of clinical notes sections and summarization of conversations. Through this work, the paper also compares the performance of fine-tuned Transformer based models with LoRA based models for classification and summarization tasks. In addition to this comparison, we also evaluate impact of ensembling outputs from multiple Seq2Seq models using <cit.>. Our simulations show that LoRA works as well as finetuning of Transformer-based models. This is very important because it shows that we can get the equivalent performance as we get after fine tuning Transformer models while using only a fraction of parameters which means that such models could be fine tuned on consumer grade hardware such as K80 and T4.
This paper is organized as follows. Section <ref> presents a brief overview of SubTask A and B - including available labeled data and evaluation metrics. Then the paper describes current state-of-the-art for dialog classification and summarization in Section <ref> that this paper builds upon. This is followed by the description of the approach used to solve SubTask A in Section <ref> and SubTask B in Section <ref>. Then the results of our solutions for both of these subtasks are presented. Finally, the paper ends with a conclusion of the work. The paper includes an appendix containing exploratory data analysis and material that will help to better understand the solution presented in the paper.
§ RELATED WORK
Finetuning large language models enables better performance for domain-specific use cases. In-context finetuning performs well in few-shot scenarios, allowing end users to provide examples with the prompt so that the LLM can adapt to the use case at hand. This approach does not scale, however, since only a limited number of examples can be sent with each prompt. End-to-end finetuning of LLMs is resource and time intensive and has the additional drawback of storing and managing multiple copies of large-size models.
Parameter Efficient Fine Tuning (PEFT) Methods attempt to solve the problems mentioned above by finetuning a smaller number of existing or newly introduced parameters of the large language model while keeping the rest of the parameters frozen. In <cit.>, Lilian et al. divide PEFT methods into the following four categories: additive, selective, reparameterization-based, and hybrid methods. Additive methods such as adapters <cit.> introduce and train only a new set of parameters or layers. Selective methods finetune only a few top layers of the network. Reparametrization-based methods use a low-dimensional representation of the network to reduce the number of parameters to be trained during finetuning. This paper evaluates Low-Rank Adaptation (LoRA) a prominent example of this category of methods.
Parameter Efficient Fine Tuning (PEFT) methods reduce the need to host a large-sized model for each use case. They enable users to use a frozen base model with a small layer of model weights that vary with the use case. In <cit.>, the authors compare the performance of four different PEFT techniques for scenarios where low, medium and high counts of samples are available for fine-tuning. The evaluation results show that LoRA gives near-best performance when low to medium data samples are available for summarization tasks. In another similar related study in <cit.>, the evaluations demonstrate that the best summarization for radiology reports is achieved using a model pre-trained on the clinical text and then fine-tuned using LoRA. In this paper, the authors have used LoRA and ensembling for summarization.
§ TASK DESCRIPTION
This Section provides a high-level overview of the MEDIQA-Sum 2023 Task (including both SubTask A and B) from ImageCLEFmed MEDIQA<cit.>. The Section starts with a description of different SubTask goals followed by basic counts of available labeled data. The metric used to evaluate this task is arithmetic mean of ROUGE-1 <cit.>, Bertscore F1 <cit.>, and BLEURT <cit.>.
§.§ Task Definition
Given a short conversation between a Doctor and a patient or another Doctor (Dialogue), the goal of SubTask A is to create a system that automatically predicts the Section to which the conversation belongs, denoted by the Section Header. There are twenty Section Headers in this dataset; some examples are FAM/SOCHX, GENHX, PASTMEDICALHX and CC. All of these Section Headers and their descriptions (Section Description) can be found in Table <ref>. The goal of SubTask B is to create a system that generates a summary which matches the human generated summary (Section Text) as closely as possible while optimizing the evaluation metric.
§.§ Labeled Data
In this paper we have used the labeled data provided by MEDIQA-Sum 2023 organizers for training the models. A sample data point from the labeled data set for SubTask A and B can be found in Table <ref>. The official data consists of a training and validation split. For SubTask A and B, training data contains 1201 and validation data contains 180 <dialogue, section-text, section-header> triplets.
§ SUBTASK A METHODOLOGY
Given a short conversation between a doctor and a patient, the goal of SubTask A is to predict its Section Header. This Section starts with a description of the approach used to predict the Section Header.
We have achieved success using Bio-ClinicalBERT <cit.> for classification in the healthcare domain. Hence, we choose it as the backbone and initialize a LoRA layer on top of it. We use this architecture to classify a Dialogue into a Section Header in SubTask A. We limit the input to 300 tokens because the majority of dialogues fall within this length, as shown in Figure <ref>. We use a 3-fold cross validation approach for modeling so that we capture all the information in the data. For every fold, we split its test part into validation and test splits: the validation split is used to select the best model via early stopping and the test split to measure its performance. The hyper-parameters used for training and the performance for all folds can be found in Table <ref>. During inference, we pass a given Dialogue through all three models, average the logits for all the classes, and output the class with the highest averaged logit.
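A minimal sketch of how this classifier can be set up with the Hugging Face transformers and peft libraries is shown below (ours, for illustration). The checkpoint name is the publicly released Bio-ClinicalBERT model, and the LoRA hyper-parameters (rank, alpha, dropout) are illustrative placeholders rather than the values used in our runs, which are listed in Table <ref>.

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
base_model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=20)              # 20 Section Headers

lora_config = LoraConfig(task_type=TaskType.SEQ_CLS,
                         r=8, lora_alpha=16, lora_dropout=0.1)   # placeholder values
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()          # only LoRA (and classifier head) weights train

inputs = tokenizer("Doctor: How are you feeling today? ...",
                   truncation=True, max_length=300, return_tensors="pt")
logits = model(**inputs).logits             # shape (1, 20), one score per Section Header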
§ SUBTASK B METHODOLOGY
Given a short conversation between a doctor and a patient, the goal of SubTask B is to summarize it while ensuring that the generated summary is as fluent and as close to Section Text as possible. This Section starts with a description of the methodology used to summarize the conversation. For Dialogue Summarization, we have trained a LoRA layer on top of Seq2Seq models. This Section also describes the processed labeled data used for training these models, followed by the actual training steps. Then this Section looks at the steps used to generate the summary from the decoder. Finally, we discuss the approach used for ensembling the outputs of these models.
We train LoRA based Seq2Seq models using labeled data (Dialogue + Section Header, Section Text) as (Input, Output) pair. Section Text is a part of the labeled data and is a human subject matter expert-created summary of Dialogue. As a preprocessing step, we replace all new line characters with whitespaces. The Dialogue is concatenated with the section description of its Section Header by the SEP token of the Seq2Seq architecture. During training and inference, we use the actual section description for the actual Section Header. No changes are made to Section Text.
We use a 3-fold cross validation scheme as described in <ref> and train LoRA on two Seq2Seq architectures - BioBart-V2-Large <cit.> and Flan-T5-Large <cit.>. Here we need to select the number of input tokens for encoder and decoder. For encoder, we have selected token length of 512 tokens and for decoder, we have selected token length of 400 tokens. All the hyper-parameters used to train each of the above architecture can be found in Table <ref>. To select the best model, we use early-stopping <cit.> based on Validation Negative Log Loss. Results on the test part of each of these models can be found in Table <ref>. The distribution of tokens for Dialogue and Section Text can be found in Figure <ref> and Figure <ref> respectively.
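A corresponding sketch of the seq-to-seq setup (ours, assuming recent versions of the Hugging Face transformers and peft libraries; checkpoint names are indicative). The input follows the format described above, namely the Dialogue joined to the Section Description by the tokenizer's SEP token, and the LoRA hyper-parameters are placeholders for the values in Table <ref>.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

MODEL_NAME = "GanjinZero/biobart-v2-large"          # or "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
base_model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

lora_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM,
                         r=8, lora_alpha=32, lora_dropout=0.1)   # placeholder values
model = get_peft_model(base_model, lora_config)

dialogue = "Doctor: What brings you in today? Patient: ..."
section_description = "History of the present illness"
sep = tokenizer.sep_token or " | "   # T5 tokenizers have no SEP token; fall back to a marker
source = dialogue + sep + section_description

batch = tokenizer(source, truncation=True, max_length=512, return_tensors="pt")
labels = tokenizer(text_target="The patient reports ...",
                   truncation=True, max_length=400, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss    # optimise this with AdamW over the folds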
To generate summaries that match the human generated summaries, we need a way to control the text generated by the decoder component of a Seq2Seq model. This can be done by using decoding strategies such as Beam Search <cit.>, Top-k Sampling <cit.>, Top-p Sampling <cit.>, Contrastive Search <cit.> etc. In this module, we use Beam Search with the TPESampler algorithm from Optuna[<https://optuna.readthedocs.io/en/stable/reference/samplers/generated/optuna.samplers.TPESampler.html>] to search for the decoding hyper-parameters that maximize ROUGE-1, ROUGE-2, and BertScore, rather than tweaking these hyper-parameters manually. We use TPESampler here because it supports multivariate optimization and handles Float, Integer, and Categorical values better than other algorithms present in Optuna[<https://optuna.readthedocs.io/en/stable/reference/samplers/index.html>]. We use Optuna because it makes hyper-parameter optimization algorithms easy to implement. We did not use BLEURT during the search because it is extremely time consuming. For this module, we use four hyper-parameters for Beam Search: Early Stopping, Number of Beams, No Repeat N-gram Size, and Length Penalty. The search space of each of these variables can be found in Table <ref>.
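The search itself can be expressed as a short Optuna study. In the sketch below (ours), generate_summary and mean_rouge1_rouge2_bertscore are assumed helper functions (not defined here) that decode the validation dialogues with the given settings and average ROUGE-1, ROUGE-2 and BertScore-F1 against the references; the search ranges are illustrative, with the actual ranges in Table <ref>.

import optuna

def objective(trial):
    gen_kwargs = dict(
        early_stopping=trial.suggest_categorical("early_stopping", [True, False]),
        num_beams=trial.suggest_int("num_beams", 2, 8),
        no_repeat_ngram_size=trial.suggest_int("no_repeat_ngram_size", 0, 5),
        length_penalty=trial.suggest_float("length_penalty", 0.5, 2.0),
    )
    # generate_summary and mean_rouge1_rouge2_bertscore are assumed helpers;
    # model, tokenizer and the validation lists come from the training step above.
    summaries = [generate_summary(model, tokenizer, d, **gen_kwargs)
                 for d in validation_dialogues]
    return mean_rouge1_rouge2_bertscore(summaries, validation_references)

study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(multivariate=True, seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)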
The results from the different models are ensembled using Generating Best Summary by semantic similarity - a post-ensemble method <cit.> to identify the summary which is closest to all the generated summaries. The paper uses this output summary as the final summary for the given Dialogue.
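A sketch of this post-ensemble step (ours): the candidate whose average cosine similarity to all other candidates is highest is returned. The sentence-transformers encoder named below is our illustrative choice; any encoder producing comparable sentence embeddings would do.

import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative encoder choice

def best_summary(candidates):
    # Return the candidate closest, on average, to all generated summaries.
    emb = encoder.encode(candidates)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T                                # pairwise cosine similarities
    avg_sim = (sim.sum(axis=1) - 1.0) / (len(candidates) - 1)
    return candidates[int(np.argmax(avg_sim))]

summaries = ["The patient denies chest pain.",
             "Patient denies any chest pain or shortness of breath.",
             "The patient reports no chest pain."]
print(best_summary(summaries))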
§ SUBTASK A RESULTS AND ANALYSIS
This Section presents the results for SubTask A using the approach described in Section <ref>.
We made a single submission for predicting the Section Header, which achieved a multi-class accuracy of 73.5% on the test set given by the organizers, obtaining a rank of 8 among 23 submissions. In this submission, we pass each Dialogue through all three LoRA-based Bio-ClinicalBERT models, average the logits for all the classes, and output the class with the highest averaged logit. Our team's standing can be found in Table <ref>; standings of all the teams are calculated using multi-class accuracy. We also compared the performance of Bio-ClinicalBERT when it is fine-tuned end-to-end and when it is used as a backbone for LoRA: Bio-ClinicalBERT with LoRA scores 73.3% on the validation data, whereas end-to-end fine-tuned Bio-ClinicalBERT scores 72% on the same validation data.
§ SUBTASK B RESULTS
This Section presents the results for SubTask B using the approach described in Section <ref>.
We have made three submissions (referred to as runs in the result tables) for generating summaries from Dialogues. In run 1 and run 2, we train LoRA on BioBart-V2-Large and Flan-T5-Large respectively, while run 3 presents the results of ensembling summaries from both of these models. The details for each run are as follows:
* Run 1 - We generate summary from BioBart-V2-Large model trained on each fold and ensemble output of all the models using <ref>
* Run 2 - We generate summary from Flan-T5-Large model trained on each fold and ensemble output of all the models using Generating Best Summary by semantic similarity.
* Run 3 - We generate summary from BioBart-V2-Large and Flan-T5-Large model trained on each fold and ensemble output of all the models using Generating Best Summary by semantic similarity.
The table containing our team's standing can be found in Table <ref>. Standings of all the teams have been calculated by calculating arithmetic mean of Rouge-1, Bertscore, BLEURT for the Dialogue summary.
The experiments show that Run 3 performs the best, scoring rank 1 out of 13 submissions. This is intuitive, since it ensembles summaries from three BioBART-V2-Large models and three Flan-T5-Large models. Run 2 ranked 5th and Run 1 ranked 6th. This is an interesting observation: Flan-T5-Large is an enhanced version of T5 finetuned on a mixture of tasks, whereas BioBART-V2-Large has been trained solely on a medical corpus, so one might expect Run 1 to score better than Run 2. It appears that larger, broadly finetuned models work better here than domain-specific models, although this hypothesis needs further validation.
§.§ Analysis of different Transformer Architectures on SubTask B
We compare performance of BioBart-V2-Large and Flan-T5-Large when they are fine-tuned end-to-end and they are treated as backbone for LoRA. We observe that the models trained with LoRA perform better than the models which were fine-tuned end-to-end. The performance was evaluated by calculating arithmetic mean of ROUGE-1, ROUGE-2, and BertScore-F1. We do not use BLEURT here as it is extremely time consuming and based on our observations, ROUGE-2 and BLEURT have a very strong correlation. The average score across all folds for each architecture can be found in the Table <ref>.
§ CONCLUSION
The paper presents the solution and the results for SubTask A and B of the ImageCLEFmed MEDIQA-Sum task. The solution uses LoRA to finetune Transformer-based models to classify and summarise clinical dialogues, and our results show that the performance of Transformer-based models finetuned using LoRA is equivalent to that of models finetuned with resource- and time-intensive end-to-end finetuning. The success of LoRA-finetuned Transformer models implies that organizations can more easily finetune and deploy domain-specific models.
The authors observe that metrics such as ROUGE are ineffective for evaluating the performance of models like OpenAI GPT3 as they focus on syntactic similarity. Metrics such as Bertscore and BLEURT seem more suitable for such models since they focus on semantic similarity. Finally, the paper also evaluates two different ensemble techniques, and the results demonstrate that the Post Ensemble technique performs the best while giving minimum hallucinations.
§ APPENDIX
§.§ Data Exploration and Explanation
This section discusses data exploration and explanation so that audience can understand why we made the decisions that we made.
A sample data point from dataset for SubTask A and B can be seen in Table <ref>.
The description of each of the Section Headers present in the data can be found in Table <ref>
The Class distribution of Section Headers for SubTask A is give by Figure <ref>
The Dialogue Token Distribution for SubTask A and B is give by Figure <ref>
The Clinical Note Token Distribution for SubTask B is give by Figure <ref>
The hyper-parameters and performance metrics for Predicting Section Header i.e SubTask A can be found in the Table <ref>.
The hyperparameters used to fine tune Seq2Seq Models and LoRA i.e. SubTask B can be found in Table <ref>. Each of these models were trained on 150 epochs, Gradient Accumulation of 16, Learning rate of 1e-3, AdamW optimizer, and Linear Learning Scheduler.
The performance of different Seq2Seq Models using LoRA and Fine-tuning can be found in Table <ref>
§.§ Standing of our team
Our standings (in bold) for SubTask A - Section Header Classification is in Table <ref>. We omitted several teams from these standings and represent them by Ellipsis (...). This is done only to conserve space.
Our standings (in bold) for SubTask B - Summarization is in Table <ref>
|
http://arxiv.org/abs/2307.03924v1 | 20230708074639 | Real-Time Simulation of Open Quantum Spin Chains with Inchworm Method | [
"Geshuo Wang",
"Zhenning Cai"
] | quant-ph | [
"quant-ph",
"cond-mat.stat-mech"
] |
Real-Time Simulation of Open Quantum Spin Chains with Inchworm Method
Geshuo Wang, Zhenning Cai
========================================================================
We study the real-time simulation of open quantum systems,
where the system is modeled by a spin chain,
with each spin associated with its own harmonic bath.
Our method couples the inchworm method for the spin-boson model and the modular path integral methodology for spin systems.
In particular, the introduction of the inchworm method can significantly suppress the numerical sign problem.
Both methods are tweaked to make them work seamlessly with each other.
We represent our approach in the language of diagrammatic methods,
and analyze the asymptotic behavior of the computational cost.
Extensive numerical experiments are done to validate our method.
§ INTRODUCTION
An open quantum system refers to a quantum-mechanical system coupled to an environment.
The coupling can significantly affect the quantum dynamics,
resulting in effects such as quantum dissipation and quantum decoherence.
It can also lead to non-Markovian evolution of the quantum system,
posing significant challenges in the numerical simulation.
Nevertheless, the study of open quantum systems is becoming increasingly important and has practical applications in many fields <cit.>, as real-world systems are never completely isolated.
In the simulation of open quantum systems, a simple harmonic bath is generally assumed so that the effect of the bath on the system can be analytically given by the bath influence functional <cit.>,
allowing the path integral approach <cit.> to be used to formulate the system dynamics.
One classical method based on path integrals is the quasi-adiabatic propagator path integral (QuAPI) <cit.>.
Other methods have been developed based on QuAPI to improve simulation efficiency by reducing computational complexity or enhancing computational accuracy, including the iterative QuAPI method <cit.>,
the blip decomposition of the path integral <cit.>
and differential equation-based path integral method (DEBPI)
<cit.>.
Due to the non-Markovian nature of the dynamics,
the path-integral-based methods often suffer from increasing memory costs for longer simulation time.
The small matrix decomposition of the path integral (SMatPI) <cit.>, however, has successfully overcome the problem by summarizing the contribution of the paths into small matrices representing the kernel of the quantum master equation.
An alternative approach to dealing with the high memory cost in simulating quantum systems is to use the quantum Monte Carlo method to evaluate the high-dimensional integrals in the Dyson series <cit.>.
However, the Monte Carlo method introduces stochastic errors and can lead to the so-called “sign problem” for highly oscillatory integrands <cit.>.
To relieve the sign problem, the inchworm Monte Carlo method was developed in <cit.>,
which takes the idea of bold diagrammatic Monte Carlo method introduced in <cit.>.
The idea is to compute quantum propagators for shorter time intervals,
and then combine them into the propagators of longer time intervals.
The extension of the propagators can also be formulated into an integro-differential equation <cit.>,
so that classical numerical methods can be applied.
The inchworm Monte Carlo method has been proven to be successful in reducing the severity of sign problem <cit.>.
Some efficient numerical methods for solving the integro-differential equation has been discussed in <cit.>.
The methods discussed above are mainly focused on simple systems such as a single spin or other systems with a small number of possible states,
since the dimension of the Hilbert space for a system grows exponentially with the number of particles.
As a result, simulating more complex systems requires new approaches.
One such approach is the method of modular path integral (MPI) <cit.>,
which leads to linear scaling with the number of particles.
Other methods apply the tensor-train decomposition, which utilizes low-rank approximations to keep the computational and memory costs low for large systems <cit.>.
In these methods, a typical system under consideration is the Ising chain model, a one-dimensional chain of interacting spins <cit.>.
The Ising model has wide application in magnetism <cit.>, neuroscience <cit.> and many other fields.
The dynamics of closed Ising chains is well-studied in the literature <cit.>.
Recently, there has been more research focusing on the dissipative Ising chain <cit.>.
This paper focuses on the evolution of an Ising chain coupled with harmonic baths,
which are characterized by the Ohmic spectral density <cit.>.
The Ising model used in this study is introduced in <Ref>.
In <Ref>,
we propose a diagrammatic representation of the model based on the special structure of the Ising chain.
The computation of the diagrams is introduced in detail in <Ref> and <Ref>.
<Ref> mainly discusses the computation of diagrams for each single spin,
and <Ref> contains the algorithm for merging the diagrams.
The estimation of the computational cost is given in <Ref>, and numerical experiments are given in <Ref>.
Finally, in <Ref>, we provide some concluding remarks and introduce possible future works inspired by our results.
§ ISING CHAIN WITH SPIN-BATH COUPLING
This section provides a brief introduction to the model studied in this paper,
which is an Ising chain coupled with baths consisting of harmonic oscillators.
In this model, the baths for different spins are not directly coupled.
An isolated Ising chain is a chain of spins in which each spin couples with its nearest neighbors <cit.>.
The Hamiltonian for an Ising chain with K spins is generally given by
H_Ising = ∑_k=1^K H_s^(k) + ∑_k=1^K-1 U^(k) ⊗ V^(k+1),
where
H_s^(k) = ϵ^(k)σ_z^(k) + Δ^(k)σ_x^(k)
with σ_x^(k),σ_z^(k) being Pauli matrices
for the kth spin in the chain.
The parameter ϵ^(k) describes the energy difference between two spin states
and Δ^(k) is the frequency of the spin flipping.
The term U^(k)⊗ V^(k+1) describes the nearest-neighbor coupling between the kth and (k+1)th spins.
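As a concrete illustration, the following minimal Python sketch (not part of the original work; the parameter values are placeholders, and the choice U^(k) = V^(k) = σ_z^(k) is only an example of the convention adopted later) assembles H_Ising for a short chain.

```python
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
identity = np.eye(2, dtype=complex)

def embed(op, k, K):
    """Embed a single-spin operator acting on spin k (0-based) into the K-spin space."""
    factors = [identity] * K
    factors[k] = op
    out = factors[0]
    for f in factors[1:]:
        out = np.kron(out, f)
    return out

def ising_hamiltonian(eps, delta, J, K):
    """H_Ising = sum_k (eps_k sigma_z^(k) + delta_k sigma_x^(k)) + sum_k J_k V^(k) V^(k+1)."""
    H = np.zeros((2**K, 2**K), dtype=complex)
    for k in range(K):
        H += eps[k] * embed(sigma_z, k, K) + delta[k] * embed(sigma_x, k, K)
    for k in range(K - 1):
        # nearest-neighbor coupling U^(k) ⊗ V^(k+1), here with U = V = sigma_z scaled by J[k] (example choice)
        H += J[k] * embed(sigma_z, k, K) @ embed(sigma_z, k + 1, K)
    return H

H = ising_hamiltonian(eps=[1.0] * 4, delta=[1.0] * 4, J=[0.5] * 3, K=4)
```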
In this paper, a more complicated case is studied
where each spin in the Ising chain is coupled with a harmonic bath.
The total Hamiltonian for the whole system-bath is then given by
H = H_Ising + ∑_k=1^K H_b^(k) + ∑_k=1^K W_s^(k)⊗ W_b^(k)
where
H_b^(k) = ∑_j (1/2) [(p̂_j^(k))^2 + (ω_j^(k))^2 (q̂_j^(k))^2],
W_s^(k) = σ_z^(k),
W_b^(k) = ∑_j c_j^(k)q̂_j^(k).
In this expression, p̂_j^(k) and q̂_j^(k) are the momentum operator and the position operator of the jth harmonic oscillator in the bath of the kth spin, respectively.
ω_j^(k) is the frequency of the jth harmonic oscillator in the bath of the kth spin and c_j^(k) is the coupling intensity between the kth spin and the jth oscillator in its bath.
<Ref> illustrates the overall Hamiltonian and the coupling relations in this model more intuitively
for an Ising chain with 4 spins.
Similar to the assumption in <cit.>,
the baths of different spins in this paper are not directly coupled with each other.
Also similar to <cit.>, we simply take U^(k) = V^(k) so that our method can be better illustrated by diagrams in the following sections.
The method discussed in this paper is also applicable to the more general case U^(k) ≠ V^(k).
As for the initial condition, the spins and the baths are assumed to be decoupled.
More specifically,
the kth spin is assumed to be in the state |ς^(k)⟩
and the baths are at their thermal equilibriums.
The initial density matrix for the whole system is then given by
ρ(0)
= ⊗_k=1^K ρ^(k)(0)
= ⊗_k=1^K ( ρ_s^(k)(0) ⊗ ρ_b^(k)(0) )
= ⊗_k=1^K ( |ς^(k)⟩⟨ς^(k)| ⊗ exp(-β^(k) H_b^(k)) / tr(exp(-β^(k) H_b^(k))) )
where β^(k) is the inverse temperature for the kth bath <cit.>.
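For the spin part of this initial condition, a minimal sketch is given below (illustrative only; the bath factors are infinite-dimensional and enter the method analytically through the influence functional, so they are not constructed here).

```python
import numpy as np

def spin_density_matrix(kets):
    """Tensor product of pure-state projectors |ς^(k)><ς^(k)|, one per spin."""
    rho = None
    for ket in kets:
        v = np.asarray(ket, dtype=complex).reshape(2, 1)
        v = v / np.linalg.norm(v)
        proj = v @ v.conj().T              # |ς><ς| for a single spin
        rho = proj if rho is None else np.kron(rho, proj)
    return rho

# Example: all four spins initially in the "up" state (placeholder choice)
rho_s0 = spin_density_matrix([[1.0, 0.0]] * 4)
assert abs(np.trace(rho_s0) - 1.0) < 1e-12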
§ DIAGRAMMATIC REPRESENTATION OF THE PATH INTEGRAL
In this section, we rewrite the evolution of the spin chain system using path integrals, so that the computation of each spin can be decoupled. Such an approach has been studied in many previous works <cit.>, and here we are going to represent the path integrals using diagrams to facilitate our future discussions.
We first split the total Hamiltonian in <ref> into two parts H = H_0 + V, where
H_0 ≔ ∑_k=1^K H_0^(k) ≔ ∑_k=1^K
(H_s^(k) + H_b^(k) + W_s^(k) ⊗ W_b^(k)),
V ≔ ∑_k=1^K-1 V^(k) ⊗ V^(k+1).
Below, we will assume that the interaction between spins V is a perturbation of the unperturbed Hamiltonian H_0,
and describe the dynamics in the interaction picture.
Given an observable O = O_s ⊗Id_b,
we can define the following propagator
G(-t,t) = e^(-i H_0 t) e^(i H t) O e^(-i H t) e^(i H_0 t),
which can be expanded into the following Dyson series
G(-t,t) = ∑_N=0^∞ ∫_-t⩽s⩽t (∏_n=1^N i sgn(s_n))
𝒯[V_I(s_N) ⋯ V_I(s_1) O_s,I(0)] ds
where
V_I(s_n) ≔ e^(-i H_0 |s_n|) V e^(i H_0 |s_n|),
O_s,I(0) = O_s
and 𝒯 is the time ordering operator
that sorts all the operators in the time descending order.
The integrals in the equation are interpreted as
∫_-t⩽s⩽t (integrand) ds
= ∫_-t^t ∫_-t^s_N … ∫_-t^s_2 (integrand) ds_1
… ds_N-1 ds_N.
Note that the coefficient ∏_n=1^N i sgn(s_n) comes from the coupling operators ±iV, meaning that each V_I(s_n) is attached by i or -i according to the sign of s_n.
With this propagator, the expectation of the observable can be expressed by <cit.>
⟨O_s(t)⟩
= Tr(ρ_I(t) G(-t,t))
with ρ_I(t) = e^(-i H_0 t) ρ(0) e^(i H_0 t).
If the observable has the form O_s = O_s^(1)⊗…⊗ O_s^(K),
we can plug the definition of V in <ref> into the Dyson series <ref>, so that the integrand will show N summation symbols, and each summand can be written in the tensor product form.
Precisely speaking, for the kth spin, the summand has the form:
𝒢^(k)(s')
= (∏_n'=1^N' √(i sgn(s'_n')))
𝒯[
V_I^(k)(s'_N') ⋯ V_I^(k)(s'_1) O_s,I^(k)(0)
],
where s' is a subsequence of s of length N' ⩽ N.
In particular, if s' is an empty sequence, we use the notation 𝒢^(k)(∅) ≔ O_s,I^(k)(0) to denote the above quantity.
Here we have again used the interaction picture:
V_I^(k)(s_n) ≔ e^(-i H_0^(k) |s_n|) V^(k) e^(i H_0^(k) |s_n|),
O_s,I^(k)(0) = O_s^(k).
In <ref>,
the subsequence s' depends on the number of operators V^(k) appearing in the summand,
and the reason for the square root is that the term i V^(k)⊗ V^(k+1) or -i V^(k)⊗ V^(k+1),
appearing in the expansion of iV or -iV,
is separated into the terms 𝒢^(k) and 𝒢^(k+1) after decomposition.
In this work, we stick to the choice √(i) = e^(iπ/4) and √(-i) = e^(-iπ/4).
With these propagators, the terms in <ref>
can be represented by the sum of integrals whose integrands are tensor products of 𝒢^(k)(s).
For example, when N=1 and K=4,
we have
∫_-t^t
i sgn(s_1)
𝒯[V_I(s_1) O_s,I(0)]
ds_1
= ∫_-t^t 𝒢^(1)(s_1)
⊗ 𝒢^(2)(s_1)
⊗ 𝒢^(3)(∅)
⊗ 𝒢^(4)(∅) ds_1
+ ∫_-t^t 𝒢^(1)(∅)
⊗ 𝒢^(2)(s_1)
⊗ 𝒢^(3)(s_1)
⊗ 𝒢^(4)(∅) ds_1
+ ∫_-t^t 𝒢^(1)(∅)
⊗ 𝒢^(2)(∅)
⊗ 𝒢^(3)(s_1)
⊗ 𝒢^(4)(s_1) ds_1.
In this equation,
different spins are separated inside the integrals, allowing us to perform computations for each spin independently.
For simplicity, we may express the above equation as a diagrammatic equation:
[Diagrammatic equation: on the left-hand side, a bold line from -t to t with a single red cross at s_1 represents the N = 1 term for the whole chain; on the right-hand side, three diagrams of four gray lines (one per spin) each carry a pair of red crosses at s_1 on two neighboring lines, connected by a dotted link.]
In this diagrammatic equation,
the bold line on the left-hand side represents an operator acting on all spins.
The red cross indicates that only one coupling operator at time s_1 exists in the integral. On the right-hand side,
each gray line represents a single spin.
Since each interaction operator V consists of three terms, each acting on two neighboring spins,
we have three diagrams on the right-hand side,
and each diagram includes two red crosses connected by a dotted line,
indicating the two involved spins.
By comparison with (<ref>),
we can find that every diagram on the right-hand side is an integral with respect to s_1,
and the kth line corresponds to the expression 𝒢^(k)(…),
where the ellipses should be filled with the time points of the red crosses. In this case, the ellipses can only be a single point s_1 or an empty set.
Similarly, for the term with two coupling operators (N=2), the expansion is
[Diagrammatic equation: the bold line from -t to t with red crosses at s_1 and s_2 (the N = 2 term) equals the sum of nine diagrams of four gray lines; in each diagram the crosses at s_1 and at s_2 independently connect one of the three neighboring pairs of lines by dotted links.]
Here the left-hand side corresponds to the two-dimensional integral in <ref>.
On the right-hand side,
we have nine diagrams since both interaction operators at s_1 and s_2 have three choices.
For general N and K,
the number of diagrams should be (K-1)^N.
In particular,
for the first term in <ref>
where no interaction exists,
no integral is required and we have
O_s,I(0) = O_s = O_s^(1) ⊗ O_s^(2) ⊗ O_s^(3) ⊗ O_s^(4)
= 𝒢^(1)(∅)
⊗ 𝒢^(2)(∅)
⊗ 𝒢^(3)(∅)
⊗ 𝒢^(4)(∅),
which can be represented by the following diagrammatic equation:
[Diagrammatic equation: a bare bold line from -t to t (the N = 0 term, with no crosses) equals a single diagram of four bare gray lines.]
As a result, the final diagrammatic expansion of
G(-t,t) is
G(-t,t)
= [Diagrammatic expansion: the bold line with zero, one, two, … red crosses is replaced, term by term, by the corresponding diagrams of four gray lines: one bare-line diagram for N = 0, the three single-pair diagrams for N = 1, the nine two-pair diagrams for N = 2, and so on] + …,
where the right-hand side includes all possible connections between neighboring spins.
The advantage of this expansion is two-fold:
* For each diagram, when the time points s_1, ⋯, s_N are fixed, the kth line with crosses is mathematically represented by 𝒢^(k)(s), where s collects the time points of the crosses on that line; this involves only one spin, so that it can be computed relatively easily.
* We can shuffle the diagrams and truncate the series appropriately to obtain efficient algorithms.
The idea for the computation of each line on the right-hand side will be based on an efficient path integral method known as the inchworm method <cit.>,
and our algorithm for the integration over the time points and the summation of the diagrams is inspired by the method of modular path integrals <cit.>.
The following two sections will be devoted to these two steps, respectively.
§ INCHWORM ALGORITHM FOR EACH SPIN
Recall that our purpose is to compute the expectation of the observable in the form of <ref>.
Based on our decomposition
<ref>,
we can first take the trace for each diagram,
and then sum up the results.
Thus, for each diagram, we need to compute Tr(ρ_I^(k)(t) 𝒢^(k)(s)) with ρ_I^(k)(t) = e^(-i H_0^(k) t) ρ^(k)(0) e^(i H_0^(k) t).
In this section,
we will introduce an efficient algorithm to evaluate this single-spin quantity 𝒢^(k)(s) for a given s.
The algorithm is inspired by the inchworm Monte Carlo Method for system-bath coupling <cit.>,
where a single heat bath interacts with the entire system.
Since each spin is associated with its own thermal bath,
we can apply the Dyson series expansion again to separate the spin and the bath.
Since the baths are initially in the thermal equilibrium states,
the trace with respect to the bath part can be calculated explicitly using Wick's theorem <cit.>.
We refer the readers to <cit.> for the detailed calculation,
and here we only present the final result:
Tr(
ρ_I^(k)(t) 𝒢^(k)(s)
)
= Tr_s^(k)[
ρ_s,I^(k)(t)
(∏_n=1^N √(i sgn(s_n)))
∑_M=0^∞ i^M ∫_-t⩽τ⩽t ( ∏_m=1^M sgn(τ_m) )
𝒰_0^(k)(τ,s)
ℒ_b^(k)(τ)
dτ],
where
𝒰_0^(k)(τ,s)
= 𝒯[V_s,I^(k)(s_1) … V_s,I^(k)(s_N) W_s,I^(k)(τ_1) … W_s,I^(k)(τ_M) O_s,I^(k)(0)]
with
V_s,I^(k)(s)
= e^(-i H_s^(k) |s|) V^(k) e^(i H_s^(k) |s|),
W_s,I^(k)(τ)
=
e^(-i H_s^(k) |τ|)
W_s^(k) e^(i H_s^(k) |τ|),
ρ_s,I^(k)(t) =
e^(-i H_s^(k) t) ρ_s^(k)(0) e^(i H_s^(k) t)
and the bath influence functional ℒ_b^(k)(τ) has the form <cit.>
ℒ_b^(k)(τ_1,…,τ_M)
=
0, if M is odd
∑_𝔮∈𝒬_M∏_(j,j')∈𝔮 B^(k)(τ_j,τ_j'),
if M is even.
Here B^(k) is the two-point correlation function to be defined later in our test cases,
and the set 𝒬_M contains all possible pairings of integers {1,2,⋯,M}.
For example,
𝒬_2 = {{(1,2)}},
𝒬_4
= {{(1,2),(3,4)}, {(1,3),(2,4)}, {(1,4),(2,3)}}.
The general definition of 𝒬_M for even M is
𝒬_M = {{(j_1,j_1'),…,(j_M/2,j_M/2')}|⋃_l=1^M/2{j_l,j_l'} = {1,…,M},
j_l < j_l' for l = 1,…,M/2
},
which includes (M-1)!! pairings.
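As a sanity check of this counting, a short Python sketch (illustrative only) enumerates 𝒬_M recursively and verifies that its size equals (M-1)!!.

```python
def pairings(items):
    """All partitions of a list of distinct integers into unordered pairs (j, j') with j < j'."""
    if not items:
        return [[]]
    first, rest = items[0], items[1:]
    result = []
    for idx, partner in enumerate(rest):
        remaining = rest[:idx] + rest[idx + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

# |Q_M| should be (M-1)!! = 1, 3, 15, 105 for M = 2, 4, 6, 8
for M in (2, 4, 6, 8):
    print(M, len(pairings(list(range(1, M + 1)))))
```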
According to <ref>, now our objective is to evaluate the following quantity
𝒢^(k)(-t,s,t)
≔ (∏_n=1^N √(i sgn(s_n)))
∑_M=0^∞ ∫_-t⩽τ⩽t ( ∏_m=1^M i sgn(τ_m) )
𝒰_0^(k)(τ,s)
ℒ_b^(k)(τ)
dτ,
which yields
Tr(ρ_I^(k)(t) 𝒢^(k)(s)) = Tr_s^(k)(ρ_s,I^(k)(t) 𝒢^(k)(-t,s,t)).
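The spin factors entering 𝒰_0^(k) are plain 2×2 matrix conjugations; a minimal sketch of the interaction-picture operators V_s,I^(k) and W_s,I^(k) is shown below (illustrative only, with placeholder parameters and with V^(k) chosen as σ_z purely as an example).

```python
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

eps, delta = 1.0, 1.0                  # placeholder spin parameters
H_s = eps * sigma_z + delta * sigma_x  # single-spin Hamiltonian H_s^(k)
W_s = sigma_z                          # system part of the spin-bath coupling, W_s^(k)
V = sigma_z                            # nearest-neighbor coupling operator (example choice)

def interaction_picture(op, s):
    """Return e^(-i H_s |s|) op e^(+i H_s |s|), cf. the definitions of V_s,I and W_s,I."""
    U = expm(-1j * H_s * abs(s))
    return U @ op @ U.conj().T

W_sI = interaction_picture(W_s, 0.7)   # W_s,I^(k)(tau) at tau = 0.7
V_sI = interaction_picture(V, -0.3)    # V_s,I^(k)(s) at s = -0.3
```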
Recall that we have used a gray line with red crosses to represent 𝒢^(k)(s).
Due to the equivalence given in <ref>,
below we will use the same diagram to represent the quantity 𝒢^(k)(-t,s,t).
For example, given s = (s_1,s_2) with both s_1 and s_2 between -t and t,
<ref> can be represented diagrammatically as
[Diagrammatic equation: the gray line from -t to t with red crosses at s_1 and s_2 equals the sum, over M = 0, 2, 4, …, of thin-line diagrams carrying the same crosses plus M integration points τ_1, …, τ_M that are pairwise connected by arcs; the M = 4 terms include both non-crossing and crossing arc configurations.]
In the diagrammatic equation, the locations of the cross marks, given by s, are fixed.
On the right-hand side, the τ's are integration variables.
Note that the time ordering operator 𝒯 in the definition of 𝒰_0^(k)(τ, s) is required to guarantee that the operators are applied in the correct order.
Each arc represents a two-point correlation function B(τ_j, τ_j') in the bath influence functional ℒ_b.
The equation <ref> is ready for computation.
One can directly apply the Monte Carlo method to the right-hand side to approximate the sum of integrals,
which is known as the bare diagrammatic quantum Monte Carlo method (bare dQMC).
To design a more efficient approach,
we will follow the method in <cit.> to derive an integro-differential equation.
We first generalize the definition of 𝒢^(k)(-t,s,t)
to 𝒢^(k)(s_i, s, s_f) for any s_i < s_f:
𝒢^(k)(s_i,s,s_f)
= ( ∏_n=1^N √(i sgn(s_n)) )
∑_M=0^∞ ∫_s_i⩽τ⩽s_f ( ∏_m=1^M i sgn(τ_m) )
𝒰_0^(k)(s_i,τ,s,s_f) ℒ_b^(k)(τ)
dτ
where s is an increasing sequence of time points, each of which is between s_i and s_f, and
𝒰_0^(k)(s_i,τ,s,s_f)
= 𝒯[V_s,I^(k)(s_1) … V_s,I^(k)(s_N) W_s,I^(k)(τ_1) … W_s,I^(k)(τ_M) O_s^(k)(0)],
if 0 ∈ [s_i,s_f],
𝒯[V_s,I^(k)(s_1) … V_s,I^(k)(s_N) W_s,I^(k)(τ_1) … W_s,I^(k)(τ_M)],
if 0 ∉ [s_i,s_f].
Note that only operators between s_i and s_f are included in the definition.
Therefore, when [s_i, s_f] does not include the origin,
O_s(0) should be excluded.
This definition can also be represented diagrammatically as <ref>,
only with -t replaced by s_i and t replaced by s_f.
It can then be seen that for two intervals satisfying [s_i, s_f] ⊂ [s_i', s_f'],
𝒢^(k)(s_i, s, s_f) can be understood as a portion of 𝒢^(k)(s_i', s', s_f') if s is the subvector of s' with all components between s_i and s_f.
To formulate an integro-differential equation for 𝒢^(k)(s_i, s, s_f),
we extend the gray line from s_i to s_f by a length of ds (see the left-hand side of <ref>).
Then in the expansion of the extended gray line,
all diagrams on the right-hand side of <ref> are included.
Besides, the diagrams that are not included in <ref> are thin lines with arcs ending within the interval [s_f, s_f + ds] (second line in <ref>).
Since ds is infinitesimal,
it suffices to assume that there is only one time point inside [s_f, s_f + ds].
We can further assume that this time point is fixed at s_f,
and then this diagram must be multiplied by ds when being added to the sum (third to fifth lines of <ref>).
For simplicity,
we will name the arc ending at s_f as 𝒜_s_f (thick black arcs in <ref>).
We can now categorize all the diagrams with a point at s_f into classes characterized by the connected component of the arcs including the arc 𝒜_s_f.
Here the “connected component” can be established by beginning with a set including the arc (τ_k, τ_M) only,
and then expanding the set iteratively by including all arcs with intersections with any arc that is already in the set, until the set does not change.
In <ref>,
two categories are labeled by yellow and green backgrounds,
and the connected components are highlighted using thick lines (including both black and white lines).
For all diagrams with the same connected component including 𝒜_s_,
we can sum them up and the result is the connection of a few thick lines with all arcs in this connected component,
which is known as a “bold diagram”.
The derivation is summarized in the following diagrammatic equation:
[Diagrammatic equation: the gray line on [s_i, s_f + ds] with crosses at s_1, …, s_N equals the gray line on [s_i, s_f] plus ds times a sum of thin-line diagrams in which one arc ends at s_f; grouping these diagrams by the connected component of arcs containing that arc turns the line segments between the arc endpoints into bold (gray) propagators. Two representative bold diagrams are highlighted with yellow and green backgrounds: the yellow one contains a single arc from τ_1 to s_f, and the green one contains the two crossing arcs (τ_1, τ_3) and (τ_2, s_f).]
where the notations τ's are omitted in some diagrams without ambiguity.
The mathematical formulae of the bold diagrams can be easily read off.
For example, the bold diagram with the yellow background should be interpreted as
∫_s_i^s_f dτ_1
(i sgn(τ_1)) (i sgn(s_f))
W_s^(k)(s_f)
𝒢^(k)(τ_1, s_1, s_f)
W_s^(k)(τ_1)
𝒢^(k)(s_i, s_0, τ_1)
B^(k)(τ_1, s_f),
where s_0,s_1 are subsequences of s
such that (s_0,τ_1,s_1) is an ascending sequence and s = (s_0,s_1),
and the bold diagram with the green background reads
∫_s_i^s_f dτ_1 dτ_2 dτ_3
(i sgn(τ_1)) (i sgn(τ_2)) (i sgn(τ_3)) (i sgn(s_f))
W_s^(k)(s_f)
𝒢^(k)(τ_3, s_3, s_f)
W_s^(k)(τ_3)
𝒢^(k)(τ_2, s_2, τ_3)
W_s^(k)(τ_2)
𝒢^(k)(τ_1, s_1, τ_2)
W_s^(k)(τ_1)
𝒢^(k)(s_i, s_0, τ_1)
B^(k)(τ_1,τ_3) B^(k)(τ_2,s_f)
where s_0, s_1, s_2, s_3 are subsequences of s such that
(s_0,τ_1,s_1,τ_2,s_2,τ_3,s_3) is an ascending sequence and s = (s_0,s_1,s_2,s_3) .
The explicit expression of the diagrammatic equation (<ref>) is as follows:
𝒢^(k)(s_i, s, s_f + ds) =
𝒢^(k)(s_i, s, s_f)
+
𝒦^(k)(s_i, s, s_f) ds,
where 𝒦^(k)(s_i, s, s_f) is the sum of the bold diagrams inside the parentheses in <ref>:
𝒦^(k)(s_i,s,s_f)
=
[Diagrammatic definition: the sum of all bold diagrams on [s_i, s_f] carrying the crosses s_1, …, s_N and an arc ending at s_f, with all arcs forming a single connected component; the first terms shown are the single-arc diagram (τ_1, s_f) and the two-arc crossing diagram (τ_1, τ_3), (τ_2, s_f), followed by higher-order terms.]
The integro-differential equation for 𝒢^(k)(s_i, s, s_f) can then be derived as
∂𝒢^(k)(s_i,s,s_f)/∂s_f
= 𝒦^(k)(s_i,s,s_f).
For the purpose of easier implementation,
we will also provide the mathematical expression of 𝒦^(k)(s_i, s, s_f).
The general form of 𝒦^(k)(s_i, s, s_f) is
𝒦^(k)(s_i,s,s_f)
= ∑_M=1, M odd^∞ ∫_s_i⩽τ_1⩽…⩽τ_M⩽s_f dτ_1 … dτ_M
( ∏_m=1^M+1 i sgn(τ_m) ) W_s^(k)(s_f)
𝒰^(k)(s_i,τ,s,s_f) ℒ_b^c(k)(τ),
where τ = (τ_1,…,τ_M,τ_M+1) and τ_M+1 = s_f.
The system-associated operator 𝒰^(k) is defined by
𝒰^(k)(s_i,τ,s,s_f)
= 𝒢^(k)(τ_M, s_M, s_f) W_s^(k)(τ_M) 𝒢^(k)(τ_M-1, s_M-1, τ_M) W_s^(k)(τ_M-1) ⋯ W_s^(k)(τ_1) 𝒢^(k)(s_i, s_0, τ_1)
with s_0, ⋯, s_M being subsequences of s such that s = (s_0, s_1, ⋯, s_M)
and the extended sequence
(s_i, s_0, τ_1, s_1, τ_2, ⋯, τ_M, s_M)
is increasing.
This indicates that s_0, ⋯, s_M are subsequences of s separated by τ_1, ⋯, τ_M.
The bath influence functional ℒ_b^c(k) is exactly the same as the bath influence functional in <cit.>:
ℒ_b^c(k)(τ_1,…,τ_M+1)
= ∑_𝔮∈𝒬_M+1^c∏_(j,j')∈𝔮
B(τ_j,τ_j')
where 𝒬_M+1^c is the set of connected diagrams.
For example,
𝒬_2^c = {{(1,2)}},
𝒬_4^c
= {{(1,3),(2,4)}},
𝒬_6^c
= {{(1,3),(2,5),(4,6)},
{(1,4),(2,5),(3,6)},
{(1,4),(2,6),(3,5)},
{(1,5),(2,4),(3,6)}}.
One may refer to <cit.> for more information about the set 𝒬_M+1^c.
In general, the number of pairings in 𝒬_M+1^c is asymptotically e^(-1) M!! when M is a large odd integer <cit.>.
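For small M, the connected pairings can be obtained by filtering all pairings with the crossing-connectivity criterion described in the previous section (arcs belong to the same component if they intersect). The following Python sketch (illustrative only) reproduces the sizes of 𝒬_2^c, 𝒬_4^c and 𝒬_6^c listed above.

```python
def pairings(items):
    """All partitions of a list of distinct integers into unordered pairs (see the earlier sketch)."""
    if not items:
        return [[]]
    first, rest = items[0], items[1:]
    out = []
    for idx, partner in enumerate(rest):
        remaining = rest[:idx] + rest[idx + 1:]
        for sub in pairings(remaining):
            out.append([(first, partner)] + sub)
    return out

def crossing(a, b):
    """Arcs (j, j') and (l, l') cross iff exactly one endpoint of one lies inside the other."""
    (j, jp), (l, lp) = a, b
    return (j < l < jp < lp) or (l < j < lp < jp)

def is_connected(pairing):
    """True if the crossing graph of the arcs has a single connected component."""
    seen, frontier = {0}, [0]
    while frontier:
        i = frontier.pop()
        for j in range(len(pairing)):
            if j not in seen and crossing(pairing[i], pairing[j]):
                seen.add(j)
                frontier.append(j)
    return len(seen) == len(pairing)

# Sizes of Q^c_{M+1} for M+1 = 2, 4, 6: expected 1, 1, 4
for n in (2, 4, 6):
    print(n, sum(1 for p in pairings(list(range(1, n + 1))) if is_connected(p)))
```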
For fixed s_i and s, solving the integro-differential equation <ref> requires an initial condition at s_f = s_N (or s_f = s_i if s is an empty sequence). By definition, it can be immediately seen that
𝒢^(k)(s_i, s_f = s_i) = 𝕀^(k), if s_i ≠ 0,
𝒢^(k)(s_i, s_1, ⋯, s_N, s_f = s_N) = √(i sgn(s_N)) V_s,I^(k)(s_N) 𝒢^(k)(s_i, s_1, ⋯, s_N-1, s_f = s_N), if s_N ≠ 0.
Due to the observable O_s^(k) appearing in the definition of 𝒢^(k),
there is a discontinuity when any of the time points touches zero.
The jump condition needed in the computation is
lim_s_f→0^+ 𝒢^(k)(s_i,s_1,…,s_N,s_f)
= O_s^(k) lim_s_f→0^- 𝒢^(k)(s_i,s_1,…,s_N,s_f).
By these conditions,
all the full propagators 𝒢^(k)(s_i, s, s_f) can be uniquely determined.
To solve the integro-differential equation (<ref>) numerically,
we start with solving all 𝒢^(k)(s_i, s_f), i.e., N = 0,
and then increase the length of s iteratively.
Such an order guarantees that the initial condition <ref> can be applied whenever needed.
When solving 𝒢^(k)(s_i, s, s_f) for fixed s_i and s,
the second-order Heun's method is applied,
and the jump condition <ref> must be applied when s_f crosses zero.
For the series of integrals on the right-hand side of <ref>,
we select an odd positive integer M̅ and truncate the series up to M = M̅ as an approximation.
In our experiments,
the value of M̅ is at most 5,
and therefore the integrals in <ref> are computed numerically using the second-order composite trapezoidal rule.
If larger M̅ needs to be used,
one can use Monte Carlo methods to approximate the integrals, leading to the inchworm Monte Carlo method as introduced in <cit.>.
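To make the time stepping concrete, here is a schematic Python sketch of one Heun (predictor-corrector) step for an abstract equation dG/ds_f = K(s_f, G); in the actual method, eval_K sums the truncated series of bold diagrams, with the time integrals approximated by the composite trapezoidal rule over the stored history of the propagators (only indicated by a placeholder here).

```python
import numpy as np

def heun_step(G, s_f, ds, eval_K):
    """Second-order Heun step for dG/ds_f = K(s_f, G)."""
    k1 = eval_K(s_f, G)               # slope at the current endpoint
    G_pred = G + ds * k1              # Euler predictor
    k2 = eval_K(s_f + ds, G_pred)     # slope at the extended endpoint
    return G + 0.5 * ds * (k1 + k2)   # trapezoidal corrector

def toy_K(s_f, G):
    """Placeholder right-hand side standing in for the bold-diagram sum."""
    return -0.5j * G

G = np.eye(2, dtype=complex)          # initial value, cf. G^(k)(s_i, s_f = s_i) = identity
s_f, ds = 0.1, 0.05
for _ in range(10):
    G = heun_step(G, s_f, ds, toy_K)
    s_f += ds
```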
To save computational cost, we have also utilized the following property of the full propagators: for all T > 0,
𝒢^(k)(s_i+T, s_1+T, …, s_N+T, s_f+T)
= e^(-i H_s T) 𝒢^(k)(s_i, s_1, …, s_N, s_f)
e^(i H_s T),
if s_i > 0;
𝒢^(k)(s_i-T, s_1-T, …, s_N-T, s_f-T)
= e^(-i H_s T) 𝒢^(k)(s_i, s_1, …, s_N, s_f)
e^(i H_s T),
if s_f < 0.
Note that the property holds only when all the time points are on the same side of the origin.
§ RESUMMATION OF THE FULL PROPAGATOR
Using the algorithm introduced in the previous section,
we are able to compute all the gray lines in <ref>.
In this section,
we will propose a fast algorithm to sum up all the diagrams.
Before introducing the algorithm,
we first note that the same gray line for the same spin can sometimes be used multiple times during the summation.
For example,
in the 4-spin case,
when the propagator 𝒢^(4)(-t, s_1, t) is computed for the fourth spin,
it can be applied in the following terms, all of which appear in <ref>:
[Five diagrams from the expansion in <ref>, each consisting of four gray lines; in every diagram the fourth (bottom) line carries exactly one red cross, connected by a dotted link to the third line, while the remaining crosses on the upper three lines differ between the diagrams.]
Instead of applying <ref> directly to compute the summation, we will follow the idea of the modular path integral <cit.> to assemble all the gray lines by adding spins iteratively.
Assume that we want to add up all five diagrams in <ref>.
Notice that the terms related to the last spin are essentially the same in all these diagrams.
Therefore, instead of computing all the diagrams,
a more efficient way is to apply the distributive law to separate the last spin and only add up the terms for the first three spins.
Similarly, when dealing with the sum involving the first three spins,
the first and the second diagrams in <ref> can be combined;
the third and the fifth diagrams in <ref> can also be combined.
In general,
to deal with the sum on the right-hand side of <ref>,
we can first separate all the diagrams into groups according to the number of crosses on the last line.
Then, for each of the groups,
we further separate the diagrams into subgroups according to the crosses on the third line.
For each of the subgroups,
we apply such grouping one more time according to the crosses on the second line.
When performing computations,
we first sum up the terms involving only the first spin in all the smallest groups.
We then multiply the result of each group by the corresponding term related to the second spin,
and repeat a similar procedure for the rest of the spins.
Mathematically, this idea is based on the following iterative representation of the observable:
G^[1](-t,s,t) =
Tr_s^(1)(
ρ_s,I^(1)(t)
𝒢^(1)(-t,s,t) );
G^[k+1](-t,s,t)
= ∑_N'=0^∞ ∫_-t⩽s'⩽t
G^[k](-t,s',t)
Tr_s^(k+1)(
ρ_s,I^(k+1)(t)
𝒢^(k+1)(-t,𝒫(s,s'),t)
) ds',
for k = 1,…,K-2;
G^[K](-t,t) = ∑_N=0^∞ ∫_-t⩽s⩽t
G^[K-1](-t,s,t)
Tr_s^(K)(
ρ_s,I^(K)(t)
𝒢^(K)(-t,s,t) )
ds
where s'=(s'_1,…,s'_N') and
s=(s_1,…,s_N) are two non-descending lists.
In <ref>, 𝒫 is the sorting operator to merge s and s' into a sorted list.
We start from the first spin with <ref>, add the middle spins by <ref> and close the diagram by <ref>.
These equations show that there are many duplicate computations in the procedure above,
which can be avoided.
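The sorting operator 𝒫 used above is just an ordered merge of two time lists; a minimal Python sketch (illustrative only) is:

```python
def merge_sorted(s, s_prime):
    """P(s, s'): merge two non-descending lists of time points into one sorted list."""
    merged, i, j = [], 0, 0
    while i < len(s) and j < len(s_prime):
        if s[i] <= s_prime[j]:
            merged.append(s[i]); i += 1
        else:
            merged.append(s_prime[j]); j += 1
    merged.extend(s[i:])
    merged.extend(s_prime[j:])
    return merged

print(merge_sorted([-0.5, 0.3], [-0.8, 0.1]))  # [-0.8, -0.5, 0.1, 0.3]
```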
The details of the final algorithm will again be illustrated using diagrams below.
The computation of (<ref>) is straightforward.
We start our discussion with the case k = 1 in <ref>,
which becomes
G^[2](-t,s,t)
= ∑_N'=0^∞ ∫_-t⩽s'⩽t
G^[1](-t,s',t)
Tr_s^(2)( ρ_s,I^(2)(t) 𝒢^(2)(-t,𝒫(s,s'),t) )
ds'.
If s has length 1,
the equation can be diagrammatically represented by
G^[2](-t,s_1,t) =
[Diagrammatic equation: G^[2](-t,s_1,t) is drawn as two gray lines capped at -t and t by short vertical black lines, with a red cross at s_1 on the second line attached to an open dashed link; it equals the sum of diagrams carrying N' = 0, 1, 2, 3, … additional dotted pairs of crosses joining the first and second lines.]
On the left-hand side,
the diagram represents the quantity G^[2](-t,s,t), where the two short vertical black lines capping the gray lines indicate that all connections between the first two spins have been taken into account.
The parameter s is shown as the cross on the second spin.
We use an open dashed line to indicate that it will be connected to the third spin in the next step.
The right-hand side of the equation represents the sum and the integral in <ref>.
The four diagrams shown represent the terms for N' = 0, 1, 2, 3, respectively.
Similarly, if the length of s is 2, we have the following diagrammatic equation:
G^[2](-t,s_1,s_2,t) =
[Diagrammatic equation: the same structure with red crosses at s_1 and s_2 on the second line, both attached to open dashed links, summed over N' = 0, 1, 2, 3, … additional dotted pairs joining the first and second lines.]
After computing the values of G^[2](-t,s,t)
for all s,
we can move forward to adding the third spin into the diagram.
An example for N=3 is
G^[3](-t,s_1,s_2,s_3,t) =
[Diagrammatic equation: G^[3](-t,s_1,s_2,s_3,t) is drawn as three gray lines capped at both ends, with crosses at s_1, s_2, s_3 on the third line attached to open dashed links; it equals the sum over diagrams in which the first two lines are already capped (all their mutual connections summed) and additional dotted pairs of crosses join the second and third lines.]
We then repeat this process recursively until the second-to-last spin has been added to the diagram. This completes the computation of <ref>.
To add the last spin, <ref> is applied instead of <ref>.
The only difference is that there are no further spins, so the time sequence s in G^[K](-t,s,t) can only be an empty list,
which will then simply be denoted by G^[K](-t,t).
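Before the diagrammatic illustration of this last step, the order of operations just described can be summarized by a structural sketch in Python. Both helper functions below are hypothetical placeholders standing in for the single-spin inchworm computation and the summation over coupling times; this is only an outline of the recursion, not the implementation used in this work.

```python
# Structural sketch only: the helpers are hypothetical placeholders, not the
# actual implementation of the inchworm computation or the summation step.

def couple_next_spin(block, k, n_bar):
    """Attach spin k to the block assembled so far, keeping at most n_bar
    coupling times open toward spin k + 1 (placeholder body)."""
    return f"G^[{k}] built from ({block}) with up to {n_bar} open times"

def assemble_full_propagator(num_spins, n_bar):
    """Add spins one at a time; for the last spin the list of open coupling
    times is empty, so the result corresponds to G^[K](-t, t)."""
    block = "G^[1]"                                  # single-spin propagators
    for k in range(2, num_spins):                    # spins 2, ..., K-1
        block = couple_next_spin(block, k, n_bar)
    return couple_next_spin(block, num_spins, 0)     # last spin: no open times

print(assemble_full_propagator(num_spins=4, n_bar=2))
```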
Diagrammatically, in the 4-spin case, the last step can be represented by
[Diagrammatic equation omitted: on the left-hand side, all four bold lines are fully bound; on the right-hand side, the first three spins remain bound and the diagrams carry 0, 1, 2, 3, … connections between the third and fourth spins, ending with + ⋯]
Additionally, the quantity on the left-hand side is exactly the observable ⟨O_s⟩ = Tr(ρ_I(t) G(-t,t)).
In practical simulations, it is impossible to consider an infinite number of diagrams.
Instead, a sufficiently large integer N̅ is chosen as the maximum number of interactions between any spin and its neighboring spins.
Diagrammatically, N̅ corresponds to the maximum number of red crosses on each line. Furthermore, as depicted in <ref>, each diagram corresponds to an integral over a simplex,
which is approximated using the composite trapezoidal quadrature rule in our numerical implementation.
Recall that the integro-differential equation is also solved using a second-order method.
The overall convergence rate of our method is second order.
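As a concrete illustration of this quadrature, the following minimal Python sketch applies the composite trapezoidal rule to an integral over a two-dimensional simplex on a uniform grid. It is a generic illustration of the rule only, not the code used for the simulations in this paper.

```python
import numpy as np

def trapz_1d(y, x):
    """Composite trapezoidal rule for samples y at nodes x."""
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    if x.size < 2:
        return 0.0
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def trapz_simplex_2d(f, a, b, n):
    """Approximate the simplex integral
        ∫_a^b ds2 ∫_a^{s2} ds1 f(s1, s2)
    by applying the composite trapezoidal rule in each dimension
    on a uniform grid with n subintervals."""
    s = np.linspace(a, b, n + 1)
    inner = [trapz_1d([f(s1, s2) for s1 in s[:j + 1]], s[:j + 1])
             for j, s2 in enumerate(s)]
    return trapz_1d(inner, s)

# Sanity check: for f ≡ 1 the exact value is the simplex area (b - a)^2 / 2.
print(trapz_simplex_2d(lambda s1, s2: 1.0, 0.0, 1.0, 20))   # -> 0.5
```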
Here we would like to comment on the relation and differences between the modular path integral (MPI) method proposed in <cit.> and our approach.
Both methods compute the Ising chain dynamics iteratively based on the connections between spins.
MPI utilizes QuAPI for the computation of single-spin dynamics, while our method uses the inchworm algorithm.
Another significant difference between the two methods is that MPI considers all possible connections between spins for a given time discretization, whereas our method introduces a cutoff for the spin couplings.
With this cutoff, the number of diagrams is reduced and the computational efficiency is improved.
§ ESTIMATION OF THE COMPUTATIONAL COST
In this section, we estimate the computational cost of our method.
As discussed above, the computation consists of two parts: the computation of all bold lines with red crosses for all the spins (<ref>) and the summation of the full propagators (<ref>).
For simplicity, a uniform time step Δ t is chosen throughout the computation, so that all discrete time points are multiples of Δ t.
Below we estimate the cost of computing G(-t,t) for t = Δ t, 2Δ t, …, LΔ t for a given positive integer L.
§.§ Computational cost for each spin
The integro-differential equation (<ref>) shows that the computation of longer diagrams depends on the knowledge of shorter diagrams.
To compute G(-t,t) for t up to LΔ t, the maximum length of the diagrams is 2LΔ t.
For any l = 1,⋯,2L, we can then assume that all the diagrams of length less than l Δ t are already computed,
and focus on the diagrams of length l Δ t.
For fixed l, the computational costs for all diagrams of length lΔ t are generally the same.
The most costly part is the computation of 𝒢^(k)(s_,s,s_) in <ref>.
Taking the forward Euler method as an example,
we need to evaluate 𝒦^(k)(s_,s,s_ + (l-1)Δ t) to obtain 𝒢^(k)(s_,s,s_ + lΔ t).
According to <ref>,
the computational cost can be estimated by
∑_{M=1, M odd}^{M̅} C_M \binom{M+l}{M},
where the binomial coefficient \binom{M+l}{M} is the number of grid points in the M-dimensional simplex s_ ⩽ s ⩽ s_ + (l-1)Δ t, and C_M is the computational cost of the integrand.
Note that this estimate is based on grid-based numerical quadrature and does not apply to Monte Carlo methods.
For large M,
the computation of the bath influence functional becomes dominant since the number of diagrams increases as 𝒪(M!!),
so that C_M can be estimated by 𝒪((M+2)!!).
In our tests,
M̅ is no more than 5.
Hence, we will regard C_M as a constant for simplicity.
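To make the pattern above concrete, here is a generic sketch of a forward Euler step for a scalar Volterra-type integro-differential equation, in which each step uses the already-computed history on the grid through a trapezoidal memory integral. The equation g'(t) = f(t) + ∫_0^t K(t,s) g(s) ds is a stand-in chosen for illustration; it is not the inchworm equation itself.

```python
import numpy as np

def volterra_forward_euler(f, K, g0, dt, num_steps):
    """Forward Euler for g'(t) = f(t) + ∫_0^t K(t, s) g(s) ds.
    The memory integral over the known history is evaluated with the
    composite trapezoidal rule, mirroring how one step toward a longer
    diagram uses all previously computed shorter diagrams."""
    t = np.arange(num_steps + 1) * dt
    g = np.zeros(num_steps + 1)
    g[0] = g0
    for n in range(num_steps):
        hist = K(t[n], t[:n + 1]) * g[:n + 1]           # integrand on the history grid
        memory = dt * np.sum(0.5 * (hist[1:] + hist[:-1])) if n > 0 else 0.0
        g[n + 1] = g[n] + dt * (f(t[n]) + memory)       # explicit Euler update
    return t, g

# Toy test: g' = 1 + ∫_0^t g(s) ds with g(0) = 0 has exact solution sinh(t).
t, g = volterra_forward_euler(lambda t: 1.0, lambda t, s: np.ones_like(s), 0.0, 0.01, 200)
print(g[-1], np.sinh(t[-1]))   # first-order accurate, close to sinh(2) ≈ 3.63
```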
With the cost of each diagram estimated by <ref>, we now need to count the number of diagrams of length lΔ t.
The estimation starts from the number of different bold lines with total length lΔ t for l=1,…,2L.
When l ⩽ L,
the interval [s_,s_] may or may not contain the origin 0.
By <ref>, if 0∉[s_,s_], we may apply the shift-invariance property to reduce the number of diagrams.
Since each spin has at most N̅ couplings, the total number of different diagrams of length lΔ t with l ⩽ L is
∑_{N=0}^{N̅} (2L+1-l) \binom{N+l}{N} = (2L+1-l) \binom{N̅+l+1}{N̅},
where the factor 2L+1-l is the number of different choices of s_, namely s_ = -LΔ t, (-L+1)Δ t, …, (L-l)Δ t, and the binomial coefficient \binom{N+l}{N} represents the different choices of the N spin interactions on the set {s_, s_+Δ t, …, s_+lΔ t}.
In practice, when 0∉[s_,s_], the translation relation <ref> can be applied to reduce the number of diagrams.
However, this reduction does not change the order of the estimated cost.
Therefore, for the single-spin full propagators of all lengths, the computational cost is estimated by
∑_{l=1}^{2L} (2L+1-l) \binom{N̅+l+1}{N̅} ∑_{M=1, M odd}^{M̅} C_M \binom{M+l}{M}
⩽ ∑_{l=1}^{2L} (2L+1-l) \binom{N̅+l+1}{N̅} C_M̅ (M̅+1)/2 \binom{M̅+l}{M̅}
≲ M̅ C_M̅ L ∑_{l=1}^{2L} l^N̅ l^M̅ ≲ L^{M̅+N̅+2},
where M̅, N̅ are relatively small in practice and are regarded as constants in the above estimation.
For a spin chain with K spins,
the computational cost should be multiplied by K if all spins have different parameters.
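The growth rate of this estimate can be checked with a few lines of Python. The sketch below evaluates the bound with all constants C_M set to 1 (an illustrative choice) and reports the observed exponent when L is doubled; the exponent approaches M̅ + N̅ + 2.

```python
from math import comb, log2

def single_spin_cost(L, M_bar, N_bar):
    """Sum over diagram lengths l of (number of diagrams) x (cost per diagram),
    i.e. sum_{l=1}^{2L} (2L+1-l) * C(N_bar+l+1, N_bar) * sum_{M odd} C(M+l, M),
    with the constants C_M set to 1."""
    total = 0
    for l in range(1, 2 * L + 1):
        per_diagram = sum(comb(M + l, M) for M in range(1, M_bar + 1, 2))
        total += (2 * L + 1 - l) * comb(N_bar + l + 1, N_bar) * per_diagram
    return total

M_bar, N_bar = 3, 2
for L in (25, 50, 100, 200):
    ratio = single_spin_cost(2 * L, M_bar, N_bar) / single_spin_cost(L, M_bar, N_bar)
    print(L, round(log2(ratio), 3))   # observed exponent, tending to M_bar + N_bar + 2 = 7
```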
§.§ Computational cost for the summation
We now estimate the cost of the summation of diagrams described in <ref>.
Note that in this step, we only need the values of 𝒢^(k)(s_,s,s_) with -s_ = s_ = lΔ t, so the total number of diagrams involved is much smaller than in the previous step.
We now consider the computation of G^[k+1](-t,s,t) with s = (s_1, …, s_N) and t = lΔ t according to <ref>.
Recall that we have the values of 𝒢^(k+1)(-t,𝒫(s,s'), t) only for N + N' ⩽ N̅ (see the text about the truncation before <Ref>).
The series (<ref>) should therefore be truncated at N' = N̅ - N in the computation.
As a result, the computational cost of <ref> is
∑_{N'=0}^{N̅-N} \binom{2l+N'}{N'} = \binom{1+2l+N̅-N}{N̅-N},
where the binomial coefficient \binom{2l+N'}{N'} on the left-hand side is the number of grid points in the N'-dimensional simplex.
Since we need to evaluate G^[k+1](-t,s,t) for s on all the grid points of an N-dimensional simplex,
and N ranges from 0 to N̅,
we have the following estimation of the total computational cost:
∑_{N=0}^{N̅} \binom{2l+N}{N} \binom{1+2l+N̅-N}{N̅-N} ≲ ∑_{N=0}^{N̅} l^N l^{N̅-N} ≲ N̅ l^N̅.
Finally,
to compute observables on all time steps l = 1,…,L,
the time complexity is then 𝒪(L^{N̅+1}).
Compared with the cost of solving the inchworm equation, the computational cost of the summation is relatively small.
Hence, the total computational cost remains 𝒪(L^{M̅+N̅+2}), as analyzed in <ref>.
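A quick numerical comparison of the two counting formulas (with all constants dropped) supports this statement: the ratio below shrinks rapidly as L grows. The snippet is only an illustration of the two estimates, not a timing of the actual solver.

```python
from math import comb

def summation_cost(L, N_bar):
    """Estimate for the summation step: for each l = 1..L and N = 0..N_bar,
    (grid points of the N-simplex) x (terms of the truncated series)."""
    return sum(comb(2 * l + N, N) * comb(1 + 2 * l + N_bar - N, N_bar - N)
               for l in range(1, L + 1) for N in range(N_bar + 1))

def inchworm_cost(L, M_bar, N_bar):
    """Same estimate as in the previous subsection, with C_M set to 1."""
    return sum((2 * L + 1 - l) * comb(N_bar + l + 1, N_bar)
               * sum(comb(M + l, M) for M in range(1, M_bar + 1, 2))
               for l in range(1, 2 * L + 1))

for L in (25, 50, 100):
    print(L, summation_cost(L, 2) / inchworm_cost(L, 3, 2))   # ratio decays roughly like 1/L^4
```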
§.§ Numerical verification
In agreement with our analysis,
our numerical experiments (to be presented in detail in <ref>) also show that the computational cost of the summation is nearly negligible compared with that of solving the inchworm equation.
Therefore, to verify our estimation of the computational cost,
we will focus only on the analysis in <ref>.
A convenient way to check the time complexity is to count the number of evaluations of the bath influence functional ℒ_b^(c)(s), which depends only on L, M̅, N̅ and is independent of all other parameters.
Results for M̅ = 1, N̅ = 1 and M̅ = 3, N̅ = 2 with different values of L are plotted in <ref>.
It can be clearly seen that as L increases, the growth trend agrees better with our analysis.
In general, this estimate of the computational cost is the same as for direct path-integral methods such as the summation of the Dyson series.
However, the use of bold lines can significantly accelerate the convergence of the series, so that a much smaller M̅ is needed in the simulation.
The time complexity 𝒪(L^{M̅+N̅+2}) shows that reducing M̅ has a great impact on the computational cost, especially for large values of L.
We would like to comment that in the algorithm, the most time-consuming step is the evaluation of 𝒢^(k)(s_, s, s_).
To reduce the computational time, multithreading is implemented to parallelize the computation.
In general, according to the structure of the inchworm equation (<ref>), the value of 𝒢^(k)(s_, s, s_) for shorter s is needed to obtain the full propagator for longer s.
Therefore, we first compute 𝒢^(k)(s_, ∅, s_) for all s_,s_, and then solve 𝒢^(k)(s_, s_1, s_) for all s_, s_1, s_, followed by the computation of 𝒢^(k)(s_, s_1,s_2, s_) for all s_, s_1, s_2, s_, and so forth until the maximum length of s is reached.
The computations of 𝒢^(k)(s_, ∅, s_) and 𝒢^(k)(s_, s_1, s_) are carried out sequentially.
When the length of s in 𝒢^(k)(s_, s, s_) is greater than or equal to 2, the algorithm is parallelized.
The parallelization is based on the fact that the inchworm equations (<ref>) for 𝒢^(k)(s_, s_1, …, s_N, s_) can actually be decoupled.
Precisely speaking,
the propagator 𝒢^(k)(s_', s_1', …, s_N', s_') can appear on the right-hand side of <ref> for s = (s_1, …, s_N) only when s_k' = s_k for all k = 1,…,N,
and in this case,
we have s_' = τ_m and s_' = τ_{m+1} for a certain m.
If 0 ∉(s_', s_'),
the value of 𝒢^(k)(s_', s_1', …, s_N', s_') (or 𝒢^(k)(τ_m, s_1, …, s_N, τ_{m+1})) is actually obtained from <ref>.
Therefore, 𝒢^(k)(s_, s_1, …, s_N, s_) and 𝒢^(k)(s_', s_1', …, s_N', s_') are coupled only if there exists T such that s_j' = s_j + T for all j = 1,⋯,N.
This allows decoupling of the equations according to the vector (s_2 - s_1, …, s_N - s_{N-1}), and thus the algorithm can be parallelized.
In fact, when 0 ∈ (s_1, s_N),
the equations for 𝒢^(k)(s_, s, s_) are decoupled simply for different values of s, since the translational relation <ref> cannot be applied. Using this structure helps distribute the computational cost more evenly across the threads.
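The grouping that underlies this parallelization can be sketched as follows: time sequences s = (s_1, …, s_N) on the grid are bucketed by their difference vector (s_2 - s_1, …, s_N - s_{N-1}), and each bucket of coupled equations is handed to its own worker. The helper `solve_group` is a placeholder and the thread pool is only illustrative of the multithreading mentioned above, not the paper's actual implementation.

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def difference_vector(s):
    """Key identifying the decoupled group of a time sequence s = (s_1, ..., s_N)."""
    return tuple(b - a for a, b in zip(s, s[1:]))

def group_sequences(grid, N):
    """Group all increasing N-tuples on the grid by their difference vector."""
    groups = defaultdict(list)
    for s in combinations(grid, N):
        groups[difference_vector(s)].append(s)
    return groups

def solve_group(sequences):
    # placeholder for solving the coupled inchworm equations of one group
    return len(sequences)

grid = range(-10, 11)                     # grid points, in units of the time step
groups = group_sequences(grid, N=2)
with ThreadPoolExecutor() as pool:
    sizes = list(pool.map(solve_group, groups.values()))
print(len(groups), sum(sizes))            # 20 decoupled groups, 210 sequences in total
```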
§ NUMERICAL EXPERIMENTS
In this section, we evaluate our newly-proposed method using several numerical examples.
To begin with, we introduce the parameters used for the numerical tests.
For the coupling intensity between spins, the operator V^(k) is simply a scaled Pauli matrix:
V^(k) = J^(k)σ_z^(k)
where J^(k) indicates the coupling intensity between the kth spin and its neighboring spins.
The observable is chosen to be O_s = σ_z^(k) for k=1,…,K, respectively.
In <ref>, the two-point correlation functions B^(k)(τ_1,τ_2) are set to be the same for every k:
B^(k)(τ_j,τ_j') = B^*(Δτ)
= 1/π∫_0^∞ J(ω) [coth(βω/2) cos(ωΔτ) - i sin(ωΔτ)] dω,
where Δτ = |τ_j | - |τ_j'| and J(ω) is the spectral density of the harmonic oscillators in the bath.
In this paper, we take a discretized form of the Ohmic spectral density:
J(ω) = π/2 ∑_{l=1}^{L} c_l^2/ω_l δ(ω - ω_l),
where L is the number of harmonic oscillators and is set to be 400 in all our tests.
The coupling intensity c_l and frequency of each harmonic oscillator ω_l are given by
ω_l = -ω_c ln(1 - (l/L)[1-exp(-ω_max/ω_c)]),
c_l = ω_l √((ξω_c/L)[1-exp(-ω_max/ω_c)]).
The values of the parameters, including the Kondo parameter ξ, the primary frequency of the harmonic oscillators ω_c, and the maximum frequency ω_max, will be given later for each experiment.
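As an illustration of this setup, the sketch below builds the discretized bath and evaluates the two-point correlation function for the delta-discretized spectral density. The coth/imaginary-part form of B used here is the standard spin-boson expression and should be read as an assumption of this sketch rather than a statement of the paper's exact conventions.

```python
import numpy as np

def discretize_bath(xi, omega_c, omega_max, num_modes=400):
    """Frequencies ω_l and couplings c_l of the harmonic oscillators,
    following the discretization formulas above."""
    l = np.arange(1, num_modes + 1)
    scale = 1.0 - np.exp(-omega_max / omega_c)
    omega = -omega_c * np.log(1.0 - (l / num_modes) * scale)
    c = omega * np.sqrt((xi * omega_c / num_modes) * scale)
    return omega, c

def bath_correlation(dtau, beta, omega, c):
    """B(Δτ) for J(ω) = (π/2) Σ_l (c_l²/ω_l) δ(ω - ω_l); the standard
    coth(βω/2) cos - i sin integrand is assumed."""
    weight = 0.5 * c**2 / omega
    coth = 1.0 / np.tanh(0.5 * beta * omega)
    return np.sum(weight * (coth * np.cos(omega * dtau) - 1j * np.sin(omega * dtau)))

# parameters matching the convergence tests: ξ = 0.2, ω_c = 2.5, ω_max = 4ω_c, β = 5
omega, c = discretize_bath(xi=0.2, omega_c=2.5, omega_max=10.0)
print(bath_correlation(0.5, beta=5.0, omega=omega, c=c))
```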
In addition to the above physical parameters,
three numerical parameters need to be specified to carry out the simulation,
including two truncation parameters (M̅ for the system-bath couplings and N̅ for the interspin couplings) and the time step Δ t.
The convergence of the numerical results with respect to these parameters will be studied in the following subsection.
§.§ Convergence tests
This section carries out convergence tests with respect to the three numerical parameters M̅, N̅ and Δ t, where M̅ and N̅ are the two truncation parameters and Δ t is the time step.
All spins in the chain are prepared in the state |+1⟩; in other words, ς^(k) = +1 for k=1,…,K in <ref>.
In the spin-boson model with a single spin,
the convergence with respect to the parameter M̅ has been studied numerically in <cit.>,
where it was shown that the convergence of the inchworm method was much faster than the Dyson series.
Here we will carry out a numerical test of the convergence with respect to M̅ by considering a 5-spin system. We choose the time step to be Δ t = 0.2.
Other parameters are chosen as follows:
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c, N̅ = 2,
ϵ^(k) = 1,
Δ^(k) = 1,
J^(k) = 0.2,
∀ k = 1,…,5.
Our numerical results are given in <ref>,
which shows the evolution of ⟨σ_z^(k)⟩ for k = 1,⋯,5.
Note that due to the symmetry of the spin chain, we have ⟨σ_z^(1)(t)⟩ = ⟨σ_z^(5)(t)⟩ and ⟨σ_z^(2)(t)⟩ = ⟨σ_z^(4)(t)⟩ for all t, and therefore only three figures are shown in <ref>.
These figures show fast convergence with respect to M̅ for this set of parameters,
due to the use of the inchworm method.
The curves for M̅ = 3 and M̅ = 5 are almost on top of each other,
while some slight differences can be observed for the computation with M̅ = 1,
which is less accurate.
We now fix M̅ and consider the convergence with respect to N̅.
We again consider a chain of 5 spins
and choose the time step to be Δ t = 0.2.
Other parameters are
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c, M̅ = 3,
ϵ^(k) = 0,
Δ^(k) = 1,
J^(k) = 0.5,
∀ k = 1,…,5.
The results for N̅=2,3,4,5 are shown in <ref>.
In general,
due to the numerical sign problem,
for longer-time simulations,
larger values of N̅ are needed to obtain accurate results.
For the first and the last spins, since they are coupled to only one neighboring spin, the results for N̅ = 3 are already of good quality up to t = 5.
For the remaining three spins,
the results for N̅ = 4 and N̅ = 5 almost coincide,
showing the convergence for the coupling intensity J^(k) = 0.5 up to t = 5.
Further increasing N̅ does not significantly improve the results.
Additionally, the convergence test is also carried out for the time step Δ t, with the parameters of the 5-spin Ising chain being
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c,
M̅ = 3,
N̅ = 2,
ϵ^(k) = 1,
Δ^(k) = 1,
J^(k) = 0.2,
∀ k = 1,…,5.
We perform simulations with time steps Δ t = 0.4, 0.2, 0.1, and 0.05 and present the results in <ref>.
Note that for M̅ = 3 and N̅ = 2,
according to our analysis in <ref>,
the computational cost is estimated by 𝒪(L^7) with L being the total number of time steps.
Therefore, to save computational time,
we run the simulation only up to t = 3.
It can be observed that for our second-order numerical method,
the time step Δ t = 0.2 can give sufficiently accurate results.
Such a time step will be taken for all the simulations in the following subsections.
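For reference, the observed order of convergence at a fixed time can be estimated from three successive time steps with a Richardson-type formula, as in the short sketch below. The numerical values are placeholders for illustration, not data read off the figures.

```python
from math import log2

def observed_order(u_h, u_h2, u_h4):
    """Estimate the convergence order from results computed with time steps
    h, h/2 and h/4 (Richardson-type estimate)."""
    return log2(abs(u_h - u_h2) / abs(u_h2 - u_h4))

# placeholder observable values at a fixed time for Δt = 0.4, 0.2, 0.1
print(observed_order(0.4123, 0.4051, 0.4032))   # ≈ 1.9, consistent with second order
```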
§.§ Numerical tests for different coupling intensities
In this section, we conduct numerical experiments to examine the effects of varying coupling intensities between spins.
We again consider the 5-spin Ising chain with the following parameters:
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c,
M̅ = 3,
N̅ = 4,
ϵ^(k) = 1,
Δ^(k) = 1,
∀ k = 1,…,5.
As mentioned previously,
the time step is chosen as Δ t = 0.2, which is sufficient to guarantee a small truncation error.
We again set J^(k) to be the same for all k = 1,…,5.
Three values J^(k) = 0.2, 0.4, 0.6 are considered in our experiments,
and the results are shown in <ref>, with all spins initially prepared in the state |ς^(k)⟩ = |+1⟩ for all k.
Again, our results correctly reflect the symmetry of the Ising chain,
and therefore only three lines are plotted in each figure.
For the purpose of comparison, we also include the result for J^(k) = 0,
meaning that all the spins are decoupled.
In this case,
the evolution of the observable is identical for all the spins and coincides with that of the spin-boson model studied in <cit.>.
Generally, for higher coupling intensity J^(k),
the discrepancy between spins is more significant,
and they differ more from the decoupled case.
In particular,
when J^(k) = 0,
all the curves coincide as predicted.
It can also be observed that the curves for the first and the last spins are more separated from those of the other three spins, especially in the initial stage of the dynamics.
This is because the two spins at the ends of the chain interact with only one neighbor instead of two.
In all cases, the interaction between the spins and the bath causes the amplitude of the fluctuations to decrease as the system evolves.
Additionally, we also carry out an experiment in which the first spin is initially in the state |ς^(1)⟩ = |-1⟩ and all other spins have the initial state |ς^(k)⟩ = |+1⟩ for k=2,…,5.
Such a spin chain is no longer symmetric.
The evolution of the observable ⟨σ_z^(k)(t)⟩ is plotted in <ref>.
In this experiment, when J^(k)=0,
Spins 2 to 5 are physically identical,
so there are only two distinct curves in the figure.
For non-zero coupling intensities between spins,
it is clear that the behavior of the first spin is affected by the other spins.
The local minimum of the blue curves around t = 2.2 is noticeably higher as the coupling intensity J^(k) increases.
Similar to <ref>, the separation of the curves for Spins 2 to 5 also becomes more pronounced for stronger coupling between spins.
§.§ Simulation of a long Ising chain
This section aims to study the behavior of a long spin chain, in which the middle part can mimic the behavior of an infinite Ising chain,
and at the same time one can observe the end effects.
We consider Ising chains of 50 and 100 spins, respectively.
The parameters of all the spins are set to be the same.
Under such settings,
we anticipate observing very similar behaviours for the spins near the center of the chain.
Note that in our method, if the spins and baths have the same physical parameters,
the computational cost grows only linearly as the number of spins increases.
The parameters used in this experiment are
ξ = 0.2,
β = 5,
ω_c = 2.5,
ω_max = 4ω_c,
M̅ = 3,
N̅ = 4,
ϵ^(k) = 0,
Δ^(k) = 1,
J^(k) = 0.5,
∀ k = 1,…,K,
with K=50 or K=100.
The time step is chosen as Δ t = 0.2.
For comparison,
we also carry out the experiments for the same parameters with K=1 and K=5.
Since all spins have the same parameters,
the inchworm equation needs to be solved only once.
For longer spin chains, more computational cost is needed for the summation of the full propagators.
But even so, according to our analysis in <Ref>,
the summation only takes a small proportion of the computational time.
Our numerical results are presented in <ref>.
In general, the case of a single spin is clearly different from the interacting spin chains,
while the three spin chains show very similar behaviors.
Due to the end effect,
the first and the last spins have a slightly higher flipping frequency.
Between the third and the third-to-last spin, the curves for all spins are indistinguishable in the plots, and in this example, the five-spin case already represents a long spin chain well.
§ CONCLUSION AND DISCUSSION
We proposed a method to simulate an Ising chain coupled with harmonic baths.
The algorithm is derived in two steps: first, a Dyson series expansion decomposes the system into spin-boson units, reducing the problem to single-spin problems; second, the inchworm algorithm is applied to evaluate the evolution of each spin-boson unit, with special “crosses” representing the spin-spin couplings.
The algorithm leads to a sum of diagrams, and a special summation order based on the distributive law is proposed for faster evaluation of this sum.
Under this summation order, the most time-consuming step is the computation for a single spin-boson unit.
The computational cost is then estimated as 𝒪(L^{M̅+N̅+2}), where L is the number of time steps and M̅, N̅ are the two truncation parameters of the series expansions.
Numerical experiments are carried out to validate our method.
While this paper focuses mainly on the Ising chain coupled with harmonic baths, a similar idea can be carried over to more complicated interacting systems in a way similar to <cit.>.
Also, since our approach can be regarded as a perturbation theory, it is mainly applicable to short-time simulations.
Long-time simulations can be made possible by truncating the memory kernel, as in the iterative QuAPI method.
These extensions will be considered in our future work.